Valerie Veatch is a filmmaker. That distinction matters more than it might seem, because filmmakers are trained to notice what gets left out of a frame just as much as what gets put in. When OpenAI released its Sora text-to-video model to the public in 2024, Veatch did what a lot of curious, skeptical artists did: she looked. She joined the online communities where creators were sharing their generations, watched the outputs accumulate, and started asking the questions that most breathless tech coverage was not asking. What she found troubled her in ways that went beyond the usual debates about copyright or job displacement.
The generative AI conversation has, for years now, been dominated by two camps: enthusiastic adopters who see these tools as democratizing creativity, and critics who focus on intellectual property theft and labor disruption. Both framings are legitimate. Neither one is sufficient. What Veatch's perspective points toward is something older and more structurally dangerous: the idea that certain kinds of human expression, certain bodies, certain aesthetic traditions, and certain ways of making meaning are worth preserving in a training set, and others simply are not. That is not a neutral technical decision. It is a curatorial one, made by a small number of people at a small number of companies, and it carries consequences that compound over time.
Generative AI models like Sora learn from data. The outputs they produce reflect the inputs they were trained on, which means they reflect the choices, biases, and blind spots of whoever assembled those inputs. When a model consistently renders human bodies in particular ways, gravitates toward particular visual languages, or struggles to represent certain cultural aesthetics with any fidelity, that is not a bug in the colloquial sense. It is the system working exactly as designed, just with values baked in that were never made explicit to the public.
This is where the eugenics framing, uncomfortable as it is, becomes analytically useful rather than merely provocative. Eugenics was never only about biology. At its core, it was about deciding which human traits, which lineages, which ways of being were worth perpetuating and which could be quietly allowed to disappear. A training dataset that systematically underrepresents disabled artists, Indigenous visual traditions, non-Western aesthetic forms, or working-class creative cultures is making a version of that same determination, just through the language of data curation rather than science. The mechanism is different. The logic rhymes.
The second-order consequence here is one that the industry is poorly equipped to reckon with. As generative AI tools become cheaper and more accessible, they will increasingly shape what gets made, not just how it gets made. Platforms optimized for AI-generated content will reward outputs that the model produces fluently, which means rewarding the aesthetic preferences already embedded in the model. Human creators who work in traditions the model handles poorly will face a structural disadvantage that has nothing to do with the quality of their work. Over time, this creates a feedback loop: underrepresented aesthetics generate less engagement on AI-optimized platforms, which reduces the incentive to include them in future training data, which makes the next model even less capable of representing them.
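The compounding dynamic described above can be made concrete with a toy simulation. This is a sketch under stated assumptions, not a model of any real platform: the engagement rates, the fluency bonus for majority aesthetics, and the retraining rule are all invented parameters chosen only to illustrate how a small initial disparity can shrink toward zero over repeated training cycles.

```python
# Toy model of the representation feedback loop: an aesthetic's share of
# the training set shrinks each cycle because engagement favors whatever
# the model already renders fluently. All parameters are illustrative
# assumptions, not measurements of any real platform or model.

def run_loop(share: float, fluency_bonus: float, rounds: int) -> list[float]:
    """Track the training-set share of an underrepresented aesthetic.

    Each round: engagement is proportional to the model's fluency with
    each aesthetic (approximated here by its current training share,
    with the majority aesthetic earning a fluency bonus), and the next
    training set is re-weighted toward whatever earned engagement.
    """
    history = [share]
    for _ in range(rounds):
        engagement_minor = share
        engagement_major = (1 - share) * (1 + fluency_bonus)
        # The next training set mirrors the engagement split.
        share = engagement_minor / (engagement_minor + engagement_major)
        history.append(share)
    return history

trajectory = run_loop(share=0.10, fluency_bonus=0.5, rounds=5)
print([round(s, 3) for s in trajectory])  # the minority share shrinks every round
```

Even with a modest 50% engagement advantage for majority-fluent outputs, a 10% minority share falls below 2% within five retraining cycles. The point is not the specific numbers but the shape of the curve: the loop is self-reinforcing, so the disparity compounds rather than stabilizes.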
Veatch's experience matters precisely because she came to Sora with curiosity rather than hostility. She was not looking for reasons to reject the technology. She was looking for what it could do. What she found, and what a growing number of artists across disciplines are articulating, is that the promise of democratization conceals a more complicated reality. Yes, these tools lower certain barriers to entry. They also raise new ones, and the new barriers tend to fall along lines that are depressingly familiar.
The artists building critical frameworks around generative AI are not Luddites. Many of them use digital tools extensively. What they are resisting is the specific claim that these systems are neutral, that they simply reflect human creativity in aggregate, that any disparities in their outputs are incidental rather than structural. That claim does not survive contact with how these systems are actually built.
OpenAI and its peers are not going to solve this problem voluntarily, at least not at the speed or scale the problem demands. The incentive structure points in the opposite direction: broader, more homogenized outputs are easier to monetize, easier to moderate, and easier to defend legally. The artists asking hard questions about whose creativity gets encoded into these systems, and whose gets quietly filtered out, are doing work that regulators, journalists, and the public have barely begun to catch up with. The question is not whether generative AI will reshape the creative landscape. It already is. The question is who gets to exist in the landscape it builds.
References
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021) – On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
- Crawford, K. (2021) – Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
- Buolamwini, J., & Gebru, T. (2018) – Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
- OpenAI (2024) – Sora: Creating video from text