cross-posted from: https://hexbear.net/post/4742235

Democratization of Capitalist Values

Democratization is a word often used in connection with technological advancement and the proliferation of open-source software. Even here, on the platform where this discussion is unfolding, we are participating in a form of “democratization” of the means of “communication”. This process of “democratization” is often framed as a kind of universal, or near-universal, access for the masses to engage in building and protecting their own means of communication. I’ve talked at length in the past about the nature of the federated, decentralized communications movement. One of the striking aspects of this movement is how much of the shape and structure of this democratization of communication is shared with the undemocratic, corporate-owned means of communication. Despite being presented with the underlying protocols necessary to create a communication experience that fosters true community, the choice is made instead to take the shape and structure of centralized, corporate-owned speech and community platforms and “democratize” them, without considering the social relations engendered by those platforms.

As Marxists, this phenomenon shouldn’t seem strange to us, and we should be able to identify it in other instances of “democratization”. It sits at the heart of Marxist analysis: the relationship between the mode of production and the superstructure of society. These “democratized” platforms mirror their centralized sisters and are imbued with the very same capitalist values, in an environment that stands in conflict with those values. If this means of democratizing online community and communication were truly democratic, it would be a system that requires the least possible technical knowledge and resources. However, the operators who sit at the top of each of these hosted systems exist higher up the class divide, because they must operate a system designed to work at scale, with a network effect at the heart of its design. This is how you end up with the contradictions that lie beneath each of these systems. Mastodon.social is the most used Mastodon instance, and its operators have a vested interest in maintaining that position, as it allows them and their organization to maintain control over the underlying structure of Mastodon. Matrix.org is the most used instance of its system for extremely similar reasons. Bluesky has structured itself in such a way that it sits on the central throne of its implementation. They have all obfuscated the centralization of power by covering that throne with the cloak of “democratization”. Have these systems allowed the fostering of communities that would otherwise drown in the sea of capitalist online social organizing? There is no doubt. Do they require significant organizational effort and resources to maintain? Absolutely. Are they still subject to a central, technocratic authority, driven by the same motivations as their sister systems? Yes, they are.

This brings me to AI, its current implementation and design, and its underlying motivations and desires. These systems suffer from the same issue that this very platform suffers from: they are stained with the values of capital at their heart, and they are by no means a technology that is “neutral” in its design or implementation. It is foolish to say that “Marxists have never opposed technological progress in principle”, in that this statement handwaves away the critical view of technology in the Marxist tradition. Marx spends more than 150 pages (a tome in its own right) on the subject of technology and technological advancement under capitalism in Volume 1 of Capital, wherein he outlines how the worker becomes subjugated to the machine. I find that this quote from Marx drives home my position, and I think the position of others, regarding the use of AI in its current formation (emphasis mine).

The lightening of the labour, even, becomes a sort of torture, since the machine does not free the labourer from work, but deprives the work of all interest. Every kind of capitalist production, in so far as it is not only a labour-process, but also a process of creating surplus-value, has this in common, that it is not the workman that employs the instruments of labour, but the instruments of labour that employ the workman.

— Capital, Volume 1, Part IV: Production of Relative Surplus-Value, Chapter 15: Machinery and Modern Industry, Section 4: The Factory

What is it, at the core of both textual and graphical AI generation, that is being democratized? What has the capitalist sought to automate in the pursuit of Large Language Model research and development? It is the democratization of skill. It is the alienation of the artist from the labor of producing art. As such, it does not matter that this technology has become “democratized” via open-source channels, because at the heart of the technology, its intention and design, its implementation and commodification, lies the alienation of the artist from the process of creating art. It is not the “democratization” of “creativity”. There are scores of artists throughout our history whose art is regarded as creative despite its simplicity in both execution and level of required skill.

One such artist who comes to mind is Jackson Pollock, an artist who is synonymous with paint splattering and a major contributor to the abstract expressionist movement. His aesthetic has been described as a “joke” and void of “political, aesthetic, and moral” value, descriptions used as a means of denigrating the practice of producing art. Yet, it is as you describe in your own words: “Creativity is not an inherent quality of tools — it is the product of human intention”. One of the obvious things these generative models exhibit is a clear and distinct lack of intention. I believe that this lack of “human intention” is precisely what drives people’s repulsion from the end product of generative art. It also becomes “a sort of torture” under which the artist becomes employed by the machine. There are endless examples of artists whose roles as creators have been reduced to that of generative blemish control agents, cleaning up the odd, strange, and unintended aspects of the AI process.

Capitalist Mimicry and The Man In The Mirror

One thing often cited as a mark in favor of AI is the emergence of Deepseek onto the market as a direct competitor to leading US-based AI models. Its emergence was a massive and disruptive debut, slicing nearly $2 trillion in market cap off the US tech sector in a matter of days. This explosive out-of-the-gate performance was not the result of any new ideologically driven reorientation in the nature and goals of generative AI modeling philosophy, but of the refinement of training processes to meet the restrictive conditions created by embargoes on western AI processing technology in China.

Deepseek has been hailed as proof of what can be achieved under the “Socialist model” of production, but I’m willing to argue that this isn’t as true as we wish to believe. China is a vibrant and powerful market economy, one that is governed and controlled by a technocratic party with a profound understanding of market forces. However, their market economy is not any more or less susceptible to the whims of capital than any other market. One recent example is the speculative nature of their housing market, which the state is resolving through a slow deflation of the sector and seizure of assets, among other measures. I think it is safe to argue that much of the demand in the Chinese market economy is forged by external capitalist desires. As the world’s forge, the heart of production in the global economy, their market must meet the demands of external capitalist forces. It should be remembered here that the market economy of China operates within a cage, with no political influence on the state, but that does not make it immune to the demands and desires of capitalists at the helm of states abroad.

Yes, Deepseek is a tool set released in an open-source way. Yes, Deepseek is a tool set that one can use at a much cheaper rate than competitors in the market, or roll your own hosting infrastructure for. However, what is the tool set exactly, what are its goals, who does it benefit, and who does it work against? The incredible innovation under the “Socialist model” still performs the same processes of alienation that capitalists in the west are searching for, just at a far lower cost. This demand is one of geopolitical economy: using free-trade principles, Deepseek intends to drive demand away from US-based solutions and into its coffers in China. The competition created by Deepseek has ignited several protectionist practices by the US to save the most important driver of growth in its economy, the tech sector. The new-found efficiency of Deepseek threatens not just the AI sector inside of tech, but the growing connective tissue sprung up around the sector. The bloated and wasteful implementation of OpenAI’s models gave rise to growing demand for power generation, data centers, and cooling solutions, all of which lost heavily when Deepseek arrived. So at its heart, it has not changed what AI does for people, only how expensive AI is for capitalists in year-to-year operations. What good is this open-source tool if what is being open-sourced are the same demands and desires of the capitalist class?

Reflected in the production of Deepseek is the American capitalist: they stand as the man in the mirror, while the market economy of China does what a market economy does: compete for territory in hopes of driving out the competition and becoming a monopoly agent within the space. This monopolization process can still be distributed through open-source means. Just as in my example above, of the social media platforms democratizing the social relations of capitalist communal spaces, so too is Deepseek democratizing the alienation of artists and writers from their labor.

They are not democratizing the process of Artists and Laborers training their own models to perform specific and desired repetitive tasks as part of their own labor process in any form. They hold all the keys because even though they were able to slice the head from the generative snake that is the US AI Market, it still cost them several million dollars to do so, and their clear goal is to replace that snake.

A Renaissance Man Made of Metal

Much in the same way that the peasants of the past lost access to the commons and were forced into the factories under this new, capitalist organization of the economy, the artist has been undergoing a similar process. However, instead of toiling away on their plots of land in common, giving up a tenth of their yield each year to their lord, and providing a sum of their hourly labor to work the fields at the manor, the Artist historically worked at the behest of a Patron. The high watermark for this organization of labor was the Renaissance period. Here, names we all know and recognize, such as da Vinci, Michelangelo, Raphael, and Botticelli were paid by their Patron Lords or at times the popes of Rome to hone their craft and in exchange paint great works for their benefactors.

As time passed and the world industrialized, the system of patronage faded and gave way to the art market, where artists could sell their creative output directly to galleries and individuals. With the rise of visual entertainment, and our modern entertainment industry, most artists’ primary income now stems from the wage labor they provide to the corporation by which they are employed. They require significant training: years and decades of practice and development. The reproduction of their labor has always been a hard nut to crack, until very recently. Some advancements in mediums shifted the demand for different disciplines: 2D animators found themselves washed up on the shores of the 3D landscape, wages and benefits depleted, back on the bottom rung learning a new craft after decades of momentum via unionization in the 2D space. The transition from 2D to 3D in animation is a good case study in the process of proletarianization, very akin to the drive to teach students to code decades later in a push toward the STEM sector. Now, both of these sectors of laborers are under threat from the Metal Renaissance Man, who operates under the patronage of his corporate rulers, producing works at their whim, and at the whim of others, for a profit. This Mechanical Michelangelo has the potential to become the primary source of artistic and, in the case of code, logical expression, while the artists and coders who trained him become his subordinates, cleaning up the mistakes and hiding the rogue sixth finger and toe as needed.

Long gone are the days of patronage, and soon long gone will be the days of laboring for a wage to produce art. We have to, as revolutionary Marxists, recognize that this contradiction is one that presents to artists, as laborers, the end of their practice, not the beginning or enhancement of it. It is this mimicry, in which the current technological solutions participate, that strikes at the heart of the artists’ issue. Hired for their talent, then used to train the machine by which they will be replaced or reduced, thus limiting the economic viability of the craft for a large portion of the artistic population. The only other avenue for sustainability is the art market, which has long been a trade backed by the laundering of dark money and the sound of a roulette wheel, a place where “meritocracy” rules with an iron fist. It is not enough for us to look at the mechanical productive force that generative AI represents and brush it aside as simply the wheels of progress turning. To do so is to alienate a large section of the working class, a class whose industry constitutes a share of GDP comparable to sectors like agriculture.

I have no issue with the underlying algorithm, the attention-based training, that sits at the center of this technology. It has done some incredible things for science, where focused and specialized uses of the technology are applied. Under an organization of the economy void of capitalist desires and of aims to alienate workers from their labor, these algorithms could be utilized in many ways. Undoubtedly, organizations like the USSR’s artists’ unions would be central in the planning and development of such generative AI technology under socialism. However, every attempt to restrict and manage the use of generative AI today is simply an effort to prolong the full proletarianization of the arts. Embracing it now only signals your allegiance to that process.
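
For readers unfamiliar with what “attention-based training” refers to, here is a minimal NumPy sketch of the scaled dot-product attention operation introduced in the 2017 transformer paper. The dimensions and values are illustrative, not taken from any real model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each query scores every key; the scores become mixing weights over V.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy sizes: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one mixed value vector per query
```

The operation itself is indifferent to its inputs; whether the vectors represent protein sequences or an artist’s portfolio is a question of training data and intent, which is the distinction the essay is drawing.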

  • amemorablename@lemmygrad.ml · 1 day ago

    I think in general, the focus of the discussion needs to move away from “democratization” as phrasing. It seems like it gets used as a contrast to hegemony, but I’m not sure it’s actually what we’re talking about. Maybe it’s a pedantic thing (but in my defense, much of this discussion is very pedantic). My understanding of democratic would be that there’s an organized process going on. What seems to be the subject of discussion is more like free-for-all vs. corporate capture. How can we even talk about democratic processes in the context of living within a capitalist society, if we don’t control the means of production and if we are talking about individuals making choices, without even any kind of party direction or discipline behind it?

    Perhaps this is where some of my confusion comes from as to the staunchness of “anti” that some people have about certain types of AI. Suppose the conclusion from all of this were to be that this technology in the generative form is currently inseparable from capitalism and its exploitation, and that nothing good can come from it while capitalism is the dominant economic mode. By what means are we supposed to oppose it and to what end? Boycotting works for some things, but it’s not even all that mainstream a tech to begin with and companies are pumping tons of money into it as aspirational tech for a market that hasn’t fully materialized yet. Creating an environment of shaming and fear around it, which seems to be the more common direction of those who fervently oppose it, only sends the message that if you do want to use it, then don’t tell anyone that you are doing so. It doesn’t send a message in clear and simple terms as to why it is worth boycotting and what outcome is going to be achieved in concrete terms and how this outcome can be achieved in the first place. Nor does it attempt to engage with, or investigate, the reasons why some are drawn to it in the first place. (This is not a criticism of you, OP, but a criticism of the general atmosphere of how anti-generative-AI often seems to look in practice.)

    Perhaps in part because of its sometimes-intersection with NFT and crypto grifters in marketing, it seems to have become something of an anti-capitalist’s punching bag. A kind of easy target for directing ire at big tech and the problems of capitalism. But many of the problems that come up in relation to it already existed to some degree prior to AI, such as the tendency for the arts to be more about mass-producing things criticized as low quality than about anything inherently artistic. Or how the western internet (I can’t speak for elsewhere) has ridiculous competition and content mill patterns of behavior. Generative AI accelerates and exacerbates these problems, but it is not the originator of them. So it can come out a bit scapegoating in how it looks. A lot of energy expended in saying “this is too far” as if the other stuff hasn’t already been happening and leading to it.

    I don’t want to trivialize the downsides, but at the same time, I’m concerned that it is much ado about nothing relative to the larger problems of capitalism that need addressing. That those who oppose generative AI are themselves getting swept up in some of the marketing and believing it a bit too much; treating its potential as faster-moving and larger than it is.

    • RedWizard [he/him, comrade/them]@hexbear.net (OP) · 21 hours ago

      There are two things at play here that, I think, need to be addressed. There is the methodology which can be applied to write a computer program that can “learn” a task, and then there are the interactive experiences that allow you to create your own output with a pretrained model. The former is a tool in all regards, one that is “neutral” in its composition because it can be applied in broad ways across many different disciplines. The latter — a model trained on terabytes of art — exists only to alienate the artist as the wage labor worker in an attempt to reduce the overall reproduction cost of their labor. These “neural networks” can be trained to do all kinds of tasks, and in 2017 a paper was released entitled “Attention Is All You Need”, on which most of these generative tool sets are built. A great example of this underlying technology being used outside the creative fields is the work being done on proteins.

      The researchers trained and tested ProtGPS on two batches of proteins with known localizations. They found that it could correctly predict where proteins end up with high accuracy. The researchers also tested how well ProtGPS could predict changes in protein localization based on disease-associated mutations within a protein. Many mutations — changes to the sequence for a gene and its corresponding protein — have been found to contribute to or cause disease based on association studies, but the ways in which the mutations lead to disease symptoms remain unknown.

      Figuring out the mechanism for how a mutation contributes to disease is important because then researchers can develop therapies to fix that mechanism, preventing or treating the disease. Young and colleagues suspected that many disease-associated mutations might contribute to disease by changing protein localization. For example, a mutation could make a protein unable to join a compartment containing essential partners.

      This is clearly an interesting and potentially revolutionary shift in medical technology, one that improves lives. I would see no objection to the use of this tool for this intention, it is clearly a net positive for the world. Object detection is another application for this technology that has broad use cases in terms of safety, but that same training data can be weaponized in automated weapon targeting systems as well. It is the intention that is the issue at hand here. What is the intention of building a model that can reproduce finished works of “art”? We would ask the same question about an AI model trained to identify an individual instantly simply by seeing portions of their face. What is the intention of this model, and why are we using it to identify people?

      As you put it: “such as the tendency for the arts to be more about mass-producing things criticized as low quality than about anything inherently artistic.” You are conflating two entirely different forms of artistic expression. There is art produced by laborers in what amounts to art factories, and then there is the more academic and philosophical process that creates Art. The objections I see surrounding AI art come from the former, not the latter. Artists who complain that an AI model has appropriated their “style” come from a place of knowing it diminishes their market appeal. Some artists are hired explicitly because they bring a unique style to a project. If an AI model can simply appropriate that style, however, then there is no need to hire that artist to begin with. This is not even an imaginative example; it is an example I have seen expressed over and over. In fact, when an artist or a team of artists works with the attention-based training algorithms to produce interesting media, I have no real issue with it, but I also know that for someone to truly own their labor in that way is incredibly difficult, if not entirely impossible.

      I don’t want to trivialize the downsides, but at the same time, I’m concerned that it is much ado about nothing relative to the larger problems of capitalism that need addressing. That those who oppose generative AI are themselves getting swept up in some of the marketing and believing it a bit too much; treating its potential as faster-moving and larger than it is.

      I don’t think you’re trivializing at all. This is a constructive discussion, and it should be had. This technology has a kind of mist form to it, where the nuts and bolts of how and why it works, where the tool begins and the weapon ends, are difficult to parse and identify. I don’t believe it is “much ado about nothing”, however, because training models that produce the kinds of images, text, reasoning, and logic we have now can only be done by capital owners. The mass-market “tool set” will always be a weapon of the capitalist to displace wage laborers producing some of the more complicated forms of output, art and code. There doesn’t need to be a high level of potential or even fast-moving development; it just has to be good enough for the consumer to accept uncritically, and I think the technology is good enough now that it passes that sniff test.

      • amemorablename@lemmygrad.ml · 17 hours ago

        The point about reproduction of style is fair. Though I will say, when it comes to intent, its inclusion appears to be a bit more complicated than wanting to replace artists and their style. These models depend on vast quantity and quality of data to be generalizable, and then they suffer from janky ways of interfacing with the model in order to get what you want. Artist styles are one way to work around the problem that most datasets are not organically going to label style in a meticulous way; trying to do so manually for a dataset at scale could be a years-long organized effort, requiring input from professional artists (e.g. to get to the point that you could compose a style out of very fine-grained elements). And if there is no style control, then it is more of a gacha dopamine generator, which would still be entertaining to some, I imagine, but in my experience people usually want some semblance of control over what they’re getting. So far, the best I’ve seen on this is a case where the model can create a kind of blend out of multiple prompted-for artist styles, and some people focus on using this to be creative and make unique blends out of it, rather than just copying an artist’s style outright.

        LoRAs are more of an area that I could see being used with express intent to copy, especially because they can be applied to an already-trained model (they don’t require a full retrain of a model from the ground up for it to learn a particular style). Though they are also more feasible for a single artist to learn how to create on a consumer GPU, provided the model is open source, and I think services tend to shy away from offering a LoRA creation process to the user, if for no other reason than the loopholes it opens up in the limitations their models have (e.g. preventing people from making images that are actually illegal).
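
        The “no full retrain” property of LoRAs comes from low-rank adaptation: a small trainable update is added on top of frozen base weights. A toy NumPy sketch of the idea (all dimensions, names, and initializations here are illustrative):

```python
import numpy as np

# Stand-in for one frozen weight matrix of a large pretrained model.
d_out, d_in, rank = 8, 8, 2
rng = np.random.default_rng(1)
W_frozen = rng.standard_normal((d_out, d_in))

# LoRA trains only two small matrices whose product B @ A is a
# low-rank update layered on top of the untouched base weights:
# rank * (d_in + d_out) trainable numbers instead of d_in * d_out.
A = 0.01 * rng.standard_normal((rank, d_in))  # trainable
B = np.zeros((d_out, rank))                   # trainable, starts at zero

def forward(x):
    # Base output plus the adapter's contribution.
    return (W_frozen + B @ A) @ x

x = rng.standard_normal(d_in)
# With B initialized to zero the adapter is inert, so the adapted
# model starts out behaving exactly like the base model.
print(np.allclose(forward(x), W_frozen @ x))  # True
```

        Because only A and B are trained, the adapter fits on a consumer GPU and can be swapped onto the base model after the fact, which is why style-copying via LoRA is feasible for individuals, as described above.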

        None of this is to say it isn’t a problem and I don’t doubt that corporations are looking at ways to streamline and reduce workload. But I don’t think the tech is at a point where that process is as straightforward as it might seem. The question there for me is more so: to what extent will they try to push it anyway? The normalization of “content mill” “content” may make it easier to push it. But there is also the question of saturation. Though the creation of these models is mostly in the hands of capitalists, the usage of them is not. And how much use can the capitalists get out of streamlining via generative AI if people can just go to the models themselves and get the jank with their own personalized tweaks? Or if they can go to some small internet thing, where somebody makes the niche thing that they like?

        I’m sure some kind of process is going to be attempted, similar to what happened with streaming, where they will try to drive people to AI with the offering of free and/or cheap services, and then try to drive up the price when they have a big market hold. But this, I think, is where the significance of events like Deepseek comes in. Or before that, the Stable Diffusion impact on open source, around which that memo leaked from Google about having “no moat”. The more efficient the tech gets, the harder it will be for them to do the industry-capture thing, and the more it starts leaning in the direction that the average person may actually be able to train their own stuff, not just run it.

        I feel like I’m meandering a bit, but to return to the point about intent:

        It is the intention that is the issue at hand here. What is the intention of building a model that can reproduce finished works of “art”?

        This may be a bit murky. The answer seems obvious enough under the capitalist paradigm, that it’s about profit somehow and about replacing artists somehow. But I’m not sure it always started that way. From what I can find, the original “latent diffusion model” that started Stable Diffusion was made by researchers at Ludwig Maximilian University of Munich as part of the “CompVis Group”. This is what I can find on them: https://ommer-lab.com/

        Two of their listed affiliates, ELLIS and the Helmholtz Foundation, appear to have interests related to medicine and health. I did not check every affiliate in detail, but it’s interesting.

        Prior to that, there was also DALL-E by "Open"AI (starting out as non-profit and now being far from that). Apparently their original charter was like this:

        OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

        https://web.archive.org/web/20230714043611/https://openai.com/charter

        So for OAI, even in its earliest development, there seems to be a clear intent to replace. Stable Diffusion’s origins are not as clear. The research it came from seems to have been innocent enough. I’m not finding easily what Stable Diffusion’s stated goals were, though.

        This is aside from the fact that you could argue a model that “can reproduce finished works of ‘art’” has an implicit intent to replace. But in that regard, some of it may be the architecture itself. One would think that if that were always the express intent, they would have figured out how to do the same with text generation. But text generation models are completion models (more naturally functional as “continuing text you give it”, so better as co-writers by default) and have to be specially tuned and set up with special interface coding to act like a chatbot that can follow instructions. If I were talking personal preference, I’d prefer if image generation models were focused on being co-illustrators instead of start-to-finish generators. But it may be that Pandora’s box is already too open now on their current design.
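
        The “special interface coding” mentioned here is, at bottom, just a prompt template: the conversation is serialized into plain text and the completion model continues from the assistant’s marker. A minimal sketch (the marker strings are illustrative; every model family defines its own):

```python
# A base "completion model" only continues the text it is given; chat
# behavior is layered on by serializing the conversation into one string
# and letting the model complete from the assistant's turn. The marker
# strings below are illustrative, not any real model's template.
def to_chat_prompt(messages):
    parts = []
    for role, text in messages:
        parts.append(f"<|{role}|>\n{text}\n")
    parts.append("<|assistant|>\n")  # the model "completes" from here
    return "".join(parts)

prompt = to_chat_prompt([
    ("system", "You are a helpful assistant."),
    ("user", "Name one primary color."),
])
print(prompt)
```

        Instruction tuning then trains the model to treat that formatting convention as a dialogue, which is the “special tuning” step; the underlying operation is still next-token completion.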