• 1 Post
  • 14 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • Haikus in English have always felt like a gimmick to me more than anything else, FWIW. Now that I’m thinking about how Japanese flows with its syllables, they would probably make much more sense in that language because (for lack of a better way to put it) Japanese draws out each syllable more, while languages like English tend to slur things together. So I imagine in Japanese it’d make a lot more sense to have a particular syllabic limit, and you’d get much more out of it.


  • My point there wasn’t that modern art is bad. Just a side note about what contributed to it being pushed as much as it was at the time. Which apparently was the wrong tack anyway, since I guess you are not talking about popularity but something else.

    Is it possible you are on the autism spectrum? I mean nothing bad by that, to be clear. It’s just a kind of neurodivergence to me. But I ask because if I understand right, some people on the autism spectrum have this thing of taking things very literally. So I wonder because you mention picking up on literal things. The other thing I wonder is, is English your first language? That might contribute to English feeling clunky, if it isn’t your native tongue.

    As for enjoying poetry, I’m not sure what to say about that because I can write poetry myself and enjoy it to a point, but some of it feels like nonsense to me, like it’s hiding a lack of meaning behind flowery prose. Lemme see if I can do an example:

    Leaves crunching send signals into the air,

    Of autumn’s arrival,

    Carried on an eagle’s cry,

    While blackened hearts live free or die.

    ^ I don’t know what this is supposed to mean. I strung together some stuff that sounds vaguely metaphorical and like it might have a deeper meaning.

    Or sometimes poetry can feel up its own ass, acting like it’s deeper than it is. But I do think it has a purpose, which is expressing things that can be hard to express otherwise:

    Emotions blend together like red and blue,

    But don’t make purple.

    Disparate and disconnected,

    Unable to find sequence,

    They show the DNA of traumatic suffering.

    Here I’m trying to express something about how confusing emotions can be sometimes and how they may be harmed at times by trauma.

    I don’t know if I’m making myself clear or better understanding your meaning at all, but there’s an attempt.


  • If I understand right what you’re getting at, I think some of that comes from studying the medium and then developing an appreciation for when you notice somebody doing technically impressive stuff. The other form of it that I’m aware of, where people read a story and get deep into analysis of its symbolism and such, seems like a good half bullshit at least; I say half because while they might be constructing a legitimate metaphor out of it, it’s probably not what the artist had in mind and is more likely some form of projection on their part.

    Ultimately, people like different things and sometimes for different reasons. And although there are consistent technical elements to a given craft (I’m not going to act like artforms are all random choice), there’s also a certain amount of going by feel and a certain amount of “why did this person’s work become famous but this person’s didn’t? dunno.”

    Then there are those times when we actually have an answer for why something got pushed as it did: https://www.independent.co.uk/news/world/modern-art-was-cia-weapon-1578808.html

    It was recognised that Abstract Expressionism was the kind of art that made Socialist Realism look even more stylised and more rigid and confined than it was. And that relationship was exploited in some of the exhibitions.




  • The point about reproduction of style is fair. Though I will say, when it comes to intent, its inclusion appears to be a bit more complicated than wanting to replace artists and their style. These models depend on a vast quantity and quality of data to be generalizable, and then they suffer from janky ways of interfacing with the model to get what you want. Artist styles are one way around the problem that most datasets are not organically going to label style in a meticulous way, and trying to do so manually for a dataset at scale could be a years-long organized effort requiring input from professional artists (e.g. to get to the point where you could compose a style out of very fine-grained elements).

    And if there is no style control, then it is more of a gacha dopamine generator, which would still be entertaining to some, I imagine, but in my experience people usually want some semblance of control over what they’re getting. The best I’ve seen so far is a case where the model can create a kind of blend out of multiple prompted-for artist styles, and some people focus on using this to be creative and make unique blends, rather than just copying an artist’s style outright.

    LoRAs are more of an area that I could see being used with the express intent to copy, especially because they can be applied to an already-trained model (they don’t require a full retrain of the model from the ground up for it to learn a particular style). They are also more feasible for a single artist to learn how to do on a consumer GPU, provided the model is open source, and I think services tend to shy away from offering a LoRA creation process to the user, if for no other reason than the loopholes it opens in the limitations their models have (e.g. safeguards preventing people from making images that are actually illegal).
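    To illustrate the “no full retrain” point above, here’s a minimal numpy sketch of the core LoRA idea: keep the base weights frozen and learn only a small low-rank update on top. The dimensions, scaling factor, and random values are made-up for illustration, not any real model’s.

```python
import numpy as np

# Sketch of the LoRA idea (hypothetical shapes, not any real model's):
# instead of retraining a full weight matrix W, learn a low-rank update
# delta = B @ A and add it on top of the frozen base weights.

d_out, d_in, rank = 512, 512, 8           # rank << d_out, d_in

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))    # frozen base weights
A = rng.standard_normal((rank, d_in))     # trainable "down" projection
B = np.zeros((d_out, rank))               # trainable "up" projection (starts at 0)
alpha = 1.0                               # scaling factor for the adapter

W_adapted = W + alpha * (B @ A)           # effective weights with the adapter applied

# Only A and B need training/storage: far fewer parameters than W itself.
print(W.size, A.size + B.size)            # 262144 base params vs 8192 adapter params
```

    This is why a LoRA for a style can be trained and shared as a small file and merged into an existing model, instead of redoing the whole training run.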

    None of this is to say it isn’t a problem, and I don’t doubt that corporations are looking at ways to streamline and reduce workload. But I don’t think the tech is at a point where that process is as straightforward as it might seem. The question for me is more: to what extent will they try to push it anyway? The normalization of “content mill” “content” may make it easier to push. But there is also the question of saturation. Though the creation of these models is mostly in the hands of capitalists, the usage of them is not. And how much use can the capitalists get out of streamlining via generative AI if people can just go to the models themselves and get the jank with their own personalized tweaks? Or if they can go to some small internet thing, where somebody makes the niche thing that they like?

    I’m sure some kind of process is going to be attempted, similar to what happened with streaming: they will try to drive people to AI by offering free and/or cheap services, and then try to drive up the price once they have a big market hold. But this, I think, is where the significance of events like DeepSeek comes in. Or before that, Stable Diffusion’s impact on open source, and that leaked Google memo about having “no moat”. The more efficient the tech gets, the harder it will be for them to do the industry-capture thing, and the more it leans in the direction that the average person may actually be able to train their own stuff, not just run it.

    I feel like I’m meandering a bit, but to return to the point about intent:

    It is the intention that is the issue at hand here. What is the intention of building a model that can reproduce finished works of “art”?

    This may be a bit murky. The answer seems obvious enough under the capitalist paradigm, that it’s about profit somehow and about replacing artists somehow. But I’m not sure it always started that way. From what I can find, the original “latent diffusion model” that started Stable Diffusion was made by researchers at Ludwig Maximilian University of Munich as part of the “CompVis Group”. This is what I can find on them: https://ommer-lab.com/

    Two of their listed affiliates, ELLIS and Helmholtz Foundation appear to have interests related to medicine and health. I did not check every affiliate in detail, but it’s interesting.

    Prior to that, there was also DALL-E by "Open"AI (which started out as a non-profit and is now far from that). Apparently their original charter read:

    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

    https://web.archive.org/web/20230714043611/https://openai.com/charter

    So for OAI, even in its earliest development, there seems to be a clear intent to replace. Stable Diffusion’s origins are not as clear. The research it came from seems to have been innocent enough, but I’m not easily finding what Stable Diffusion’s stated goals were.

    This is aside from the fact that you could argue a model that “can reproduce finished works of “art”” has an implicit intent to replace. But in that regard, some of it may be the architecture itself. One would think that if that were always the express intent, they would have figured out how to do the same with text generation. But text generation models are completion models (their natural function is continuing text you give them, so they’re better as co-writers by default) and have to be specially tuned and set up with special interface code to act like a chatbot that can follow instructions. If I were talking personal preference, I’d prefer image generation models focused on being co-illustrators instead of going start to finish. But it may be that Pandora’s box is already too open on their current design.
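    A rough sketch of what that “special interface code” amounts to: the base model only continues text, so chatbot behavior is bolted on by wrapping the user’s message in a fixed template the model was fine-tuned on. The template tokens below are a made-up illustration, not any real model’s actual format.

```python
# A completion model just continues whatever text it is given. To make it act
# like a chatbot, the interface wraps the conversation in a fixed template so
# that "continuing the text" reads as answering the user. The <|...|> markers
# here are hypothetical, invented for illustration.

def build_prompt(system: str, user: str) -> str:
    return (
        f"<|system|>\n{system}\n"
        f"<|user|>\n{user}\n"
        f"<|assistant|>\n"  # the model completes from here, producing the "reply"
    )

prompt = build_prompt("You are a helpful co-writer.", "Suggest a title for my poem.")
print(prompt)
```

    The point being: the instruction-following behavior lives in the fine-tuning plus this wrapper, not in the base architecture, which on its own is closer to a co-writer than a replacement.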


  • I think in general, the focus of the discussion needs to move away from “democratization” as phrasing. It seems like it gets used as a contrast to hegemony, but I’m not sure it’s actually what we’re talking about. Maybe it’s a pedantic thing (but in my defense, much of this discussion is very pedantic). My understanding of democratic would be that there’s an organized process going on. What seems to be the subject of discussion is more like free-for-all vs. corporate capture. How can we even talk about democratic processes in the context of living within a capitalist society, if we don’t control the means of production and if we are talking about individuals making choices, without even any kind of party direction or discipline behind it?

    Perhaps this is where some of my confusion comes from as to the staunchness of “anti” that some people have about certain types of AI. Suppose the conclusion from all of this were to be that this technology in the generative form is currently inseparable from capitalism and its exploitation, and that nothing good can come from it while capitalism is the dominant economic mode. By what means are we supposed to oppose it and to what end? Boycotting works for some things, but it’s not even all that mainstream a tech to begin with and companies are pumping tons of money into it as aspirational tech for a market that hasn’t fully materialized yet. Creating an environment of shaming and fear around it, which seems to be the more common direction of those who fervently oppose it, only sends the message that if you do want to use it, then don’t tell anyone that you are doing so. It doesn’t send a message in clear and simple terms as to why it is worth boycotting and what outcome is going to be achieved in concrete terms and how this outcome can be achieved in the first place. Nor does it attempt to engage with, or investigate, the reasons why some are drawn to it in the first place. (This is not a criticism of you, OP, but a criticism of the general atmosphere of how anti-generative-AI often seems to look in practice.)

    Perhaps in part because of its sometimes-intersection with NFT and crypto grifters in marketing, it seems to have become something of an anti-capitalist’s punching bag. A kind of easy target for directing ire at big tech and the problems of capitalism. But many of the problems that come up in relation to it already existed to some degree prior to AI, such as the tendency for the arts to be more about mass-producing things criticized as low quality than about anything inherently artistic. Or how the western internet (I can’t speak for elsewhere) has ridiculous competition and content mill patterns of behavior. Generative AI accelerates and exacerbates these problems, but it is not the originator of them. So it can come across as a bit of a scapegoat. A lot of energy expended in saying “this is too far” as if the other stuff hasn’t already been happening and leading to it.

    I don’t want to trivialize the downsides, but at the same time, I’m concerned that it is much ado about nothing relative to the larger problems of capitalism that need addressing. That those who oppose generative AI are themselves getting swept up in some of the marketing and believing it a bit too much; treating its potential as faster-moving and larger than it is.






  • Is anyone here familiar with getting into the programming/coding field, especially without prior paid work experience in it? I’m US-based. I have some self-taught experience; I’d say I’m beyond novice, albeit rusty, but proving that on a resume is another thing, and I might be a better fit for entry level for that reason. Part of the problem is that a lot of what’s in my area, last I looked, tends to be military contractor stuff, and I do not want to go work on weapons tech or whatever. I guess I could look for remote stuff?

    Does anyone know of specific opportunities? (Not asking people to google search for me, just wondering on that level if anyone already knows of something.)


  • china has media control.

    This is just “communists are necromancers / mind controllers / Cthulhu delegates” monster-under-the-bed nonsense, coming from the Cold War propaganda era and continuing to this day. No state has 100% control over media, and the reality is that after the initial push of the narrative (which primarily came from a single “dude, trust me” individual), the credibility of the claim slowly unraveled. Contrast this with a history like, for example, the US in Korea, where the US started out portraying it as for the good, and information on the US occupation’s atrocities came out over time as people blew the whistle and investigated. Or how many people were duped into viewing the israeli colony as “it’s complicated” rather than colonialism doing what colonialism does (extermination of the natives), and only got through the bubble when raw first-person information on atrocities committed by israel started coming through. In israel’s case, it’s not just that they are so brazen about it at this stage; it’s also the painstaking documentation that Palestinians have done, some of them getting targeted and martyred for daring to report on the crimes of the settlers.

    There is no such equivalent evidence going on with China and claiming ultimate powers of censorship as the reason is, well… there’s a Parenti quote that comes to mind about an unfalsifiable orthodoxy. The absurdity of this kind of argument that happens in anti-communist propaganda goes something like:

    1. The communists are evil.
    2. If the evidence is clear and available, then you are evil if you don’t condemn them.
    3. If the evidence is not clear and available, then it must be because the communists are totalitarian masterminds hiding the evidence from us. And obviously anyone who would support totalitarian masterminds is evil themself.

    It’s one of those things like that saying, “You can’t reason someone out of a belief they haven’t reasoned themself into.” Much of anti-communist propaganda begins with an implied truism that communism must be evil and then reasons about how it is evil after the fact. By comparison, no such nonsense is necessary to criticize colonialism, for example; its crimes are well documented over hundreds of years and it depends on such narratives as saying that one people are civil and another savage, and so the savages “deserve what they get” (aka: dehumanization).

    You will not find such narratives in the practice of communism because it goes out of its way to humanize and work to eliminate class and caste barriers. So its opponents simply make up atrocities committed in its name and claim that it’s a faked humanitarian goal.