• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • ClamDrinker@lemmy.world to Memes@sopuli.xyz: “who are you?”

    It also very much depends on your country, food authority, and retailer. Some food authorities have stricter categories for very perishable foods where, unless they have gone very bad, you can’t tell by looking that they’re no longer suitable for consumption, e.g. meat and vegetables. And while the producer has an incentive to encourage waste, the retailer has an incentive to reduce it, as you typically can’t sell items to consumers that are past their date (again, depending on your location). If a retailer has to throw out an item unreasonably often, that has consequences for the deals made between the retailer and the producer, which pushes the producer not to be too inaccurate either.


  • I do not share your experience that people who despise AI talk about it more, but if your community does, that’s great. I am kind of skeptical that’s really the case, though, because of some of your statements.

    Most communities I see like that are incredibly rude and dismissive of people who see the positive sides of the technology. Even objective statements about it are dismissed for not being negative enough (e.g. that AI is advancing medical research and healthcare, or being used to stop scammers), and people who bring them up are mocked or ostracized. It’s cult-like behavior, where only the group opinion is allowed. And if you even dare to like something that was made with AI, even with reasonable objections, and even as more and more media such as games use it, oh boy.

    I highly disagree with your statement that hate and anger spread an opinion far more easily, because it assumes people already agree with that opinion ahead of time. Take racism. I hope you’re a nice person, so a wildly racist post hating on X people showing up in your feed isn’t suddenly going to make you think “Huh, maybe they have a point, X people are to be hated.” It just makes you angry and resentful in return, with an opposing opinion, aka polarization. And that kills the conversation. For racism that’s kind of warranted, since a person with irrational hatred isn’t to be taken seriously. And regardless of whether a position is pro, neutral, or anti AI, if it is defended with irrationality, its holder is the one in this analogy. I equally denounce people who have no respect for artists and see AI as a way to kill the creative industry, and people who pretend nothing good can ever come from AI and that everyone who uses it has no conscience or feeling for creativity.

    As for your points about fighting it, I cannot find any part I agree with. Three or four years ago I would have entertained the notion that it might go away, but it has been showing up all over society. It’s an unattainable goal. Even if it somehow got banned in one country, that would not stop other countries, with different cultures and values, from using it, nor stop bad actors from using it as long as the output cannot be proven to be AI. It’s like thinking that because drugs are illegal, nobody does drugs. And to drive the point further: positive uses, like certain drugs turning out to be effective treatments for PTSD or chronic pain, go undiscovered. That’s the kind of world irrational reasoning builds.

    And an opinion that can only be satisfied by someone unequivocally agreeing with you, with no room for reasonable disagreement on aspects such as fair usage, makes the alliances that could actually secure majorities for rights and fair treatment impossible.

    > They do in the sense that all of them are driven by neophilia and big tent people horny for cash and power.

    See, this is the kind of statement I do denounce if you are saying it applies to AI as a whole, and why I don’t really believe you are in a community that reasonably discusses AI. It’s a close-minded statement that only applies to some of the big companies using AI. It doesn’t respect artists who use it and whose work has been systematically undervalued, nor researchers who use it for the common good, nor any other use with reasonable grounds not to be lumped in with those companies.


  • It can’t simultaneously be super easy and bad, yet also a massive propaganda tool. You can definitely dislike it for legitimate reasons, though. I’m not trying to anger you, but if you know about #1, you should also know why it’s a good tool for misinformation. Or you might, as I proposed, be part of the group that incorrectly assumes they already know all about it, and so be more likely to fall for AI propaganda in the future.

    E.g. Trump posting pictures of himself as the pope, of Gaza as a paradise, etc. These still have some AI tells, and Trump is a grifting moron with no morals or ethics, so even if it weren’t AI you would still be skeptical. But one of these days someone like him, whom you don’t know ahead of time, is going to make an image or a video that’s just plausible enough to spread virally. And it will be used to manufacture legitimacy for something horrible, as other propaganda has in the past.

    > but why do we want it? What does it do for us?

    You yourself might not want it, and that’s totally fine.

    It’s a very helpful tool for creatives such as VFX artists and game developers, who are masters of making things that aren’t real seem real. The difference is that they don’t want to lie about or obfuscate what tools they use, but #2 gives them a huge incentive to do just that: not because they don’t want to disclose it, but because chronically overworked and underpaid people don’t have time to deal with a hate mob on the side.

    And I don’t mean they use it as a replacement for their normal work, or to sit around and do nothing; they integrate it into their processes to enhance quality or to reduce time spent on tasks with little creative input.

    If you don’t believe me that that’s what they use it for, here’s a list of games on Steam with at least a 75% rating, 10,000 reviews, and an AI disclosure.

    And that’s a self-perpetuating cycle. People hide their AI usage to avoid hate -> fewer people become aware of the depths of what it can be used for, so they think AI slop and other obviously AI-generated material is all it can do -> which biases them against any kind of AI usage, because they think using it is easy or lazy -> so people give it hate -> which in turn makes people hide their AI usage more.

    By giving creatives the room to teach others what AI helped them do, whether they like it or not, such as through behind-the-scenes features, artbooks, guides, etc., we increase awareness in the general population of what it can actually do and that it is being used. Just imagine a world where you never knew about the existence of VFX, or thought it was only used for that one stock explosion and nothing else.

    PS. Bitcoin is still around and decently big. I’m not a fan of it myself, but that’s just objective reality. NFTs have always been mostly good for scams. But really, these technologies have little to no bearing on the debate around AI; history is littered with technologies that didn’t pan out, and it’s the ones that do that cause shifts. AI is such a technology in my eyes.


  • I didn’t say AI would solve that, but I’ll re-iterate the point I’m making differently:

    1. Spreading awareness of how AI operates (what it does and doesn’t do, what it’s good and bad at, how it’s changing, and that there are hundreds if not thousands of regularly used AI models out there: some owned by corporations, some open source, and some in between) reduces misconceptions and makes people more skeptical when they see material that might be AI generated or AI assisted being passed off as real. This is especially important to teach during transition periods like now, when AI material is still relatively easy to distinguish from real material.

    2. People creating a hostile environment where AI isn’t allowed to be discussed, analyzed, or used in ethical, good-faith ways make it more likely that some people who desperately need to be aware of #1 stay ignorant. They will just see AI as a boogeyman, failing to realize that, e.g., AI slop isn’t the only type of material AI can produce. This makes them more likely to see something made by AI and believe it, or misjudge the reality of the material.

    3. Corporations, and others without an incentive to use AI ethically, will not be bothered by #2, and will even rejoice that people aren’t spending time on #1. It makes it easier for them to claw AI technology for themselves through obscurity, legislation, and walled gardens, and the less knowledge there is in the general population, the more easily it can be used to influence people. Propaganda works, the propagandist is always looking for technology that reaches more people, and ill-informed people are easier to manipulate.

    4. And lastly, we must reward those who try to achieve #1 and avoid #2, while punishing those in #3. We must reward those who use the technology as ethically and responsibly as possible, as any prospect of completely ridding the world of AI is futile at this point, and a lot of care will be needed to avoid the pitfalls where #3 gains the upper hand.



  • This is the inevitable end game of some groups trying to make AI usage taboo through anger and intimidation, with no room for reasonable disagreement. The ones devoid of morals and ethics will use it to their heart’s content and would never engage with your objections anyway, and when the general public is ignorant of what it is and what it can really do, people get taken advantage of.

    Support open source and ethical usage of AI, where artists, creatives, and those with good intentions are not caught in your legitimate grievances with corporate greed, totalitarians, and the like. We can’t reasonably make it go away, but we can reduce harmful use of it.


  • While there are spaces that are luckily still looking at it neutrally and objectively, there are definitely leftist spaces where AI hatred has snuck in, even to a reality-denying degree where lies about what AI is or isn’t have taken hold, and where providing facts to refute them is rejected and met with hate and shunning purely because it goes against the norm.

    And I can’t help but agree that they are being played, so that the only AI technology that eventually remains feasible will not be open source, but in the control of the very companies left-leaning folks dislike or hate.








  • > Outside of the marketing labels of “artificial intelligence” and “machine learning”, it’s nothing like real intelligence or learning at all.

    Generative AI uses artificial neural networks, which are modeled on how we understand brains to connect information (biological neural networks). You’re right that they have no self-generated input like humans do, but the way they make connections between pieces of information is very similar to how humans do it. It doesn’t really matter that they don’t have their own experiences, because they are not trying to be humans; they are trying to be as flexible a ‘mind’ as possible.
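    To make the “connections” point concrete, here is a rough sketch of a single artificial neuron, the building block of those networks. All numbers are made up purely for illustration; real networks stack millions of these and learn the weights from data.

    ```python
    import math

    def sigmoid(x):
        # Squash the neuron's summed input into (0, 1), loosely analogous
        # to a biological neuron's firing rate.
        return 1.0 / (1.0 + math.exp(-x))

    def forward(inputs, weights, bias):
        # One artificial neuron: a weighted sum of its inputs plus a bias,
        # passed through a nonlinearity. "Learning" means adjusting the
        # weights so the output better matches the training data.
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return sigmoid(total)

    # Hypothetical inputs and weights, just to show the mechanism.
    print(forward([1.0, 0.0], [0.5, -0.3], 0.1))
    ```

    The “connections between information” the comment describes are exactly these weights: a stronger weight between two units means the network treats those pieces of information as more related.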

    > Are you an artist or a creative person?

    I see anti-AI people say this stuff all the time too, because it’s a convenient excuse to disregard an opposing opinion as ‘doesn’t know art’, failing to realize or respect that most people have some kind of creative spark and outlet. And I know it wasn’t aimed at me, but before you think I’m dodging the question: I’m a creative working professionally with artists and designers.

    Professional creative people and artists use AI too. A lot. Probably more than laypeople, because using it well and combining it with other interesting ideas requires a creative and inventive mind. There’s a reason AI is making its way all over media: into movies, into games, into books. And I don’t mean as AI slop, but as well-implemented, guided AI usage.

    I could ask you in turn whether you’ve ever studied programming or psychology, as those would make you better able to understand the similarities between artificial and biological neural networks. But I don’t need a box to disregard you; the substance of your argument simply fails to convince me.

    At the end of the day, it does matter that humans have their own experiences to mix in. But AI can also store far more influences than a human brain can, which effectively means that for everything it makes, there is less of any specific source from any specific artist in it.

    > For example, the potential market effects of generating an automated system which uses people’s artwork to directly compete against them.

    Fair use considerations do not apply to works that are substantially different from all of their influences; they only come into play when copyrighted material is directly re-used. If you read Harry Potter and write your own novel about wizards, you do not have to credit or pay royalties to JK Rowling, so long as your novel isn’t substantially similar. Without additional laws prohibiting it, AI is no different. To sue someone over fair use, you typically have to prove that the work infringes on yours, and so far there have not been any successful cases with that argument.

    Most negative externalities from AI come from capitalism: greedy bosses thinking they can replace true human talent with a machine, plagiarists using it as a convenient tool to harass specific artists, scammers using it to scam people. But around that exists an entire ecosystem of people just using it for what it should be used for: more and more creativity.


  • You picked the wrong thread for a nuanced question on a controversial topic.

    But it seems the UK indeed already has laws for this, if the article is to be believed, as it doesn’t currently allow AI companies to train on copyrighted material. As far as I know, in some other jurisdictions a normal person would absolutely be allowed to pull a bunch of publicly available information, learn from it, and make something new based on the objective information found within. And generally, that’s the rationale AI companies used as well, seeing as there have been widely accepted landmark rulings in the past that computers analyzing copyrighted information is not copyright infringement, such as the cases against Google for indexing copyrighted material in its search results. But perhaps an adjacent ruling was never accepted in the UK (which does seem strange, as Google operates there). Laws are messy, perhaps there is an exception somewhere, and I’m certainly not an expert on UK law.

    But people sadly don’t really come into this thread to discuss the actual details; they just see a headline that invokes a feeling of “AI bad”, and so you coming in here with a reasonable question makes you a target. I wholly expect to be downvoted as well.



  • Never assumed you did :), but yes, making as few assumptions as possible is best. As you can already tell, though, it’s hard to communicate while assuming nothing when people make explicit statements crafted to dispel assumptions, statements that are entirely plausible for a hypothetical real person to make.

    In fact, your original statement, “They have no doubts. Never occurred to them it might be a joke…”, is itself a pretty big assumption. Unless, of course, I assume that statement to be hyperbole, or even satire. But if we want to have fun talking about a shitpost, we do kind of have to settle on an assumed reading of a meme that can’t talk back.


  • People making assumptions is the issue.

    There are assumptions involved in detecting satire from bare text as well. You would just get a reverse Poe’s law: any extreme view can be mistaken by some readers for satire of that view without a clear indicator of the author’s intent.

    Normally when people say or type things, we (justifiably) assume that to be what they mean, which is why satire works much better when spoken: intonation can make the satire explicit without changing the words or spelling it out.


  • As with most things, if you are competent, a degree doesn’t really matter. The degree is just a shortcut, and even when it’s checked, it’s no guarantee of competence; you’re expected to have picked up the competency during the time you earned the degree.

    So this probably works if you are otherwise competent, but if you’re not, it’s just going to lead to increased scrutiny (because hey, you should know these things), and if someone does end up checking up on you, it’s a great way to get fired with cause. Depending on how tight-knit your industry is, that can still make things very hard for you.

    And of course, once this becomes frequent enough, you’d be surprised how quickly checking will become the norm again.