Fruitbat [she/her]


  • 1 Post
  • 23 Comments
Joined 2 years ago
Cake day: June 3rd, 2023



  • What do Dengists have to do with anything in this context?

    I don’t think it’s correct to say China has done nothing in regards to Palestine. For example, there was the 2024 Beijing Declaration last year. To quote from the linked article:

    Leaders representing 14 Palestinian factions and forces—including Hamas, Fatah, the Popular Front for the Liberation of Palestine (PFLP), the Democratic Front for the Liberation of Palestine (DFLP), the Palestinian People’s Party, and the Palestinian Islamic Jihad (PIJ), and others—have resolved to form a national unity government after three days of discussion in Beijing mediated by China’s foreign minister, Wang Yi.

    The agreement, called the “Beijing Declaration to End the Division and Strengthen Palestinian National Unity,” is being hailed as a significant step towards unity and overcoming the decades-long division among the main political groups in Palestine. United Nations Secretary-General Antonio Guterres welcomed the Beijing Declaration shortly after its signing and expressed his appreciation for the diplomatic efforts being made by China in facilitating the process.

    Outside of that, in one of the comments in those threads above, someone mentioned how Hamas used a Chinese-made weapon: https://en.wikipedia.org/wiki/HJ-8#Gaza_Strip

    Another thing: it goes beyond that too, like the Zijiang M99:

    In Sheikh Aajlin, the Brigades introduced a new sniper rifle for the first time, the M99 anti-material rifle. In a statement, the faction announced that it shot down an Israeli occupation soldier in Sheikh Aajlin with an M99 sniper rifle. It is still unclear whether the rifle is the Chinese-made Zijiang M99 or the Barrett-made M99, however, both rifles have anti-material and long-range capabilities.

    It’s really doubtful it’s the Barrett-made M99. To add, from looking around there are two tweets, here and here, showing the Zijiang M99, and then there’s a news source talking about its use here. That news source mentions this:

    The observation worth analyzing in this context is that, coinciding with the “Qassam Brigades” announcement of the use of Chinese weapons in its war against Israel,

    We can notice as well that during its war against Israel after the “Al-Aqsa Flood” operation on October 7, 2023, the “Al-Qassam Brigades” announced that its fighters were able to snipe an Israeli soldier with a heavy-caliber “M99” sniper weapon in the “Sheikh Ajlin area” in Gaza City. This is the first announcement from the Al-Qassam Brigades about the use of this deadly Chinese weapon in the Gaza Strip.

    The only issue is I’m having trouble finding where that announcement was made. Either way, it is clear that they are using weapons from China at the very least.

    Do note, I do think China should stop trading with “Israel”, but I want to address this. To go to trade: what “Israel” buys from China primarily consists of

    China primarily exported electric vehicles, mobile phones, computers and metals.

    Compared to the United States:

    The United States sold Israel explosive munitions, diamonds, electronics and chemical products. Israel receives billions in US military aid, much of which is spent on American-made weapons, effectively boosting US exports.

    https://www.aljazeera.com/news/2025/5/22/which-countries-trade-the-most-with-israel-and-what-do-they-buy-and-sell

    Either way, it’s the United States that trades the most with “Israel” and props it up. Again, this isn’t to say China should continue trading; it would be nice if they didn’t. But it would not change things too much even if China did stop trading, since again, “Israel” is propped up by imperial core countries.

    Either way, it isn’t correct to say China has done nothing. It would be nice if China did more; sure, I can agree with that. Ultimately, however, the responsibility lies with the people in the global north to act against their governments and stop contributing to and participating in genocide. To take the United States for example: how many genocides has the United States been involved in since its conception, from the various First Nations to around the world, like in Latin America? For the United States, genocide won’t end with Palestine; there will be more in the future if those in the United States don’t decolonize and overthrow the U.S.

    Also, it is rather chauvinistic and idealistic to paint China as the world’s savior, since China is not the world’s savior and never will be. That has always been idealistic nonsense. To add, it is also chauvinistic to hold China responsible for the U.S.-Israel genocide against Palestine. Why should China, and the world, mainly the global south at large, be held responsible for the horrible acts that global north countries like the United States commit, rather than the global north and the people within those countries?

    Again, that isn’t to say China couldn’t do more; they could. But the fault lies with the global north, and it falls to those within it to build and make revolution and stop things like this.






  • I’m going to feel sorry for them, considering that the people using it are generally those who don’t have anyone else to talk to, or feel like they can’t, and they shouldn’t be punished by cops coming in to kill them because some people reviewing a flagged conversation decided it was worth calling the pigs. Just because they used ChatGPT to talk about something doesn’t mean they should be punished for it.

    I think it is worth looking into why someone would turn to ChatGPT with their issues. If anything, this just shows a heightening contradiction in a lot of societies where mental health is not taken seriously, or is punished, plus a few other factors like alienation. In a way it reminds me of how people say to reach out if you’re struggling, but if you start talking about suicidal thoughts, the response tends to be “go away” or “go talk to a therapist” who, much like OpenAI, will call the police depending on how specific you are with said suicidal thoughts, or if it’s thoughts of harming others. Lots of people don’t know how to handle such things, or don’t take them seriously, or belittle someone going through a mental health crisis. That’s not to mention other things, like whether someone can even afford to see a therapist or mental health professional.


  • It’s not too surprising that ChatGPT will resort to flagging conversations for people to decide whether or not to call police, especially considering the push to hold places like OpenAI accountable for things like suicides; this seems to be somewhat a response to that and other mental health episodes. It is worth noting it’s less the LLM and more a human team doing that. It’s also worth pointing out that a lot of other online platforms are like this too: if I posted very specific information about harming myself or others somewhere else, it definitely would be reviewed by people and may or may not be reported to police.

    More reasons to use other things like DeepSeek or, even better, local LLMs for privacy. Especially since OpenAI can get fucked for lots of reasons.

    Last month, the company’s CEO Sam Altman admitted during an appearance on a podcast that using ChatGPT as a therapist or attorney doesn’t confer the same confidentiality that talking to a flesh-and-blood professional would — and that thanks to the NYT lawsuit, the company may be forced to turn those chats over to courts.

    Also, this is funny! Therapists will do the exact same thing if you come in with specific plans to hurt yourself or others: they will call the police. They have to. Speaking from personal experience of having a psychiatrist call the police on me. It would be nice if mental health systems could become detached from the carceral system. As it stands, being linked to the carceral system makes mental health care a form of social control and oppression that punishes someone for being in a mental health crisis.





  • Not to mention how the American Psychological Association, for example, contributed to the CIA torture program in general, and at Abu Ghraib. Then, to go to something else, there’s how psychiatry was used in colonization; to quote Fanon here:

    […]Since 1954 we have drawn the attention of French and international psychiatrists in scientific works to the difficulty of “curing” a colonized subject correctly, in other words making him thoroughly fit into a social environment of the colonial type.




  • China is literaly Israel`s biggest trading partner as of end 2024.

    Nonsense and misleading. China isn’t “Israel’s” largest trading partner. To go to your source, the United States is, with Ireland second. To quote: “The biggest importers of Israeli products were the United States with $17.3bn, Ireland with $3.2bn and China with $2.8bn.” Also important to note how the U.S. is there at $17.3bn compared to China’s $2.8bn.

    If you mean in terms of what “Israel” buys from other countries, that overlooks that lots of countries buy from China. But let’s go into specifics. What “Israel” buys from China: “China primarily exported electric vehicles, mobile phones, computers and metals.” Compared to the United States: “The United States sold Israel explosive munitions, diamonds, electronics and chemical products. Israel receives billions in US military aid, much of which is spent on American-made weapons, effectively boosting US exports.”

    Which is a world of difference. To add, it feels silly linking that one article from 2018 questioning whether China has abandoned Palestine, considering the Beijing Declaration from last year. Perhaps it would be better to focus on the United States, which uses “Israel” as a forward base? Even if China stopped trading, it’s not going to change much, given how much support “Israel” gets from the United States and other global north countries that continue to prop it up. The issue isn’t with China, but the United States.


  • Mostly, the pith of what I wrote, which has little to do with value judgement, quality of diagnosis, or even patient outcomes, and more to do with the similarity between the neurological effects on the practitioners associated with using descriminative models to do object detection or image segmentation in endoscopy and those of using generative models to accomplish other tasks

    You claimed they had nothing to do with each other. I disagree and stated one way in which they are similar: both involve the practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor having a machine learning model do it and maybe doing something else with their attention (maybe not, only the practitioner can know what else they might do).

    But we’re specifically talking about discriminative models here, not generative models. They aren’t using generative models to help with this. If you want to talk about generative models and the tasks that actually use them, go for it, but it has nothing to do with CADe, since CADe doesn’t use generative models. I do understand that you’re trying to connect the two with how you said they’re “forfeiting the deliberate guidance of their attention to solve a problem in favor having a machine learning model”.

    But this has less to do with AI and more to do with tools in general, does it not, since tools can also cause ‘neurological effects’? Lots of other things can fall into this, especially if we leave out the machine learning portion. For example, health professionals tend to use pulse oximeters to take heart rate as well as oxygen levels while they’re busy doing something else, like asking questions, before looking at the reading, determining whether it seems right or wrong, and noting it down. They too are using a tool that “forfeits” their attention to solve a problem: they could easily take a heart rate without one, but usually they don’t, unless something warrants further investigation. Obviously, pulse oximeters aren’t generative or discriminative AI; it’s just an example where attention is “forfeited”.

    Either way, it just seems disingenuous, since this focuses on a superficial similarity, the “practitioner forfeiting the deliberate guidance of their attention to solve a problem in favor having a machine learning model do it”, and equates the two as if they are the same. Each has a distinctive purpose. Sure, they’re the same if we’re focusing on the overall form or general appearance of these tools, since both result from machine learning. But in their content or essence, there is a clear difference in what they’re built to do. They function differently and do different things; otherwise there would be no distinction between discriminative and generative at all. That distinction only disappears if we go by a superficial similarity like taking attention away, which plenty of other tools share as well, not just discriminative or generative AI.
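
    To make the distinction concrete, here is a minimal sketch in Python. These are hypothetical names of my own, not any real vendor’s API: the point is that the two families can share the same training machinery while having opposite input/output contracts.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Box:
        """One highlighted region: position, size, and model confidence."""
        x: int
        y: int
        w: int
        h: int
        confidence: float

    Frame = list[list[int]]  # stand-in type for one video frame

    def discriminative_detector(frame: Frame) -> list[Box]:
        # Scores regions of an EXISTING image: frame in, boxes out.
        # This is the CADe shape; nothing new is synthesized.
        raise NotImplementedError  # trained weights would live here

    def generative_model(prompt: str) -> Frame:
        # Synthesizes NEW content: prompt in, image (or text) out.
        raise NotImplementedError
    ```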

    I didn’t mean that a human is necessarily no longer doing anything with their attention. Specifically, when a human uses a machine learning model to solve some problem (e.g., which region of an image to look at during a colonoscopy), this changes what happens in their mind. They may still do that function themselves, compare their own ideas of where to look in the image with the model’s output and evaluate both regions, or everywhere near both regions, or they might do absolutely nothing beyond looking solely in the region(s) output by their model. We don’t know and this is totally immaterial to my claim, which is that any outsourcing of the calculation of that function alters what happens in the mind of the practitioner. It’s probable that there are methodologies that generally enhance performance and protect practitioners from the so-called deskilling. However, merely changing the function performed by the model in question from generative to discriminative does not necessarily mean it will be used in way that avoids eroding the user’s competence.

    I have to ask, though: what “both regions”? This is all happening on a live camera feed. There is only one region: the camera feed, or rather the person’s insides being investigated. If CADe sees something it thinks is a polyp and highlights it on the camera feed for the health professional to look at further, there isn’t any other region. CADe isn’t producing a different model of someone’s insides and putting that on the camera feed. It is producing a rectangle to highlight something, but that still falls within the region of the camera feed of someone’s insides. Either way, a health professional still has to look further inside someone and investigate.

    I can see the argument of, like, someone performing a colonoscopy with CADe and mainly relying on the program to highlight things while just passively looking around, but that speaks more to the individual than the AI, and amounts to negligence. Another thing to note is that there are false positives as well, so even if someone is just relying on it to highlight something, they still have to investigate further instead of taking it at face value, which still requires competency, like determining whether a mass is a polyp or something normal. And nothing is stopping a health professional without CADe from being negligent either: missing more subtle polyps because they didn’t immediately recognize them and moved on, or seeing what they think at first glance is a polyp only for it to turn out not to be after further investigation, or, if they’re being negligent, calling something a polyp without further investigation despite it not being one.

    Either way, I don’t really see CADe changing much besides pushing health professionals to further investigate things. The only issue I can see with CADe is if they couldn’t access it due to technical issues or other reasons; then yes, deskilling would be a problem. On the other hand, they could reschedule if it’s a tech issue, or wait until it’s back up, though that still sucks for the patient considering the preparation such exams require. But that’s only an issue if over-reliance is an issue, which I can see being a problem to an extent. At the same time, CADe makes health professionals investigate further, since it doesn’t solve anything on its own, and just because they use it doesn’t mean their competency is all thrown out the window.

    Besides, those things can likely be addressed in ways that lower deskilling while still using CADe. CADe in general does seem helpful to an extent; it’s just another tool in the toolbox, and a lot of these criticisms, like attention or deskilling, fall outside of AI and into more general concerns that aren’t unique to AI but apply to tools and technology in general.


  • How so? How is it related to generative AI then? It’s not generating images or text. Sure, it’s trained like any other machine learning model, but from various videos on computer-aided polyp detection, it is just literally putting a green rectangular box over what it thinks is a polyp for health professionals to investigate or check out. Looking up other videos on computer-aided polyp detection shows the same thing. It is literal image recognition, and that’s it. So genuinely, how is that generative “AI”, or related to it? What images is it generating? Phones and cameras also have this function, using image recognition to detect faces and such. Is there something wrong with image recognition? I’m genuinely confused here.
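
    To make that concrete, here’s roughly what the whole visible behavior amounts to, as a Python sketch. This is my own illustration, not any real CADe product’s API; `detector` stands in for whatever trained model the vendor ships.

    ```python
    import cv2  # OpenCV, used here only for drawing and display

    def overlay_detections(frame, detector):
        # `detector`: frame in, list of (x, y, w, h) boxes out.
        # The feed itself is untouched apart from the highlights.
        for (x, y, w, h) in detector(frame):
            # Green rectangle telling the clinician "look closer here"
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        return frame

    def run(capture, detector):
        # Per-frame loop over the live feed: the model never replaces
        # the image, it only annotates it for a human to inspect.
        while capture.isOpened():
            ok, frame = capture.read()
            if not ok:
                break
            cv2.imshow("live feed", overlay_detections(frame, detector))
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    ```

    Nothing in that loop generates an image; it only draws on frames that already exist.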

    in both cases, a human is deliberately forfeiting their opportunity to exercise their attention to solve some problem in favor of pressing a “machine learning will do it” button

    That is such a bad framing of this, especially if you willingly overlook how even this news article and the study mention that CADe has helped detection numbers:

    AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

    and to go to that comment from Omer that the news article linked:

    A recently published meta-analysis of 44 RCTs suggested an absolute increase of 8% in the adenoma detection rate (ADR) with CADe-assisted colonoscopy

    To add, just watch the video linked, or find other videos of computer-aided polyp detection, and tell me how the machine is solving the problem. Or this video that covers it more in depth: https://www.youtube.com/watch?v=n9Gd8wK04k0, titled “Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He…”, from SAGES.

    But again, the health professional is looking at a monitor with a live camera feed, and all said technology is doing is highlighting things in a green rectangular box for health professionals to investigate further. How is the machine taking their attention away when it is rather telling health professionals “hey, you might want to take a closer look at this!” and bringing their attention to something to investigate further? Seriously, I don’t understand. How is that bad?

    Another thing: just because a health professional is putting their attention on something doesn’t mean they won’t miss something else. And again, I don’t see what’s wrong with CADe highlighting something said health professional might have missed, especially since all it is doing is highlighting things on a monitor with a camera feed for a health professional to check out further.

    What am I missing here?


  • I think it does, since the AI in question is simply image recognition software. I don’t exactly see how it affects cognitive abilities. I can see how it can affect skills, sure, and perhaps there could be an over-reliance on it! But for cognitive abilities themselves, I don’t see it. Something else: it’s important to note that the drop is relative to non-AI-assisted rates, and that doesn’t necessarily mean it’s bad. To go to the news article beneath all this:

    AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

    and to go back to that comment from Omer’s article:

    A recently published meta-analysis of 44 RCTs suggested an absolute increase of 8% in the adenoma detection rate (ADR) with CADe-assisted colonoscopy

    It could be argued that AI helped more. However, I think a few better questions are: if AI is helping health professionals detect more, what is the advantage of going back to non-AI-assisted? Why should non-AI-assisted be preferred if AI-assisted is helping more? Is this really a problem, and what could help if it is? I think it is clear that it does help to an extent, so just getting rid of it doesn’t seem like a solution. But I don’t know! I’m not a health professional who works with this stuff or is involved in this work.

    There is a video that covers more about CADe here: https://www.youtube.com/watch?v=n9Gd8wK04k0, titled “Artificial Intelligence for Polyp Detection During Colonoscopy: Where We Are and Where We Are He…”, from SAGES.

    I just genuinely don’t see what is wrong with CADe, especially if it helps health professionals catch things they might have missed to begin with. Again, CADe simply highlights things for health professionals to investigate further; what is wrong with that?

    To add, just because something is being mentally outsourced doesn’t necessarily mean that’s bad. I don’t think Google Maps killed people’s ability to navigate; it simply made it easier, no? Should we go back to compass and maps? Or even further, navigate by the stars and bust out our sextants? Besides, mental offloading can be good, freeing us up to do more; it just depends on what is offloaded and what the end goal is. I don’t necessarily see what is wrong with mentally offloading things.

    I also don’t understand your example about getting run over. I wouldn’t want to get hit by either vehicle, since both can kill or cause lifelong injuries.

    I’m not going to go into those other articles much, since they veer into another topic. I do understand LLMs have a tendency to make people over-reliant on them or take their output at face value, but I don’t agree with the notion that they’re making things worse or causing something as serious as cognitive impairment; that is a very big claim, and millions and millions of people are using this stuff. I do think, however, there should be more public education on these things, like using LLMs right and not taking everything they generate at face value. To add, with a lot of those studies, I would be interested in what studies are coming out of China too, since they also have this stuff.

    What? I was complaining to everyone around me that I felt like my brain had been deep fried after a bout of COVID. I legitimately don’t understand this perspective.

    Somehow I was supposed to get that from this?

    Chronic AI use has functionally the same effects as deep frying your brain

    It’s a bit unfair to assume that I’m somehow supposed to get that just based off that single sentence, because what you said here is a lot different from the other comment. With that added context, forgive me! There’s nothing wrong with what you said, with that added context.

    cw: ableism

    It’s just, I really don’t like it when criticism of AI turns into people saying others are getting “stupid”. Besides the ableism there, millions of people use this stuff, and it also reeks of, how to word this, “everyone around me is a sheep and I’m the only enlightened one” kind of stuff. People aren’t stupid, nor are the people who use any of this stuff, and I dislike this framing, especially when it’s framed as this stuff causing “brain damage”, when it’s not, and your comment without that added context felt like it was saying that.


  • Except it doesn’t, no more than using a computer “deep fries” someone’s brain, and framing this stuff in terms of “deep frying” just feels gross and dehumanizing. Besides that, the study mainly seems to be referencing something like image recognition, and the AI in question has little to do with generative AI or LLMs. The study doesn’t even mention LLMs.

    And to go back to that news article, they reference this comment by Omer Ahmad near the bottom:
    https://info.thelancet.com/hubfs/Press embargo/AIdeskillingCMT.pdf

    Computer-aided polyp detection (CADe) in colonoscopy represents one of the most extensively evaluated uses of AI in medicine, demonstrating clinical efficacy in multiple randomised controlled trials (RCTs).

    Using this different article: https://www.cancer.gov/news-events/cancer-currents-blog/2023/colonoscopy-cad-artificial-intelligence

    These systems are based on software that, as the colonoscope snakes through the colon, scans the tissue lining it. The CAD software is “trained” on millions of images from colonoscopies, allowing it to potentially recognize concerning changes that might be missed by the human eye. If its algorithm detects tissue, such as a polyp, that it deems suspicious, it lights the area up on a computer screen and makes a sound to alert the colonoscopy team.

    So how exactly is this causing “deep frying”?