DES MOINES, IOWA - MARCH 10: Iowa Governor Kim Reynolds and Florida Governor Ron DeSantis speak at a book tour on March 10, 2023 in Des Moines. (Photo by Rachel Mummey for The Washington Post via Getty Images)


It reads like a headline pulled from a dystopian near future: Artificial intelligence is being used to ban books by Toni Morrison, Alice Walker, and Maya Angelou in schools. To comply with recently enacted state legislation censoring school libraries, Iowa's Mason City Community School District used ChatGPT to scan a selection of books and flag them for "descriptions or visual depictions of sexual acts." Nineteen books – including Morrison's "Beloved," Margaret Atwood's "The Handmaid's Tale," and Khaled Hosseini's "The Kite Runner" – will be removed from school library collections before the start of the school year.

This interweaving of generative AI and Republican authoritarianism is no doubt troubling. But it is not a portent of a future ruled by censorious machines. These are banal, present-day acts of reactionary social control and bureaucratic appeasement. Supposedly infallible algorithmic systems have long been used to rubber-stamp the plans of the power structures that deploy them.

AI is not banning books; Republicans are. The law with which the school district is complying, signed by Iowa Gov. Kim Reynolds in May, is another piece of astroturfed right-wing legislation aimed at purging gender nonconformity, anti-racism, and basic sex education from schools, while reinforcing the power of the conservative family unit.

Bridgette Exman, assistant superintendent of curriculum and instruction at the Mason City Community School District, said in a statement that the AI will not replace the district's standard methods for restricting books. "We will continue to rely on our long-established process that allows parents to request that books be reconsidered," Exman said.

At most, ChatGPT’s application here is an example of an already common problem: the use of existing technologies to give the appearance of neutrality to political actions. It is well established that predictive policing algorithms tend to replicate the same racist patterns of criminalization as the data they are trained on – they are taught to treat as potential criminals those demographics that the police have already determined to be criminals.

In the case of Iowa's book ban, the algorithmic tool – a large language model, or LLM – simply followed a prompt; it was not consulted as a neutral arbiter. The political conjuncture, in which a school district looks to ban texts describing sexual acts, had already shaped the outcome.

As the Iowa newspaper The Gazette reported, the school district compiled a master list of commonly challenged books to feed to the AI program. These are the books that radical Republicans – having taken over school boards and major state legislatures – have already demanded be banned. It is no surprise, then, that the algorithm's selections included books dealing with white supremacy, slavery, sexual oppression, and sexual autonomy.

Exman's further comments reveal more about how authority operates here – and it has nothing to do with all-powerful AI. As she told Popular Science, "Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books. At the same time, we do have a legal and ethical obligation to comply with the law. Our goal here really is a defensible process."

Focusing on concerns about generative AI as a potentially all-powerful force ultimately serves the interests of Silicon Valley.

At once casually dismissive of the Republican legislation yet willing to scramble for technological shortcuts to ensure swift compliance, Exman's approach reflects both cowardice and collusion on the part of the school district. Surely, protecting students' access to a rich diversity of books, rather than protecting students from books, should be what school systems do with their time. But the myth of algorithmic neutrality makes the book selection, in Exman's words, "defensible" to both the right-wing boosters and the critics of this pathetic legislation.

The use of ChatGPT in this case may read as a techno-dystopian turn. Yet focusing concern on generative AI as a potentially all-powerful force ultimately serves the interests of Silicon Valley. Both worries about AI safety and dreams of AI supremacy fuel companies like OpenAI, the developer of ChatGPT; millions of dollars are reportedly being spent researching the existential risk AI supposedly poses to humanity. As critics such as Edward Ongweso Jr. have noted, such narratives – whether fearful or hopeful – look to a future of AI omnipotence, while inviting us to ignore how current AI tools, routinely shoddy and inaccurate though they are, already harm workers and aid oppressive state functions.

“From management devaluing labor to reactionaries censoring books, ‘AI’ doesn’t have to be intelligent, work, or even exist,” wrote Patrick Blanchfield of the Brooklyn Institute for Social Research on Twitter. “Its real function is simply to mystify/automate/justify what the powerful have always been doing and were always going to do.”

Underlining Blanchfield's point, the ChatGPT book-selection process proved unreliable and inconsistent when repeated by Popular Science. "Repeated inquiries regarding 'The Kite Runner,' for example, yield conflicting answers," the Popular Science journalists noted. In one response, ChatGPT deemed Khaled Hosseini's novel to contain "little to no explicit sexual content"; upon a separate follow-up, the LLM affirmed that the book "does contain a description of a sexual assault."
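The inconsistency Popular Science found is a predictable property of how these models generate text. A minimal Python sketch of the underlying mechanism – everything here (the prompt, the verdict strings, the probabilities) is invented for illustration and has nothing to do with the district's actual queries: an LLM samples each output from a probability distribution, so at nonzero temperature the same yes/no question can come back with contradictory answers.

```python
import random

def toy_llm_answer(prompt: str, temperature: float, rng: random.Random) -> str:
    """Toy stand-in for an LLM's answer to a content question (hypothetical)."""
    # Made-up probabilities standing in for the model's real distribution
    # over possible verdicts about a given book.
    verdicts = {
        "contains a depiction of a sexual act": 0.55,
        "little to no explicit sexual content": 0.45,
    }
    if temperature == 0:
        # Greedy decoding: always return the single most likely verdict.
        return max(verdicts, key=verdicts.get)
    # At higher temperature the distribution flattens, and sampling lets the
    # less likely verdict through some of the time.
    weights = [p ** (1.0 / temperature) for p in verdicts.values()]
    return rng.choices(list(verdicts), weights=weights, k=1)[0]

rng = random.Random(0)
prompt = "Does this novel contain a description of a sexual act?"
sampled = {toy_llm_answer(prompt, temperature=1.0, rng=rng) for _ in range(50)}
print(sorted(sampled))  # at temperature 1.0, repeated queries typically disagree
```

The point of the sketch is only that nondeterminism is built in: a banned-books pipeline that treats a sampled answer as a stable fact is asking the tool for something it does not provide.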

Yet accuracy and reliability were not the point here, any more than children's "safety" is the point of the Republican book bans. The myth of AI efficiency and neutrality, like the lie of protecting children, simply provides, as the assistant superintendent herself put it, a "defensible process" for fascistic censorship.



