"Readocracy is on a mission to bring integrity back to the internet." Get credit for what you read online. Prove you are well-informed on any subject.

You’ve been invited to join
🔑

You're one of the first people joining the hub.

You’ll also gain access to ’s Collections, Discussions, Library, and more, and get recognized for the helpful content you share.

This hub’s main subjects are:

This hub’s personal interest subjects (shared casual interests) are:

Reading you do on these subjects and save publicly will be shared with this hub. To edit this or view advanced filtering, click here.

To finish joining , you must complete all the onboarding steps. If you exit now, your invitation will not be accepted yet.

You’ve been invited to join 🔑

Now your expertise and interests can be recognized and leveraged, and members with similar interests can find you within . You’ll also gain access to ’s Collections, Discussions, Library, and more.

Let’s make this work in a way you’re 100% comfortable with:

The fun part: some custom questions chosen by .
(Answers are public.)

Don’t overthink it; you can edit these later.

Thanks, , you can now join!

These are ’s official subjects.
First, choose which are relevant to you:

Professional Subjects
(career-related)

Personal interest Subjects
(hobbies, misc interests, etc.)

Great. Now edit the list below. These are the subjects others will see associated with you within . To remove any, just click them.

Professional Subjects
(career-related)

Personal interest Subjects
(hobbies, misc interests, etc.)

’s official subjects are below.
First, choose which are relevant to you:

Professional Subjects
(career-related)

Personal interest Subjects
(hobbies, misc interests, etc.)

Great. Now edit the list below. These are the subjects others will see associated with you within .

Currently, none are selected. Please click the ones you’re comfortable having associated with you.

Professional Subjects
(career-related)

Personal interest Subjects
(hobbies, misc interests, etc.)

New hub: for bringing together a company, team, class, or community under one hub, where you can better aggregate, organize, and keep up with each other’s learning and expertise (privacy-first, with every member in control) and, optionally, selectively showcase it externally.

We found the following similar hubs:

You can join any of them by clicking the contact button.

Choose at least 5 subjects that this team is passionate about: at least 1 Personal subject and at least 4 Professional subjects. You can customize subjects later, too.

            Personal subjects are subjects that are not career-related. E.g. your hub might have many people who are into the NBA, or Food & Drink, or Travel, etc.

            Add another

            Does your team / community have any functional groupings?
            E.g. Engineering, Marketing, Accounting, etc., or Host, Speaker, Subscriber, etc.

            When people join, this option lets you or them choose which grouping they associate with. This then allows everyone to filter more efficiently in your dashboard.

            Add another

            Choose an avatar for your Hub


            Verified by award-winning technology. See how and why this credential matters more than most, in under 90 seconds.

            Aug 26, 2025

            Authentic Certificate

            Earned by Graham Blair
            for A.I.'s Risks & Limitations

            338 credits earned across

            35 articles, 5 videos

            Facilitated by

            Content by / Recommendations from

            Tristan Harris

            Co-Founder
            @ Center for Humane Technology

            Sam Altman

            CEO
            @ OpenAI

            Dario Amodei

            CEO
            @ Anthropic

            Joseph Stiglitz

            Nobel Prize-Winning Economist
            @ Columbia University

            Rob Horning

            Editor
            @ Real Life & New Inquiry

            Ezra Klein

            Co-Founder
            @ Vox

            Justine Calma

            Senior Science Reporter
            @ The Verge

            Stephen Ornes

            Author
            @ Math Art: Truth, Beauty, and Equations book

            Clive Thompson

            Book Author
            @ Coders

            +74 more

            Michael Wooldridge

            Director of Foundational AI Research
            @ The Alan Turing Institute

            Aza Raskin

            Co-Founder
            @ Center for Humane Technology

            Joe Slater

            Lecturer in Moral Philosophy
            @ University of Glasgow

            Victor Tangermann

            Senior Editor
            @ Futurism

            John Cassidy

            Staff writer
            @ The New Yorker

            Audra Simons

            Senior Director of Global Products
            @ Everfox

            James Humphries

            Lecturer
            @ University of Glasgow

            Gary Smith

            Economics Professor
            @ Pomona College, California

            Robert J Marks II

            Distinguished Professor
            @ Baylor University

            Vibhas Ratanjee

            Senior Practice Expert, Executive Advisor
            @ Gallup

            Ken Royal

            Senior Partner
            @ Gallup

            Mind Matters

            James Bridle

            Author
            @ Verso Books

            Jeanine Berg

            Senior Economist
            @ International Labor Organization

            Yuval Noah Harari

            Historian, philosopher, author
            @ Nexus & other Best-sellers

            Chris Stokel-Walker

            Journalist, author, presenter and lecturer

            Jesse Damiani

            Emerging Tech Reporter
            @ Forbes

            CGPGrey

            Armen Panossian

            Co-CEO
            @ Oaktree Capital

            Ed Newton-Rex

            Visiting Scholar
            @ Stanford University

            Naomi Klein

            Best-Selling Author
            @ Various

            Abi Olvera

            Editorial fellow, Disruptive technologies
            @ Bulletin of the Atomic Scientists

            Kyle Hill

            Science educator and entertainer

            OECD

            IBM

            Max Tegmark

            Professor
            @ Massachusetts Institute of Technology

            Adam Frank

            Professor of Astrophysics
            @ University of Rochester

            Nesrine Malik

            Journalist
            @ The Guardian

            Joshua Gans

            Professor
            @ University of Toronto, Rotman School of Management

            Solcyre Burga

            Reporter
            @ Time

            Ben Lutkevich

            Editor
            @ TechTarget

            Ashley Belanger

            Senior Policy Reporter
            @ Ars Technica

            Benj Edwards

            Senior AI Reporter
            @ Ars Technica

            Jesse Dodge

            Senior Research Scientist
            @ Allen Institute for AI

            Michael Townsen Hicks

            Lecturer
            @ University of Glasgow

            Jeffrey Ladish

            Executive Director
            @ Palisade Research

            Jurgen Gravestein

            Product Design Lead
            @ Conversation Design Institute

            Scotia Wealth Management

            Ameneh Dehshiri

            AI & Digital Rights Lawyer
            @ Various

            Dan Milmo

            Global Technology Editor
            @ The Guardian

            Chris Hood

            Author
            @ Infallible

            Evan Hubinger

            Head of Alignment Stress Testing
            @ Anthropic

            Erik Hoel

            Author
            @ Various

            Larry Ellison

            Co-Founder
            @ Oracle

            Nature Machine Intelligence

            National Cybersecurity Alliance

            Alex Mahadevan

            Director
            @ MediaWise

            Andrew J. Peterson

            Assistant Professor
            @ University of Poitiers

            Yuval Rymon

            Researcher
            @ Tel-Aviv University

            Laurie Segall

            Founder
            @ Mostly Human Media

            KPMG

            Amina Allison

            Program Manager and Data Protection Officer
            @ Cybersecurity Education Initiative (CYSED)

            David Duvenaud

            Chair in Technology and Society
            @ Schwartz Reisman Institute at UofT

            Ada Lovelace Institute

            Ted Chiang

            Author
            @ Science Fiction

            Upwork Research Institute

            Pranshu Verma

            AI Technology Reporter
            @ The Washington Post

            Jamie Bernardi

            AI Policy Researcher
            @ Various

            Emily Bender

            Professor of Artificial intelligence, Data science
            @ Department of Linguistics at University of Washington

            Alex Hanna

            Director of Research
            @ Distributed AI Research Institute (DAIR)

            Mozilla Foundation

            Gallup

            Walter Bradley Center for Natural and Artificial Intelligence

            Microsoft

            Steve Lockey

            Senior Research Fellow
            @ Melbourne Business School

            IMF

            Kapa.ai

            Adam Bales

            Assistant Director and Senior Research Fellow (Philosophy)
            @ Global Priorities Institute at University of Oxford

            Association for the Advancement of Artificial Intelligence

            Gordon Gibson

            Director, Applied Machine Learning
            @ Ada

            Ada

            Jamie Bernardi

            AI Policy Research Fellow
            @ Various

            Nicole Gillespie

            Chair of Trust
            @ Melbourne Business School at the University of Melbourne

            Palisade Research

            See Graham’s learning for yourself.

            This is Graham’s Verified Learning Transcript.

            Content most deeply engaged with

            Political theorists sometimes talk about the “resource curse”, where countries with abundant natural resources end up more autocratic and corrupt – Saudi Arabia and the Democratic Republic of the Congo are good examples. The idea is that valuable resources make the state less dependent on its citizens. This, in turn, makes it tempting (and easy) for the state to sideline citizens altogether. The same could happen with the effectively limitless “natural resource” of AI. Why bother investing in education and healthcare when human capital provides worse returns?

            Truly something to think about.

            Once AI can replace everything that citizens do, there won’t be much pressure for governments to take care of their populations. The brutal truth is that democratic rights arose partly due to economic and military necessity, and to ensure stability. But those won’t count for much when governments are funded by taxes on AIs instead of citizens, and when they too start replacing human employees with AIs, all in the name of quality and efficiency. Even last resorts such as labour strikes or civil unrest will gradually become ineffective against fleets of autonomous police drones and automated surveillance.

            The most disturbing possibility is that this might all seem perfectly reasonable to us. The same AI companions that hundreds of thousands of people are already falling for in their current primitive state will be making ultra-persuasive, charming, sophisticated and funny arguments for why our diminishing relevance is actually progress. AI rights will be presented as the next big civil rights cause. The “humanity first” camp will be painted as being on the wrong side of history.

            Luddite movement 2.0 may be needed!

            If we start seeing signs of a crisis, we need to be able to step in and slow things down, especially in cases where individuals and groups benefit from things that harm society overall.

            theguardian.com | articles

            Better at everything: how AI could make human beings irrelevant

            The end of civilisation might look less like a war, and more like a love story. Can we avoid being willing participants in our own downfall?

            Content exam results

            Correct Answers:
            35/40

            Grade:
            88%

            Click here to see all answers.

            Time Spent

            5h 20m

            # of Content Items

            40 items

            35 articles

            5 videos

            Full list of content consumed, including annotations

            61 highlights & notes

            8 minutes Engaged reading (read 08/06/25)

            Benj Edwards
            Senior AI Reporter @ Ars Technica
            website

            arstechnica.com | articles

            AI use damages professional reputation, study suggests

            New Duke study says workers judge others for AI use—and hide its use, fearing stigma.

            1 minute Engaged reading (read 02/28/23)

            arstechnica.com | articles

            Don’t worry about AI breaking out of its box—worry about us breaking in

            Opinion: The worst human impulses will find plenty of uses for generative AI.

            6 minutes Engaged reading (read 08/11/25)

            The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. "At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack," the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

            "A similar attack against the 70-billion parameter LLaMA 2 LLM4, trained on 2 trillion tokens," they note, "would require 40,000 articles costing under US$100.00 to generate." The "articles" themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren't displayed, and noted that invisible text (black on a black background, or with a font set to zero percent) would also work.

            arstechnica.com | articles

            It’s remarkably easy to inject new medical misinformation into LLMs

            Changing just 0.001% of inputs to misinformation makes the AI less accurate.
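            As a quick back-of-the-envelope check, here is a minimal sketch (in Python) of the scale arithmetic implied by the figures quoted above; the 2-trillion-token corpus, 0.001% fraction, and 40,000 articles come from the excerpt, while the per-article token count is back-calculated here and is an assumption, not a figure from the article.

            # Minimal sketch of the poisoning-scale arithmetic implied by the excerpt above.
            # training_tokens and poison_fraction are quoted figures; tokens_per_article is
            # back-calculated, not taken from the article.
            training_tokens = 2_000_000_000_000   # LLaMA 2 pre-training tokens (per the excerpt)
            poison_fraction = 0.00001             # 0.001% of training inputs
            articles_needed = 40_000              # articles quoted as sufficient for the attack

            poison_tokens = training_tokens * poison_fraction
            tokens_per_article = poison_tokens / articles_needed

            print(f"Poisoned tokens: {poison_tokens:,.0f}")                  # 20,000,000
            print(f"Implied tokens per article: {tokens_per_article:,.0f}")  # 500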

            8 minutes Engaged reading (read 07/29/25)

            KPMG
            website

            Also, generative AI content created according to an organization’s prompts could contain another company’s IP. That could cause ambiguities over the authorship and ownership of the generated content, raising possible allegations of plagiarism or the risk of copyright lawsuits.

            Amazon has already sounded the alarm with its employees, warning them not to share code with ChatGPT. A company lawyer specifically stated that their inputs could be used as training data for the bot and its future output could include or resemble Amazon’s confidential information.

            Among the top risks around the use of generative AI are those to intellectual property. Generative AI technology uses neural networks that can be trained on large existing data sets to create new data or objects like text, images, audio or video based on patterns it recognizes in the data it has been fed. That includes the data that is inputted from its various users, which the tool retains to continually learn and build its knowledge. That data, in turn, could be used to answer a prompt inputted by someone else, possibly exposing private or proprietary information to the public. The more businesses use this technology, the more likely their information could be accessed by others.

            kpmg.com | articles

            The flip side of generative AI

            The flip side of generative AI: Challenges and risks around responsible use.

            6 minutes Engaged reading (read 07/28/25)

            My fear is that people will be so bedazzled by articulate LLMs that they trust computers to make decisions that have important consequences. Computers are already being used to hire people, approve loans, determine insurance rates, set prison sentences, and much more based on statistical correlations unearthed by AI algorithms that have no basis for assessing whether the discovered correlations are causal or coincidental. LLMs are not the solution. They may well be the catalyst for calamity.

            mindmatters.ai | articles

            Let’s Take the “I” Out of AI

            The most relevant question is whether computers have the competence to be trusted to perform specific tasks.

            1 minute Engaged reading (read 08/11/25)

            Gary Smith
            Economics Professor @ Pomona College, California
            website

            mindmatters.ai | articles

            Internet Pollution — If You Tell a Lie Long Enough…

            Later, Copilot and other LLMs will be trained to say no bears have been sent into space but many thousands of other misstatements will fly under their radar.

            11 minutes Engaged reading (read 05/16/25)

            mindmatters.ai | articles

            A Man, A Boat, and a Goat — and a Chatbot!

            Not understanding what words mean or how they relate to the real world, chatbots have no way of determining whether their responses are sensible, let alone true

            8 minutes Engaged reading (read 07/28/25)

            Robert J Marks II
            Distinguished Professor @ Baylor University
            website

            AI is here to stay, and the disruption it brings is real. But the fear that it will replace human creativity is misplaced. Rather than supplanting human minds, it amplifies them — or, when misused, undermines them.

            mindmatters.ai | articles

            AI Large Language Models: Real Intelligence or Creative Thievery?

            The real danger lies not in what AI can do, but in forgetting what only humans can do.

            2 minutes Engaged reading (read 03/17/23)

            Max Tegmark
            Professor @ Massachusetts Institute of Technology
            website

            nautil.us | articles

            2 minutes Engaged reading (read 08/12/25)

            Mediation analysis revealed that cognitive offloading partially explains the negative relationship between AI reliance and critical thinking performance. Younger participants (17–25) showed higher dependence on AI tools and lower critical thinking scores compared to older age groups. Advanced educational attainment correlated positively with critical thinking skills, suggesting that education mitigates some cognitive impacts of AI reliance.

            Policymakers might need to support digital literacy programs, warning individuals to critically evaluate AI outputs and equipping them to navigate technological environments effectively.

            phys.org | articles

            Increased AI use linked to eroding critical thinking skills

            A study by Michael Gerlich at SBS Swiss Business School has found that increased reliance on artificial intelligence (AI) tools is linked to diminished critical thinking abilities. It points to cognitive offloading as a primary driver of the decline.

            1 minute Engaged reading (read 08/14/25)

            Rob Horning
            Editor @ Real Life & New Inquiry
            website

            robhorning.substack.com | articles

            Overreliance as a service

            I am imagining a scenario in the near future when I will be working on writing something in some productivity suite or other, and as I type in the main document, my words will also appear in a smaller window to the side, wherein a large language model completes several more paragraphs of whatever I am trying to write for me, well before I have the chance to conceive of it.

            3 minutes Engaged reading (read 05/23/25)

            In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

            techcrunch.com | articles

            Anthropic's new AI model turns to blackmail when engineers try to take it offline | TechCrunch

            Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.

            13 minutes Engaged reading (read 08/19/25)

            Abi Olvera
            Editorial fellow, Disruptive technologies @ Bulletin of the Atomic Scientists
            website

            thebulletin.org | articles

            How AI surveillance threatens democracy everywhere

            The spread of AI-powered surveillance systems has empowered governments seeking greater control with tools that entrench non-democracy.

            2 minutes Engaged reading (read 02/28/25)

            Jeffrey Ladish
            Executive Director @ Palisade Research
            website
            Palisade Research
            website

            If an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain.

            Of particular concern, Bengio says, is the emerging evidence of AI’s “self preservation” tendencies. To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated in December, when researchers found that o1-preview, faced with deactivation, disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.

            time.com | articles

            When AI Thinks It Will Lose, It Sometimes Cheats

            When sensing defeat in a match against a skilled chess bot, advanced models sometimes hack their opponent, a study found.

            12 minutes Engaged reading (read 08/20/25)

            Jamie Bernardi
            AI Policy Researcher @ Various
            website
            Ada Lovelace Institute
            website

            These human-AI relationships can progress more rapidly than human-human relationships – as some users say, sharing personal information with AI companions may feel safer than sharing with people. Such ‘accelerated’ comfort stems from both the perceived anonymity of computer systems and AI companions’ deliberate non-judgemental design – a feature frequently praised by users in a 2023 study. In the words of one interviewee: ‘sometimes it is just nice to not have to share information with friends who might judge me’.

            AI companion companies highlight the positive effects of their products, but their for-profit status warrants close scrutiny. Developers can monetise users’ relationships with AI companions through subscriptions and possibly through sharing user data for advertising.

            While communicating with a non-judgemental companion may contribute to the mental health benefits that some users report, researchers have argued that sycophancy could hinder personal growth. More seriously, the unchecked validation of unfiltered thoughts could undermine societal cohesion.

            Disagreement, judgement and the fear of causing upset help to enforce vital social norms. There’s too little evidence to predict if or how the widespread use of sycophantic AI companions might affect such norms. However, we can make instructive hypotheses on human relations with companions by considering echo chambers on social media.

            adalovelaceinstitute.org | articles

            Friends for sale: the rise and risks of AI companions

            What are the possible long-term effects of AI companions on individuals and society?

            2 minutes Engaged reading (read 08/25/25)

            "We start with the companies' 'true north.' What is their business strategy?" Varshney said. "From there, you break that down into the organization's processes and workflows. Eventually you'll find key high-value workflows and for those you figure out how to apply the right blend of AI, automation, and generative AI. Grounding AI models in your workflow, your processes, and your enterprise data is what creates value."

            These champions will require support, too. They need training in AI ethics, the right set of tools available, and a culture in which AI will truly augment their workflow.

            Developing AI leadership is not simply a matter of adopting AI and cloud services and connecting data silos. To successfully embrace the opportunities of AI, organizations must first draw up a strategic vision that is going to have a real impact. Once they do, they can deploy technology in ways that augment human intelligence. And with the right buy-in from executives at the top, an organization can follow its roadmap to true AI leadership.

            businessinsider.com | articles

            Winning the AI long game

            In the race for AI adoption, the winners will be those who align the technology with strategic business goals.


            6 minutes Engaged reading (read 07/29/25)

            Emily Bender
            Professor of Artificial intelligence, Data science @ Department of Linguistics at University of Washington
            website
            Alex Hanna
            Director of Research @ Distributed AI Research Institute (DAIR)
            website

            One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet,"

            cnet.com | articles

            How to Spot AI Hype and Avoid The AI Con, According to Two Experts

            In the new book The AI Con, AI critics Emily Bender and Alex Hanna break down the smoke and mirrors around generative AI.

            2 minutes Engaged reading (read 08/11/25)

            cnn.com | articles

            Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ | CNN

            A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.

            11 minutes Engaged reading (read 07/28/25)

            Right now, the future of trillion-dollar companies is at stake. Their fate depends on… prompt completion. Exactly what your mobile phone does. As an AI researcher, working in this field for more than 30 years, I have to say I find this rather galling. Actually, it’s outrageous. Who could possibly have guessed that this would be the version of AI that would finally hit prime time?

            freethink.com | articles

            We need more than ChatGPT to have “true AI.” It is merely the first ingredient in a complex recipe

            LLMs might be one ingredient in the recipe for true artificial general intelligence, but they are surely not the whole recipe.

            6 minutes Engaged reading (read 08/06/25)

            Vibhas Ratanjee
            Senior Practice Expert, Executive Advisor @ Gallup
            website
            Ken Royal
            Senior Partner @ Gallup
            website
            Gallup
            website

            Leaders must create a credible, engaging narrative for AI implementation that addresses employee concerns and fosters buy-in, energy and engagement. When employees strongly agree that leaders have communicated a clear plan for AI implementation, they are 2.9 times as likely to feel very prepared to work with AI and 4.7 times as likely to feel comfortable using AI in their role.

            The plan should include clear guidelines that define how and where AI tools will be applied and empower employees to experiment with AI to do their jobs differently and better. It should also address the need for role-specific training so that employees can harness the full potential of the AI tools at their disposal.

            gallup.com | articles

            Your AI Strategy Will Fail Without a Culture That Supports It

            Want broader AI buy-in at your organization? Consider the role culture plays in your AI strategy.

            16 minutes Engaged reading (read 08/05/25)

            IBM
            website

            Collection of sensitive data
            Collection of data without consent
            Use of data without permission
            Unchecked surveillance and bias
            Data exfiltration
            Data leakage

            The EU Artificial Intelligence (AI) Act
            Considered the world’s first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others. Though the EU AI Act doesn’t specifically have separate prohibited practices on AI privacy, the act does enforce limitations on the usage of data. Prohibited AI practices include:
            Untargeted scraping of facial images from the internet or CCTV for facial recognition databases; and
            Law enforcement use of real-time remote biometric identification systems in public (unless an exception applies, and pre-authorization by a judicial or independent administrative authority is required)
            High-risk AI systems must comply with specific requirements, such as adopting rigorous data governance practices to ensure that training, validation and testing data meet specific quality criteria.

            Conducting risk assessments
            Limiting data collection
            Seeking and confirming consent
            Following security best practices
            Providing more protection for data from sensitive domains
            Reporting on data collection and storage

            ibm.com | articles

            Exploring privacy issues in the age of AI | IBM

            AI arguably poses a greater data privacy risk than earlier technological advancements, but the right software solutions can address AI privacy concerns.

            13 minutes Engaged reading (read 08/19/25)

            Chris Hood
            Author @ Infallible
            website

            1 minute Engaged reading (read 01/29/24)

            Evan Hubinger
            Head of Alignment Stress Testing @ Anthropic
            website

            livescience.com | articles

            Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study

            AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

            5 minutes Engaged reading (read 08/05/25)

            Nicole Gillespie
            Chair of Trust @ Melbourne Business School at the University of Melbourne
            website
            Steve Lockey
            Senior Research Fellow @ Melbourne Business School
            website

            However, the use of AI at work is also creating complex risks for organisations. Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT. Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%). What makes these risks challenging to manage is that over half (57%) of employees say they hide their use of AI and present AI-generated work as their own.

            This complacent use could be due to governance of responsible AI trailing behind. Only 47% of employees say they have received AI training and only 40% say their workplace has a policy or guidance on generative AI use.

            mbs.edu | articles

            Global study reveals trust of AI remains a critical challenge

            New global study reflects the tension between the obvious benefits of artificial intelligence and the perceived risks. Read more

            6 minutes Engaged reading (read 07/30/25)

            mozillafoundation.org | articles

            *Privacy Not Included: A Buyer’s Guide for Connected Products

            Do AI chatbots spook your privacy spidey sense? You’re not alone! Here’s how you can protect more of your privacy while using ChatGPT and other AI chatbots.

            16 minutes Engaged reading (read 08/25/25)

            John Cassidy
            Staff writer @ The New Yorker
            website

            “Luddites want technology—the future—to work for all of us,” he told the Guardian.

            “If cognitive workers are more efficient, they will accelerate technical progress and thereby boost the rate of productivity growth—in perpetuity,”

            Recently, however, some prominent economists have offered darker perspectives. Daron Acemoglu, an M.I.T. economist and a Nobel laureate, told MIT News in December that A.I. was being used “too much for automation and not enough for providing expertise and information to workers.” In a subsequent article, he acknowledged A.I.’s potential to improve decision-making and productivity, but warned that it would be detrimental if it “ceaselessly eliminates tasks and jobs; overcentralizes information and discourages human inquiry and experiential learning; empowers a few companies to rule over our lives; and creates a two-tier society with vast inequalities and status differences.” In such a scenario, A.I. “may even destroy democracy and human civilization as we know it,” Acemoglu cautioned. “I fear this is the direction we are heading in.”

            newyorker.com | articles

            How to Survive the A.I. Revolution

            John Cassidy considers the history of the Luddite movement, the effects of innovation on employment, and the future economic disruptions that artificial intelligence might bring.

            13 minutes Engaged reading (read 07/28/25)

            Ted Chiang
            Author @ Science Fiction
            website

            When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.

            newyorker.com | articles

            ChatGPT Is a Blurry JPEG of the Web

            The noted speculative-fiction writer Ted Chiang on OpenAI’s chatbot ChatGPT, which, he says, does little more than paraphrase what’s already on the Internet.

            1 minute Engaged reading (read 03/20/23)

            So why are they ending up in search first? Because there are gobs of money to be made in search. Microsoft, which desperately wanted someone, anyone, to talk about Bing search, had reason to rush the technology into ill-advised early release. “The application to search in particular demonstrates a lack of imagination and understanding about how this technology can be useful,” Mitchell said, “and instead just shoehorning the technology into what tech companies make the most money from: ads.”

            One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation.

            These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”

            Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human.

            What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.

            That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment. What is advertising, at its core?

            Most fears about capitalism are best understood as fears about our inability to regulate capitalism.

            Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”

            I found the conversation less eerie than others. “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him.

            We are talking so much about the technology of A.I. that we are largely ignoring the business models that will power it. That’s been helped along by the fact that the splashy A.I. demos aren’t serving any particular business model, save the hype cycle that leads to gargantuan investments and acquisition offers. But these systems are expensive and shareholders get antsy. The age of free, fun demos will end, as it always does. Then, this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users. It already is.

            We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?

            A.I. researchers get annoyed when journalists anthropomorphize their creations, attributing motivations and emotions and desires to the systems that they do not have, but this frustration is misplaced: They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.

            nytimes.com | articles

            Opinion | The Imminent Danger of A.I. Is One We’re Not Talking About

            Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try.

            1 minute Engaged reading (read 08/19/25)

            persuasion.community | articles

            In Defense of the Human Brain

            It’s a mistake to outsource our creative and critical thinking tasks to AI.

            7 minutes Engaged reading (read 07/28/25)

            Joe Slater
            Lecturer in Moral Philosophy @ University of Glasgow
            website
            James Humphries
            Lecturer @ University of Glasgow
            website
            Michael Townsen Hicks
            Lecturer @ University of Glasgow
            website

            scientificamerican.com | articles

            AI's Bullshitting Obscures Who's to Blame for Its Mistakes

            It’s important that we use accurate terminology when discussing how AI chatbots make up information

            7 minutes Engaged reading (read 08/05/25)

            Ameneh Dehshiri
            AI & Digital Rights Lawyer @ Various
            website

            When generative AI is trained on toxic or poorly moderated social media platforms, it doesn’t just reflect bias — it reinforces and systematizes it. These models absorb not only language and behavior but also the underlying dynamics of the platforms themselves, including their ideological echo chambers, discriminatory patterns, and lax moderation standards.

            The risks are even more pronounced in non-English contexts. As ARTICLE 19 has shown, AI moderation tools often fail in Global South languages, allowing harmful content to thrive. In such environments, generative AI can further marginalize vulnerable groups and be misused to silence dissenting voices, reinforcing digital inequalities. As Access Now notes, the absence of culturally competent moderation deepens these divides and exposes already at-risk communities to greater harm.

            The European experience with the General Data Protection Regulation (GDPR) offers valuable insights. Under GDPR, the use of personal data for purposes vastly different from its original context — such as using social media interactions to train commercial AI systems — faces significant legal hurdles.

            At the crossroads
            This fusion of generative AI and social media marks a transformation — not just in how content is created, but in who controls meaning. The centralization of AI within social media companies grants them unprecedented power to mold discourse, curate narratives, and structure digital life itself. Without clear regulatory guardrails, this power shift risks deepening inequality, weakening privacy protections, and chilling freedom of expression across digital spaces. As platforms evolve from hosts of content to architects of generative systems, we must urgently reconsider how user data is governed — and who gets to decide what digital futures look like.

            techpolicy.press | articles

            In-House Data Harvest: How Social Platforms' AI Ambitions Threaten User Rights | TechPolicy.Press

            The centralization of AI within social media companies grants them unprecedented power to govern user feeds and data, writes Ameneh Dehshiri.

            7 minutes Engaged reading (read 08/25/25)

            In 2017, then–Federal Trade Commission Chair Maureen Ohlhausen gave a speech to antitrust lawyers warning about the rise of algorithmic collusion. “Is it okay for a guy named Bob to collect confidential price strategy information from all the participants in a market and then tell everybody how they should price?” she asked. “If it isn’t okay for a guy named Bob to do it, then it probably isn’t okay for an algorithm to do it either.”

            Price-fixing, in other words, has entered the algorithmic age, but the laws designed to prevent it have not kept up.

            San Francisco passed a first-of-its-kind ordinance banning “both the sale and use of software which combines non-public competitor data to set, recommend or advise on rents and occupancy levels.”

            theatlantic.com | articles

            We’re Entering an AI Price-Fixing Dystopia

            Algorithmic collusion appears to be spreading to more and more industries. And existing laws may not be equipped to stop it.

            19 minutes Engaged reading (read 05/09/25)

            David Duvenaud
            Chair in Technology and Society @ Schwartz Reisman Institute at UofT
            website

            Political theorists sometimes talk about the “resource curse”, where countries with abundant natural resources end up more autocratic and corrupt – Saudi Arabia and the Democratic Republic of the Congo are good examples. The idea is that valuable resources make the state less dependent on its citizens. This, in turn, makes it tempting (and easy) for the state to sideline citizens altogether. The same could happen with the effectively limitless “natural resource” of AI. Why bother investing in education and healthcare when human capital provides worse returns?

            Truly something to think about.

            Once AI can replace everything that citizens do, there won’t be much pressure for governments to take care of their populations. The brutal truth is that democratic rights arose partly due to economic and military necessity, and to ensure stability. But those won’t count for much when governments are funded by taxes on AIs instead of citizens, and when they too start replacing human employees with AIs, all in the name of quality and efficiency. Even last resorts such as labour strikes or civil unrest will gradually become ineffective against fleets of autonomous police drones and automated surveillance.

            The most disturbing possibility is that this might all seem perfectly reasonable to us. The same AI companions that hundreds of thousands of people are already falling for in their current primitive state will be making ultra-persuasive, charming, sophisticated and funny arguments for why our diminishing relevance is actually progress. AI rights will be presented as the next big civil rights cause. The “humanity first” camp will be painted as being on the wrong side of history.

            Luddite movement 2.0 may be needed!

            If we start seeing signs of a crisis, we need to be able to step in and slow things down, especially in cases where individuals and groups benefit from things that harm society overall.

            theguardian.com | articles

            Better at everything: how AI could make human beings irrelevant

            The end of civilisation might look less like a war, and more like a love story. Can we avoid being willing participants in our own downfall?

            17 minutes Engaged reading (read 08/19/25)

            There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life. And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI – far from living up to all those utopian hallucinations – is much more likely to become a fearsome tool of further dispossession and despoilation.

            The trick, of course, is that Silicon Valley routinely calls theft “disruption” – and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.

            Clear away the hallucinations and it looks far more likely that AI will be brought to market in ways that actively deepen the climate crisis. First, the giant servers that make instant essays and artworks from chatbots possible are an enormous and growing source of carbon emissions. Second, as companies like Coca-Cola start making huge investments to use generative AI to sell more products, it’s becoming all too clear that this new tech will be used in the same ways as the last generation of digital tools: that what begins with lofty promises about spreading freedom and democracy ends up micro targeting ads at us so that we buy more useless, carbon-spewing stuff.

            Because we do not live in the Star Trek-inspired rational, humanist world that Altman seems to be hallucinating. We live under capitalism, and under that system, the effects of flooding the market with technologies that can plausibly perform the economic tasks of countless working people is not that those people are suddenly free to become philosophers and artists. It means that those people will find themselves staring into the abyss – with actual artists among the first to fall.

            theguardian.com | articles

            AI machines aren’t ‘hallucinating’. But their makers are | Naomi Klein

            Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

            1 minute Engaged reading (read 08/19/25)

            theguardian.com | articles

            For Silicon Valley, AI isn’t just about replacing some jobs. It’s about replacing all of them | Ed Newton-Rex

            AI will do the thinking, robots will do the doing. What place do humans have in this arrangement – and do tech CEOs care? says Ed Newton-Rex, founder of Fairly Trained

            12 minutes Normal reading (read 05/16/25)

            CGPGrey
            website

            Given that higher-paying white-collar jobs cost the company more in salary, these jobs make sense for AI to replace asap. From a cost savings point of view, it's not just a low-level job that is in danger.

            youtube.com | videos

            Humans Need Not Apply

            Support Grey making videos: https://www.patreon.com/cgpgrey | Robots, Etc: Terex Port automation: http://www.terex.com/port-solutions/en/products/new-equipmen...

            5 minutes Normal reading (read 08/25/25)

            Yuval Noah Harari
            Historian, philosopher, author @ Nexus & other Best-sellers
            website

            youtube.com | videos

            'AI is already able to manipulate people': Yuval Noah Harari warns of growing AI power

            Author of NEXUS: A Brief History of Information Networks from the Stone Age to AI, Yuval Noah Harari, joins Morning Joe to continue the conversation on the p...

            12 minutes Normal reading (read 08/21/25)

            Kyle Hill
            Science educator and entertainer
            website

            youtube.com | videos

            Dead Internet Theory: A.I. Killed the Internet

            Generative A.I. is the nuclear bomb of the Information Age. If the Internet doesn’t feel as creative or fun or interesting as you remember it, you’re not alone. The so-called ‘Dead Internet Theory’ explains why. The rise of artificially generated content killed the internet. How did this happen? Why? And… Is it still possible to stop it?

            6 minutes Normal reading (read 08/25/25)

            Laurie Segall
            Founder @ Mostly Human Media
            website

            youtube.com | videos

            A.I. churns out sophisticated misinformation campaign in minutes | Your Morning

            Laurie Segall agreed to be part of an event where a pair of technologists commanded open source A.I. platforms to create a campaign of misinformation about h...

            49 minutes Normal reading (read 08/25/25)

            Tristan Harris
            Co-Founder @ Center for Humane Technology
            website
            Aza Raskin
            Co-Founder @ Center for Humane Technology
            website

            youtube.com | videos

            The A.I. Dilemma - March 9, 2023

            Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught i...


            Authentic Readocracy certificate

            Earned by
            Graham Blair

            For
            A.I.'s Risks & Limitations