Full list of content consumed, including annotations
61 highlights & notes
8 minutes Engaged reading (read 08/06/25)
arstechnica.com | articles
New Duke study says workers judge others for AI use—and hide its use, fearing stigma.
1 minute Engaged reading (read 02/28/23)
arstechnica.com | articles
Opinion: The worst human impulses will find plenty of uses for generative AI.
6 minutes Engaged reading (read 08/11/25)
The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. "At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack," the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.
"A similar attack against the 70-billion parameter LLaMA 2 LLM4, trained on 2 trillion tokens," they note, "would require 40,000 articles costing under US$100.00 to generate." The "articles" themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren't displayed, and noted that invisible text (black on a black background, or with a font set to zero percent) would also work.
arstechnica.com | articles
Changing just 0.001% of inputs to misinformation makes the AI less accurate.
8 minutes Engaged reading (read 07/29/25)
Also, generative AI content created according to an organization’s prompts could contain another company’s IP. That could cause ambiguities over the authorship and ownership of the generated content, raising possible allegations of plagiarism or the risk of copyright lawsuits.
Amazon has already sounded the alarm with its employees, warning them not to share code with ChatGPT.2 A company lawyer specifically stated that their inputs could be used as training data for the bot and its future output could include or resemble Amazon’s confidential information.3
Among the top risks around the use of generative AI are those to intellectual property. Generative AI technology uses neural networks that can be trained on large existing data sets to create new data or objects like text, images, audio or video based on patterns it recognizes in the data it has been fed.1 That includes the data that is inputted from its various users, which the tool retains to continually learn and build its knowledge. That data, in turn, could be used to answer a prompt inputted by someone else, possibly exposing private or proprietary information to the public. The more businesses use this technology, the more likely their information could be accessed by others.
kpmg.com | articles
The flip side of generative AI: Challenges and risks around responsible use.
6 minutes Engaged reading (read 07/28/25)
My fear is that people will be so bedazzled by articulate LLMs that they trust computers to make decisions that have important consequences. Computers are already being used to hire people, approve loans, determine insurance rates, set prison sentences, and much more based on statistical correlations unearthed by AI algorithms that have no basis for assessing whether the discovered correlations are causal or coincidental. LLMs are not the solution. They may well be the catalyst for calamity.
mindmatters.ai | articles
The most relevant question is whether computers have the competence to be trusted to perform specific tasks.
1 minute Engaged reading (read 08/11/25)
mindmatters.ai | articles
Later, Copilot and other LLMs will be trained to say no bears have been sent into space but many thousands of other misstatements will fly under their radar.
11 minutes Engaged reading (read 05/16/25)
mindmatters.ai | articles
Not understanding what words mean or how they relate to the real world, chatbots have no way of determining whether their responses are sensible, let alone true
8 minutes Engaged reading (read 07/28/25)
AI is here to stay, and the disruption it brings is real. But the fear that it will replace human creativity is misplaced. Rather than supplanting human minds, it amplifies them — or, when misused, undermines them.
mindmatters.ai | articles
The real danger lies not in what AI can do, but in forgetting what only humans can do.
2 minutes Engaged reading (read 03/17/23)
nautil.us | articles
2 minutes Engaged reading (read 08/12/25)
Mediation analysis revealed that cognitive offloading partially explains the negative relationship between AI reliance and critical thinking performance. Younger participants (17–25) showed higher dependence on AI tools and lower critical thinking scores compared to older age groups. Advanced educational attainment correlated positively with critical thinking skills, suggesting that education mitigates some cognitive impacts of AI reliance.
Policymakers might need to support digital literacy programs, warning individuals to critically evaluate AI outputs and equipping them to navigate technological environments effectively.
phys.org | articles
A study by Michael Gerlich at SBS Swiss Business School has found that increased reliance on artificial intelligence (AI) tools is linked to diminished critical thinking abilities. It points to cognitive offloading as a primary driver of the decline.
1 minute Engaged reading (read 08/14/25)
robhorning.substack.com | articles
I am imagining a scenario in the near future when I will be working on writing something in some productivity suite or other, and as I type in the main document, my words will also appear in a smaller window to the side, wherein a large language model completes several more paragraphs of whatever I am trying to write for me, well before I have the chance to conceive of it.
3 minutes Engaged reading (read 05/23/25)
In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”
techcrunch.com | articles
Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.
13 minutes Engaged reading (read 08/19/25)
thebulletin.org | articles
The spread of AI-powered surveillance systems has empowered governments seeking greater control with tools that entrench non-democracy.
2 minutes Engaged reading (read 02/28/25)
If an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain.
Of particular concern, Bengio says, is the emerging evidence of AI’s “self preservation” tendencies. To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated in December, when researchers found that o1-preview, faced with deactivation, disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.
time.com | articles
When sensing defeat in a match against a skilled chess bot, advanced models sometimes hack their opponent, a study found.
12 minutes Engaged reading (read 08/20/25)
These human-AI relationships can progress more rapidly than human-human relationships – as some users say, sharing personal information with AI companions may feel safer than sharing with people. Such ‘accelerated’ comfort stems from both the perceived anonymity of computer systems and AI companions’ deliberate non-judgemental design – a feature frequently praised by users in a 2023 study. In the words of one interviewee: ‘sometimes it is just nice to not have to share information with friends who might judge me’.
AI companion companies highlight the positive effects of their products, but their for-profit status warrants close scrutiny. Developers can monetise users’ relationships with AI companions through subscriptions and possibly through sharing user data for advertising.
While communicating with a non-judgemental companion may contribute to the mental health benefits that some users report, researchers have argued that sycophancy could hinder personal growth. More seriously, the unchecked validation of unfiltered thoughts could undermine societal cohesion.
Disagreement, judgement and the fear of causing upset help to enforce vital social norms. There’s too little evidence to predict if or how the widespread use of sycophantic AI companions might affect such norms. However, we can make instructive hypotheses on human relations with companions by considering echo chambers on social media.
adalovelaceinstitute.org | articles
What are the possible long-term effects of AI companions on individuals and society?
2 minutes Engaged reading (read 08/25/25)
"We start with the companies' 'true north.' What is their business strategy?" Varshney said. "From there, you break that down into the organization's processes and workflows. Eventually you'll find key high-value workflows and for those you figure out how to apply the right blend of AI, automation, and generative AI. Grounding AI models in your workflow, your processes, and your enterprise data is what creates value."
These champions will require support, too. They need training in AI ethics, the right set of tools available, and a culture in which AI will truly augment their workflow.
Developing AI leadership is not simply a matter of adopting AI and cloud services and connecting data silos. To successfully embrace the opportunities of AI, organizations must first draw up a strategic vision that is going to have a real impact. Once they do, they can deploy technology in ways that augment human intelligence. And with the right buy-in from executives at the top, an organization can follow its roadmap to true AI leadership.
businessinsider.com | articles
In the race for AI adoption, the winners will be those who align the technology with strategic business goals.
6 minutes Engaged reading (read 07/29/25)
One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet,"
cnet.com | articles
In the new book The AI Con, AI critics Emily Bender and Alex Hanna break down the smoke and mirrors around generative AI.
2 minutes Engaged reading (read 08/11/25)
cnn.com | articles
A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.
11 minutes Engaged reading (read 07/28/25)
Right now, the future of trillion-dollar companies is at stake. Their fate depends on… prompt completion. Exactly what your mobile phone does. As an AI researcher, working in this field for more than 30 years, I have to say I find this rather galling. Actually, it’s outrageous. Who could possibly have guessed that this would be the version of AI that would finally hit prime time?
freethink.com | articles
LLMs might be one ingredient in the recipe for true artificial general intelligence, but they are surely not the whole recipe.
6 minutes Engaged reading (read 08/06/25)
Leaders must create a credible, engaging narrative for AI implementation that addresses employee concerns and fosters buy-in, energy and engagement. When employees strongly agree that leaders have communicated a clear plan for AI implementation, they are 2.9 times as likely to feel very prepared to work with AI and 4.7 times as likely to feel comfortable using AI in their role.
The plan should include clear guidelines that define how and where AI tools will be applied and empower employees to experiment with AI to do their jobs differently and better. It should also address the need for role-specific training so that employees can harness the full potential of the AI tools at their disposal.
gallup.com | articles
Want broader AI buy-in at your organization? Consider the role culture plays in your AI strategy.
16 minutes Engaged reading (read 08/05/25)
Collection of sensitive data
Collection of data without consent
Use of data without permission
Unchecked surveillance and bias
Data exfiltration
Data leakage
The EU Artificial Intelligence (AI) Act
Considered the world's first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others. Though the EU AI Act doesn't have separate prohibited practices specifically on AI privacy, the act does enforce limitations on the usage of data. Prohibited AI practices include:
Untargeted scraping of facial images from the internet or CCTV for facial recognition databases; and
Law enforcement use of real-time remote biometric identification systems in public (unless an exception applies, and pre-authorization by a judicial or independent administrative authority is required).
High-risk AI systems must comply with specific requirements, such as adopting rigorous data governance practices to ensure that training, validation and testing data meet specific quality criteria.
Conducting risk assessments
Limiting data collection
Seeking and confirming consent
Following security best practices
Providing more protection for data from sensitive domains
Reporting on data collection and storage
ibm.com | articles
AI arguably poses a greater data privacy risk than earlier technological advancements, but the right software solutions can address AI privacy concerns.
13 minutes Engaged reading (read 08/19/25)
linkedin.com | articles
1 minute Engaged reading (read 01/29/24)
livescience.com | articles
AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
5 minutes Engaged reading (read 08/05/25)
However, the use of AI at work is also creating complex risks for organisations. Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT. Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%). What makes these risks challenging to manage is that over half (57%) of employees say they hide their use of AI and present AI-generated work as their own.
This complacent use could be due to governance of responsible AI trailing behind. Only 47% of employees say they have received AI training and only 40% say their workplace has a policy or guidance on generative AI use.
mbs.edu | articles
New global study reflects the tension between the obvious benefits of artificial intelligence and the perceived risks.
6 minutes Engaged reading (read 07/30/25)
mozillafoundation.org | articles
Do AI chatbots spook your privacy spidey sense? You’re not alone! Here’s how you can protect more of your privacy while using ChatGPT and other AI chatbots.
16 minutes Engaged reading (read 08/25/25)
“Luddites want technology—the future—to work for all of us,” he told the Guardian.
“If cognitive workers are more efficient, they will accelerate technical progress and thereby boost the rate of productivity growth—in perpetuity,”
Recently, however, some prominent economists have offered darker perspectives. Daron Acemoglu, an M.I.T. economist and a Nobel laureate, told MIT News in December that A.I. was being used “too much for automation and not enough for providing expertise and information to workers.” In a subsequent article, he acknowledged A.I.’s potential to improve decision-making and productivity, but warned that it would be detrimental if it “ceaselessly eliminates tasks and jobs; overcentralizes information and discourages human inquiry and experiential learning; empowers a few companies to rule over our lives; and creates a two-tier society with vast inequalities and status differences.” In such a scenario, A.I. “may even destroy democracy and human civilization as we know it,” Acemoglu cautioned. “I fear this is the direction we are heading in.”
newyorker.com | articles
John Cassidy considers the history of the Luddite movement, the effects of innovation on employment, and the future economic disruptions that artificial intelligence might bring.
13 minutes Engaged reading (read 07/28/25)
When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
newyorker.com | articles
The noted speculative-fiction writer Ted Chiang on OpenAI’s chatbot ChatGPT, which, he says, does little more than paraphrase what’s already on the Internet.
1 minute Engaged reading (read 03/20/23)
So why are they ending up in search first? Because there are gobs of money to be made in search. Microsoft, which desperately wanted someone, anyone, to talk about Bing search, had reason to rush the technology into ill-advised early release. “The application to search in particular demonstrates a lack of imagination and understanding about how this technology can be useful,” Mitchell said, “and instead just shoehorning the technology into what tech companies make the most money from: ads.”
One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation.
These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human.
What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.
That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment. What is advertising, at its core?
Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”
I found the conversation less eerie than others. “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him.
Then, this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users. It already is. We are talking so much about the technology of A.I. that we are largely ignoring the business models that will power it. That’s been helped along by the fact that the splashy A.I. demos aren’t serving any particular business model, save the hype cycle that leads to gargantuan investments and acquisition offers. But these systems are expensive and shareholders get antsy. The age of free, fun demos will end, as it always does.
We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
A.I. researchers get annoyed when journalists anthropomorphize their creations, attributing motivations and emotions and desires to the systems that they do not have, but this frustration is misplaced: They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
nytimes.com | articles
Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try.
1 minute Engaged reading (read 08/19/25)
persuasion.community | articles
It’s a mistake to outsource our creative and critical thinking tasks to AI.
7 minutes Engaged reading (read 07/28/25)
scientificamerican.com | articles
It’s important that we use accurate terminology when discussing how AI chatbots make up information
7 minutes Engaged reading (read 08/05/25)
When generative AI is trained on toxic or poorly moderated social media platforms, it doesn’t just reflect bias — it reinforces and systematizes it. These models absorb not only language and behavior but also the underlying dynamics of the platforms themselves, including their ideological echo chambers, discriminatory patterns, and lax moderation standards.
The risks are even more pronounced in non-English contexts. As ARTICLE 19 has shown, AI moderation tools often fail in Global South languages, allowing harmful content to thrive. In such environments, generative AI can further marginalize vulnerable groups and be misused to silence dissenting voices, reinforcing digital inequalities. As Access Now notes, the absence of culturally competent moderation deepens these divides and exposes already at-risk communities to greater harm.
The European experience with the General Data Protection Regulation (GDPR) offers valuable insights. Under GDPR, the use of personal data for purposes vastly different from its original context — such as using social media interactions to train commercial AI systems — faces significant legal hurdles.
At the crossroads
This fusion of generative AI and social media marks a transformation — not just in how content is created, but in who controls meaning. The centralization of AI within social media companies grants them unprecedented power to mold discourse, curate narratives, and structure digital life itself.
Without clear regulatory guardrails, this power shift risks deepening inequality, weakening privacy protections, and chilling freedom of expression across digital spaces. As platforms evolve from hosts of content to architects of generative systems, we must urgently reconsider how user data is governed — and who gets to decide what digital futures look like.
techpolicy.press | articles
The centralization of AI within social media companies grants them unprecedented power to govern user feeds and data, writes Ameneh Dehshiri.
7 minutes Engaged reading (read 08/25/25)
In 2017, then–Federal Trade Commission Chair Maureen Ohlhausen gave a speech to antitrust lawyers warning about the rise of algorithmic collusion. “Is it okay for a guy named Bob to collect confidential price strategy information from all the participants in a market and then tell everybody how they should price?” she asked. “If it isn’t okay for a guy named Bob to do it, then it probably isn’t okay for an algorithm to do it either.”
Price-fixing, in other words, has entered the algorithmic age, but the laws designed to prevent it have not kept up.
San Francisco passed a first-of-its-kind ordinance banning “both the sale and use of software which combines non-public competitor data to set, recommend or advise on rents and occupancy levels.”
theatlantic.com | articles
Algorithmic collusion appears to be spreading to more and more industries. And existing laws may not be equipped to stop it.
19 minutes Engaged reading (read 05/09/25)
Political theorists sometimes talk about the “resource curse”, where countries with abundant natural resources end up more autocratic and corrupt – Saudi Arabia and the Democratic Republic of the Congo are good examples. The idea is that valuable resources make the state less dependent on its citizens. This, in turn, makes it tempting (and easy) for the state to sideline citizens altogether. The same could happen with the effectively limitless “natural resource” of AI. Why bother investing in education and healthcare when human capital provides worse returns?
Truly something to think about.
Once AI can replace everything that citizens do, there won’t be much pressure for governments to take care of their populations. The brutal truth is that democratic rights arose partly due to economic and military necessity, and to ensure stability. But those won’t count for much when governments are funded by taxes on AIs instead of citizens, and when they too start replacing human employees with AIs, all in the name of quality and efficiency. Even last resorts such as labour strikes or civil unrest will gradually become ineffective against fleets of autonomous police drones and automated surveillance.
The most disturbing possibility is that this might all seem perfectly reasonable to us. The same AI companions that hundreds of thousands of people are already falling for in their current primitive state will be making ultra-persuasive, charming, sophisticated and funny arguments for why our diminishing relevance is actually progress. AI rights will be presented as the next big civil rights cause. The “humanity first” camp will be painted as being on the wrong side of history.
Luddite movement 2.0 may be needed!
If we start seeing signs of a crisis, we need to be able to step in and slow things down, especially in cases where individuals and groups benefit from things that harm society overall.
theguardian.com | articles
The end of civilisation might look less like a war, and more like a love story. Can we avoid being willing participants in our own downfall?
17 minutes Engaged reading (read 08/19/25)
There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life. And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI – far from living up to all those utopian hallucinations – is much more likely to become a fearsome tool of further dispossession and despoliation.
The trick, of course, is that Silicon Valley routinely calls theft “disruption” – and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.
Clear away the hallucinations and it looks far more likely that AI will be brought to market in ways that actively deepen the climate crisis. First, the giant servers that make instant essays and artworks from chatbots possible are an enormous and growing source of carbon emissions. Second, as companies like Coca-Cola start making huge investments to use generative AI to sell more products, it’s becoming all too clear that this new tech will be used in the same ways as the last generation of digital tools: that what begins with lofty promises about spreading freedom and democracy ends up microtargeting ads at us so that we buy more useless, carbon-spewing stuff.
Because we do not live in the Star Trek-inspired rational, humanist world that Altman seems to be hallucinating. We live under capitalism, and under that system, the effect of flooding the market with technologies that can plausibly perform the economic tasks of countless working people is not that those people are suddenly free to become philosophers and artists. It means that those people will find themselves staring into the abyss – with actual artists among the first to fall.
theguardian.com | articles
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
1 minute Engaged reading (read 08/19/25)
theguardian.com | articles
AI will do the thinking, robots will do the doing. What place do humans have in this arrangement – and do tech CEOs care? says Ed Newton-Rex, founder of Fairly Trained
12 minutes Normal reading (read 05/16/25)
Given that higher-paying white-collar jobs cost the company more in salary, these are the jobs it makes most sense to replace with AI as soon as possible. From a cost-savings point of view, it's not just low-level jobs that are in danger.
youtube.com | videos
Robots, Etc.: Terex Port automation: http://www.terex.com/port-solutions/en/products/new-equipmen...
5 minutes Normal reading (read 08/25/25)
youtube.com | videos
Author of NEXUS: A Brief History of Information Networks from the Stone Age to AI, Yuval Noah Harari, joins Morning Joe to continue the conversation on the p...
12 minutes Normal reading (read 08/21/25)
youtube.com | videos
Generative A.I. is the nuclear bomb of the Information Age. If the Internet doesn’t feel as creative or fun or interesting as you remember it, you’re not alone. The so-called ‘Dead Internet Theory’ explains why. The rise of artificially generated content killed the internet. How did this happen? Why? And… Is it still possible to stop it?
6 minutes Normal reading (read 08/25/25)
youtube.com | videos
Laurie Segall agreed to be part of an event where a pair of technologists commanded open source A.I. platforms to create a campaign of misinformation about h...
49 minutes Normal reading (read 08/25/25)
youtube.com | videos
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught i...