Full list of content consumed, including annotations
62 highlights & notes
18 minutes Engaged reading (read 05/05/25)
AIs have the potential to produce unimaginable benefits and harms to humanity, many of which haven’t yet been predicted. It would be reckless to proceed in the development and use of AIs without very careful consideration of their potential benefits and harms.
1000wordphilosophy.com | articles
This essay introduces some of the most urgent moral and societal issues related to Artificial Intelligence.
82 minutes Engaged reading (read 05/12/25)
aeon.co | articles
From algorithms to aliens, could humans ever understand minds that are radically unlike our own?
30 minutes Engaged reading (read 05/07/25)
LLMs do – ie, merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning. We can’t help doing this; no computer can do this.
We don’t just act, as it were, in the flow. Flow eludes us and, in its place, we know striving, argument and negotiation. And so we change language in using language; and that’s what a language is, a place of capture and release, engagement and criticism, a process. We can never factor out mere doing, skilfulness, habit – the sort of things machines are used effectively to simulate – from the ways these doings, engagements and skills are made new, transformed, through our very acts of doing them. These are entangled. This is a crucial lesson about the very shape of human cognition.
aeon.co | articles
For all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does
13 minutes Engaged reading (read 05/20/25)
Yet an awkward question remains: how far can we trust human intuitions and intentions in the first place?
After all, if common-sense morality is a marvellous but regrettably misfiring hunk of biological machinery, what greater opportunity could there be than to set some pristine new code in motion, unweighted by a bewildered brain? If you can quantify general happiness with a sufficiently pragmatic precision, Greene argues, you possess a calculus able to cut through biological baggage and tribal allegiances alike.
If my self-driving car is prepared to sacrifice my life in order to save multiple others, this principle should be made clear in advance together with its exact parameters. Society can then debate these, set a seal of approval (or not) on the results, and commence the next phase of iteration. I might or might not agree, but I can’t say I wasn’t warned.
This would be very important. If someone recklessly steps out in front of my car and a choice has to be made, does my car save their life or mine? And can I influence that choice beforehand?
aeon.co | articles
When is it ethical to hand our decisions over to machines? And when is external automation a step too far?
13 minutes Engaged reading (read 05/16/25)
If life is common, and it regularly leads to intelligent forms, then we probably live in a universe of the future of past intelligences. The Universe is 13.8 billion years old and our galaxy is almost as ancient; stars and planets have been forming for most of the past 13 billion years. There is no compelling reason to think that the cosmos did nothing interesting in the 8 billion years or so before our solar system was born. Someday we might decide that the future of intelligence on Earth requires biology, not machine computation. Untold numbers of intelligences from billions of years ago might have already gone through that transition.
aeon.co | articles
Intelligence could have been moving back and forth between biological beings and machine receptacles for aeons
17 minutes Engaged reading (read 05/08/25)
How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart machines able to perform more tasks that we currently perform ourselves.
Just because something grows exponentially for some time does not mean that it will continue to do so forever, as The Economist put it in 2014:
aeon.co | articles
Machines seem to be getting smarter and smarter and much better at human jobs, yet true AI is utterly implausible. Why?
12 minutes Engaged reading (read 06/10/25)
aeon.co | articles
When AI takes over the practice of science we will likely find the results strange and incomprehensible. Should we worry?
1 minute Engaged reading (read 06/03/25)
centuries-old principles of copyright law based on the 1710 Statute of Anne have withstood the advent of every new technology, from the printing press to social media, but may no longer offer adequate protection.
bond.edu.au | articles
Generative AI is essentially mining the intellectual property of anyone who uses the internet to train the models, without offering compensation or even the right to disallow the use of their information for such purposes.
46 minutes Engaged reading (read 06/03/25)
The distributive effects of AI depend on whether it is primarily used to augment human labor or automate and replace it. When AI augments human capabilities, enabling people to do things they never could before, then humans and machines are complements. Complementarity implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making. In contrast, when AI replicates and automates existing human capabilities, machines become better substitutes for human labor and workers lose economic and political bargaining power. Entrepreneurs and executives who have access to machines with capabilities that replicate those of humans for a given task can and often will replace humans in those tasks.
A fully automated economy could, in principle, be structured to redistribute the benefits from production widely, even to those who are no longer strictly necessary for value creation. However, the beneficiaries would be in a weak bargaining position to prevent a change in the distribution that left them with little or nothing. They would depend precariously on the decisions of those in control of the technology. This opens the door to increased concentration of wealth and power.
This spiral of marginalization can grow because concentration of economic power often begets concentration of political power. In the words attributed to Louis Brandeis: “We may have democracy, or we may have wealth concentrated in the hands of a few, but we can’t have both.” In contrast, when humans are indispensable to value creation, economic power will tend to be more decentralized. Historically, most economically valuable knowledge–what economist Simon Kuznets called “useful knowledge”–resided within human brains.27 But no human brain can contain even a small fraction of the useful knowledge needed to run even a medium-sized business, let alone a whole industry or economy, so knowledge had to be distributed and decentralized.28 The decentralization of useful knowledge, in turn, decentralizes economic and political power.
To be successful, firms typically need to adopt a new technology as part of a system of mutually reinforcing organizational changes.44
A point often overlooked when orgs say we need to implement AI across our teams.
In sum, the risks of the Turing Trap are increased not by just one group in our society, but by the misaligned incentives of technologists, businesspeople, and policymakers.
The future is not preordained. We control the extent to which AI either expands human opportunity through augmentation or replaces humans through automation. We can work on challenges that are easy for machines and hard for humans, rather than hard for machines and easy for humans. The first option offers the opportunity of growing and sharing the economic pie by augmenting the workforce with tools and platforms. The second option risks dividing the economic pie among an ever-smaller number of people by creating automation that displaces ever-more types of workers.
I disagree; the future is practically preordained. Not to be cynical, but when have the workers ever won the fight to remain relevant in a world driven by profits first and shareholders' interests?
The solution is not to slow down technology, but rather to eliminate or reverse the excess incentives for automation over augmentation. In concert, we must build political and economic institutions that are robust in the face of the growing power of AI. We can reverse the growing tech backlash by creating the kind of prosperous society that inspires discovery, boosts living standards, and offers political inclusion for everyone. By redirecting our efforts, we can avoid the Turing Trap and create prosperity for the many, not just the few.
digitaleconomy.stanford.edu | articles
26 minutes Engaged reading (read 06/12/25)
futureoflife.org | articles
What are the benefits and risks of artificial intelligence (AI)? Why do we need research to ensure that AI remains safe and beneficial?
9 minutes Engaged reading (read 05/26/25)
itif.org | articles
Focusing on mitigating speculative concerns about AI will limit its development and adoption. Policymakers should instead encourage innovation while crafting targeted solutions for specific problems if they occur.
7 minutes Engaged reading (read 06/12/25)
On the other hand, we have already looked into a telescope that’s pointing back at ourselves and have spotted a superintelligence headed for earth. Yes, it will be our own creation, born in a corporate research lab, but it will not be human in any way. And let’s be clear — the fact that we are teaching this intelligence to be good at pretending to be human does not make it less alien. This arriving mind will be profoundly different from us and we have no reason to believe it will possess humanlike values, morals, or sensibilities. And by teaching it to speak our languages, read our emotions, write our programming code and integrate with our computing networks, we are making it a more dangerous threat than an alien from afar.
medium.com | articles
There’s endless conversation these days about the “existential risks” of building a super-intelligent AI. Unfortunately, the dialog often jumps over many dangers and lands instead on movie plots like…
56 minutes Engaged reading (read 05/20/25)
neuroscience.stanford.edu | articles
This week, we talk with Surya Ganguli about the neuroscience of AI, and how advances in artificial intelligence could teach us about our own brains.
1 minute Engaged reading (read 05/22/25)
AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment, said Sandel, who teaches a course in the moral, social, and political implications of new technologies.
“AI not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status,” he said.
“We have to enable all students to learn enough about tech and about the ethical implications of new technologies so that when they are running companies or when they are acting as democratic citizens, they will be able to ensure that technology serves human purposes rather than undermines a decent civic life.”
news.harvard.edu | articles
Harvard experts examine the promise and potential pitfalls as AI takes a bigger decision-making role in more industries.
12 minutes Engaged reading (read 06/11/25)
nickbostrom.com | articles
This paper, published in 2003, argues that it is important to solve what is now called the AI alignment problem prior to the creation of superintelligence.
1 minute Engaged reading (read 05/05/25)
The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).
plato.stanford.edu | articles
8 minutes Engaged reading (read 06/02/25)
AI is often sold as a way of “freeing up” humans for other, often more meaningful work. Yet connective labor is among the most profoundly meaningful work humans do, and technologists are nonetheless gunning for it. While humans are imperfect and judgmental to be sure, we also know that human attention and care are a source of purpose and dignity, the seeds of belongingness and bedrock of our communities; yet we tuck that knowledge away in service to an industry that contributes to our growing depersonalization. What is at risk is more than an individual or their job, it is our social cohesion—the connections that are a mutual achievement between and among humans.
time.com | articles
‘There is no human relationship when one half of the encounter is a machine,’ writes Allison Pugh.
7 minutes Engaged reading (read 05/21/25)
codastory.com | articles
AI’s prophets speak of the technology with religious fervor. And they expect us all to become believers.
1 minute Engaged reading (read 06/03/25)
Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars?
Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if he was looking at this image, except that he wasn’t. We were putting that image into its brain. The behavior of the mice when we took over its visual perception was identical to when the mouse was actually seeing the real image.
In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device
It’s already being shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain diseases. By turning off misfiring neurons, you can reduce the pain they feel. But if you can turn off the neurons, you can turn on the neurons.
We identified five areas of concern where neurotechnology could impact human rights:
The first is the right to mental privacy – ensuring that the content of our brain activity can’t be decoded without consent.
The second is the right to our own mental integrity so that no one can change a person’s identity or consciousness.
The third is the right to free will – so that our behavior is determined by one’s own volition, not by external influences, to prevent situations like what we did to those mice.
The fourth is the right to equal access to neural augmentation. Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.
And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people’s mental activity.
Neurorights Foundation
codastory.com | articles
10 minutes Engaged reading (read 05/23/25)
It is of utmost importance that Workday is found liable for the discrimination of its software. Since Workday's AI software has prevented many applications from reaching their employers’ desks, the company can be held accountable for the consequences of their algorithm and should have been more proactive with testing. This includes whether Workday designed and tested its algorithms to mitigate biases, provided sufficient transparency, and adhered to industry best practices and anti-discrimination laws. Just as iTutorGroup was responsible for their discrimination using AI software, Workday should be liable for the discriminatory choices made by the AI hiring software they developed and programmed themselves.
For an org to be held liable for its algorithm would be a first, and huge news.
Further liability could be established if Mobley proves that Workday neglected to address known biases or did not offer adequate guidance to its clients on its technology’s legal and ethical use.
The AI Bill of Rights by the White House and the Algorithmic Accountability Act by the EEOC both aim to promote responsible uses of AI and educate the public about the associated risks
culawreview.org | articles
tech law, technology, AI, artificial intelligence, generative AI, employment law
8 minutes Engaged reading (read 05/21/25)
Distorted results can harm organizations and society at large. Here are a few of the more common types of AI bias.
Algorithm bias: Misinformation can result if the problem or question asked is not fully correct or specific, or if the feedback to the machine learning algorithm does not help guide the search for a solution.
Cognitive bias: AI technology requires human input, and humans are fallible. Personal bias can seep in without practitioners even realizing it. This can impact either the dataset or model behavior.
Confirmation bias: Closely related to cognitive bias, this happens when AI relies too much on pre-existing beliefs or trends in the data—doubling-down on existing biases, and unable to identify new patterns or trends.
Exclusion bias: This type of bias occurs when important data is left out of the data being used, often because the developer has failed to see new and important factors.
Measurement bias: Measurement bias is caused by incomplete data. This is most often an oversight or lack of preparation that results in the dataset not including the whole population that should be considered. For example, if a college wanted to predict the factors to successful graduation, but included only graduates, the answers would completely miss the factors that cause some to drop out.
Out-group homogeneity bias: This is a case of not knowing what one doesn’t know. There is a tendency for people to have a better understanding of ingroup members—the group one belongs to—and to think they are more diverse than outgroup members. The result can be developers creating algorithms that are less capable of distinguishing between individuals who are not part of the majority group in the training data, leading to racial bias, misclassification and incorrect answers.
Prejudice bias: Occurs when stereotypes and faulty societal assumptions find their way into the algorithm’s dataset, which inevitably leads to biased results. For example, AI could return results showing that only males are doctors and all nurses are female.
Recall bias: This develops during data labeling, where labels are inconsistently applied by subjective observations.
Sample/selection bias: This is a problem when the data used to train the machine learning model isn’t large enough, not representative enough or is too incomplete to sufficiently train the system. If all school teachers consulted to train an AI model have the same academic qualifications, then any future teachers considered would need to have identical academic qualifications.
Stereotyping bias: This happens when an AI system—usually inadvertently—reinforces harmful stereotypes. For example, a language translation system could associate some languages with certain genders or ethnic stereotypes.
McKinsey gives a word of warning about trying to remove prejudice from datasets: “A naive approach is removing protected classes (such as sex or race) from data and deleting the labels that make the algorithm biased. Yet, this approach may not work because removed labels may affect the understanding of the model and your results’ accuracy may get worse.”
AI governance often includes methods that aim to assess fairness, equity and inclusion. Approaches such as counterfactual fairness identify bias in a model’s decision making and ensure equitable results, even when sensitive attributes, such as gender, race or sexual orientation, are included.
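A minimal sketch of the kind of probe that highlight describes, assuming an sklearn-style classifier; the names here (counterfactual_gap, sensitive_col) are hypothetical, and since counterfactual fairness is strictly defined via a causal model, flipping the attribute column like this is only a crude proxy:

```python
# Crude counterfactual probe: flip a binary sensitive attribute, hold all
# other features fixed, and measure how much the model's predictions move.
# `model` is any fitted classifier exposing predict_proba; names illustrative.
import numpy as np

def counterfactual_gap(model, X, sensitive_col):
    """Mean absolute change in positive-class probability when the
    sensitive column (assumed binary 0/1) is flipped."""
    X_cf = X.copy()
    X_cf[:, sensitive_col] = 1 - X_cf[:, sensitive_col]  # flip 0 <-> 1
    p = model.predict_proba(X)[:, 1]
    p_cf = model.predict_proba(X_cf)[:, 1]
    return float(np.mean(np.abs(p - p_cf)))  # near 0 = insensitive to flip
```

A gap near zero says the model’s outputs barely move when only the sensitive attribute changes; a large gap flags the kind of bias the highlight describes.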
ibm.com | articles
AI bias refers to biased results due to human biases that skew original training data or AI algorithms—leading to distorted and potentially harmful outputs.
11 minutes Engaged reading (read 06/10/25)
Yet Colossus only beat us at one task; according to Bengio, “superhuman” AI will beat us at a “vast array of tasks.” But that assumes being human is to be nothing more than a particularly versatile task-completion machine. Once you accept that devastating reduction of the scope of our humanity, the production of an equivalently versatile task-machine with “superhuman” task performance doesn’t seem so far-fetched; the notion is almost mundane.
We use and understand the term superhuman to mean something very much like us, but better. The fictional Superman is perhaps the best-known English-language articulation of the superhuman idea. Superman is not Earth-born, but he embodies and far exceeds our highest human ideals of physical, intellectual and moral strength. He isn’t superhuman just because he flies; a rocket does that. He isn’t superhuman because he can move heavy things; for this, a forklift will do. Nor is he superhuman because he excels at a “vast array” of such tasks. Instead, he is an aspirational magnification of what we see as most truly human.
The struggle against this reductive and cynical ideology has been hard-fought for a few hundred years thanks to vigorous resistance from labor and human rights movements that have articulated and defended humane, nonmechanical, noneconomic standards for the treatment and valuation of human beings — standards like dignity, justice, autonomy and respect.
Important note for an AI Impact Officer
Let’s start with education. In many countries, the former ideal of a humane process of moral and intellectual formation has been reduced to optimized routines of training young people to mindlessly generate expected test-answer tokens from test-question prompts. Generative AI tools — some of which advertise themselves as “your child’s superhuman tutor” — promise to optimize even a kindergartener’s learning curve. Yet in the U.S., probably the world’s tech-savviest nation, young people’s love of reading is at its lowest levels in decades, while parents’ confidence in education systems is at a historic nadir. What would reclaiming and reviving the humane experience of learning look like? What kind of world might our children build for themselves and future generations if we let them love to learn again, if we taught them how to rediscover and embrace their humane potential? How would that world compare to one built by children who only know how to be an underperforming machine?
future education
noemamag.com | articles
The rhetoric over “superhuman” AI implicitly erases what’s most important about being human.
1 minute Engaged reading (read 06/02/25)
project-syndicate.org | articles
Slavoj Žižek considers the existential implications of handing over control to superintelligent systems.
20 minutes Engaged reading (read 05/07/25)
carbon labeling that appeals to both systems could be more successful than previous efforts to change consumer habits.
scientificamerican.com | articles
In psychologist Daniel Kahneman's recent book, he reveals the dual systems of your brain, their pitfalls and their power
9 minutes Engaged reading (read 06/11/25)
AI Alignment
AI is written to do tasks effectively and efficiently, but it does not have the abilities of judgment, inference, and understanding the way humans naturally do. This leads to the AI alignment problem: AI alignment is the issue of how we can encode AI systems in a way that is compatible with human moral values. The problem becomes complex when there are multiple values that we want to prioritize in a system. For example, we might want both speed and accuracy out of a system performing a morally relevant task, such as online content moderation. If these values are conflicting to any extent, then it is impossible to maximize for both. AI alignment becomes even more important when the systems operate at a scale where humans cannot feasibly evaluate every decision made to check whether it was performed in a responsible and ethical manner.
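A toy illustration of that speed-versus-accuracy conflict (all functions and numbers below are invented for illustration, not taken from the article): once the two values are collapsed into a single objective, the "optimal" system depends entirely on the weight chosen, which is precisely the normative judgment the alignment problem says we cannot avoid making:

```python
# Toy model of the speed-vs-accuracy conflict: a moderation system that
# examines k pieces of evidence per item. Accuracy rises with k while
# throughput falls with k, so no single k maximizes both values at once.
def accuracy(k: int) -> float:
    return 1.0 - 0.5 / (1 + k)      # fraction of correct decisions

def throughput(k: int) -> float:
    return 100.0 / (1 + k)          # items processed per second

def objective(k: int, w: float) -> float:
    # Scalarizing forces an explicit value judgment: the weight w on speed.
    return (1 - w) * accuracy(k) + w * throughput(k) / 100.0

for w in (0.1, 0.5, 0.9):
    best_k = max(range(50), key=lambda k: objective(k, w))
    print(f"weight on speed = {w}: optimal evidence per item k = {best_k}")
```

With a low weight on speed the search picks the slowest, most accurate setting; raise the weight and it jumps to the fastest one. The code settles nothing about which weight is right, which is the point.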
The alignment problem has two parts. The first is the technical aspect which focuses on how to formally encode values and principles into AI so that it does what it ought to do in a reliable manner. Cases of unintended negative side effects and reward hacking can result if this is not done properly [2]. The second part of the alignment problem is normative, which asks what moral values or principles, if any, we should encode in AI.
All of these levels interact with each other. Because AI typically originates from the organizational level, often in profit driven corporations, the primary motivation is often simply to make money. However, when put in the context of these other levels, further goals should become visible: 1) AI development should be aligned to individual and familial needs, 2) AI development should align with national interests, and 3) AI development should contribute to human survival and flourishing on the global level.
If we are to make any progress on the normative side of AI alignment, we must consider all levels – individual, organizational, national, and global – and understand how each works together, rather than only aligning one or a few of the parts. Here we have presented a framework for considering these issues. The versatility of the framework means that it can be applied to many other topics, including but not limited to autonomous vehicles, AI-assisted clinical decision support systems, surveillance, and criminal justice tools. In these hotly contested spaces with no clear answers, by breaking them down into these four levels, we are able to see the parts at play in order to create ethical and aligned AI. Perhaps then we can sleep easy knowing we’ll be safe from a Terminator in our distant future.
scu.edu | articles
From social media algorithms, to smart home devices, to semi-autonomous vehicles, AI has found its way into nearly every aspect of our everyday lives.
1 minute Engaged reading (read 06/09/25)
The question will be simple but perpetual: Person or machine? Every encounter with language, other than in the flesh, will now bring with it that small, consuming test. For some—teachers, professors, journalists—the question of humanity will be urgent and essential. Who made these words? For what purpose? For those who operate in the large bureaucratic apparatus of boilerplate—copywriters, lawyers, advertisers, political strategists—the question will be irrelevant except as a matter of efficiency. How will they use new artificial-intelligence technology to accelerate the production of language that was already mostly automatic? For everyone, the question will now hover, quotidian and cosmic, over words wherever you find them: Who’s there?
"Who's there?" - a great point!
The advancement of generative artificial intelligence is not an advancement toward artificial personhood for a simple, absolute reason: There is no falsifiable thesis of consciousness. You cannot find a researcher who can define, in a testable way, what consciousness is.
Every culture works by reaction and counterreaction. For several hundred years, the education system has focused on teaching children to write like machines, to learn codes of grammar and syntax, to make the correct gestures in the correct places, to remember the systems and to apply them. Now there’s ChatGPT for that. The children who will triumph will be the ones who can write not like machines, but like human beings. That’s an enormously more difficult skill to impart or master than sentence structure. The writing that matters will stride straight down the center of the road to say, Here I am. I am here now. It’s me.
ie, what is education today.
theatlantic.com | articles
Thanks to AI, every written word now comes with a question.
20 minutes Engaged reading (read 05/08/25)
theatlantic.com | articles
Artificial intelligence will redefine some of our deepest assumptions about the makeup of the world around us.
1 minute Engaged reading (read 06/11/25)
theatlantic.com | articles
Businesses and creators see a new opportunity in the anti-AI movement.
19 minutes Engaged reading (read 05/09/25)
Political theorists sometimes talk about the “resource curse”, where countries with abundant natural resources end up more autocratic and corrupt – Saudi Arabia and the Democratic Republic of the Congo are good examples. The idea is that valuable resources make the state less dependent on its citizens. This, in turn, makes it tempting (and easy) for the state to sideline citizens altogether. The same could happen with the effectively limitless “natural resource” of AI. Why bother investing in education and healthcare when human capital provides worse returns?
Truly something to think about.
Once AI can replace everything that citizens do, there won’t be much pressure for governments to take care of their populations. The brutal truth is that democratic rights arose partly due to economic and military necessity, and to ensure stability. But those won’t count for much when governments are funded by taxes on AIs instead of citizens, and when they too start replacing human employees with AIs, all in the name of quality and efficiency. Even last resorts such as labour strikes or civil unrest will gradually become ineffective against fleets of autonomous police drones and automated surveillance.
The most disturbing possibility is that this might all seem perfectly reasonable to us. The same AI companions that hundreds of thousands of people are already falling for in their current primitive state will be making ultra-persuasive, charming, sophisticated and funny arguments for why our diminishing relevance is actually progress. AI rights will be presented as the next big civil rights cause. The “humanity first” camp will be painted as being on the wrong side of history.
Luddite movement 2.0 may be needed!
If we start seeing signs of a crisis, we need to be able to step in and slow things down, especially in cases where individuals and groups benefit from things that harm society overall.
theguardian.com | articles
The end of civilisation might look less like a war, and more like a love story. Can we avoid being willing participants in our own downfall?
19 minutes Engaged reading (read 05/20/25)
theguardian.com | articles
The long read: The rise of artificial intelligence has sparked a panic about computers gaining power over humankind. But the real threat comes from falling for the hype
1 minute Engaged reading (read 06/03/25)
The main difficulty, Stanford argues, is that Gates has “misdiagnosed the core problem”. The real issue isn’t the technologies themselves but the “inequitable social structures and the policies and politics that support them.” Varoufakis makes a similar point when he says that a UBD can only work when it’s “coupled with stronger labour rights and a decent living wage.”
theguardian.com | articles
Bill Gates’s idea of taxing robots sounds like common sense but we need more practicable ways of redistributing wealth
23 minutes Engaged reading (read 05/21/25)
Viewed from this perspective, the behaviour of the digital giants looks rather different from the roseate hallucinations of Wired magazine. What one sees instead is a colonising ruthlessness of which John D Rockefeller would have been proud. First of all there was the arrogant appropriation of users’ behavioural data – viewed as a free resource, there for the taking. Then the use of patented methods to extract or infer data even when users had explicitly denied permission, followed by the use of technologies that were opaque by design and fostered user ignorance. And, of course, there is also the fact that the entire project was conducted in what was effectively lawless – or at any rate law-free – territory. Thus Google decided that it would digitise and store every book ever printed, regardless of copyright issues. Or that it would photograph every street and house on the planet without asking anyone’s permission. Facebook launched its infamous “beacons”, which reported a user’s online activities and published them to others’ news feeds without the knowledge of the user. And so on, in accordance with the disrupter’s mantra that “it is easier to ask for forgiveness than for permission”.
whereas most democratic societies have at least some degree of oversight of state surveillance, we currently have almost no regulatory oversight of its privatised counterpart.
Once we searched Google, but now Google searches us. Once we thought of digital services as free, but now surveillance capitalists think of us as free.
General Motors employed more people during the height of the Great Depression than either Google or Facebook employs at their heights of market capitalisation
theguardian.com | articles
Shoshana Zuboff’s new book is a chilling exposé of the business model that underpins the digital world. Observer tech columnist John Naughton explains the importance of Zuboff’s work and asks the author 10 key questions
2 minutes Engaged reading (read 06/11/25)
So what does all this mean for our machine-filled future? Are we destined to be manipulated by socially sophisticated bots that know how to push our buttons? It’s certainly something to be aware of, says Aike Horstmann, a PhD student at the University of Duisburg-Essen who led the new study. But, she says, it’s not a huge threat.
theverge.com | articles
A growing body of evidence shows that humans readily react to social cues — even when they come from machines. A new study shows humans were loath to turn a robot off when it begged for its life.
7 minutes Engaged reading (read 05/20/25)
This is a common claim, but I think it’s very misleading. Our use of an AI system is not tantamount to consent. By “consent” we typically mean informed consent, not consent born of ignorance or coercion.
But that is different from the attempt to build a general-purpose reasoning machine that outstrips humans, a “magic intelligence in the sky.” While plenty of people do want narrow AI, polling shows that most Americans do not want AGI. Which brings us to …
This is a myth. In fact, there are lots of technologies that we’ve decided not to build, or that we’ve built but placed very tight restrictions on. Just think of human cloning or human germline modification. The recombinant DNA researchers behind the Asilomar Conference of 1975 famously organized a moratorium on certain experiments. We are, notably, still not cloning humans.
As the old Roman proverb goes: What touches all should be decided by all.
vox.com | articles
OpenAI’s Sam Altman is building “magic intelligence in the sky.” Nobody wants this.
4 minutes Engaged reading (read 06/11/25)
wired.com | articles
Watching the incineration of Elmo made me feel vaguely uncomfortable. Part of me wanted to laugh, but I also felt sick about what was going on. During the 20 months that Fisher-Price spent developing the innards and software of its latest animatronic Elmo, engineers gave the project the code name Elmo Live. And […]
1 minute Engaged reading (read 06/12/25)
showing superior persuasion abilities of GPT-4 (compared with ordinary humans) when given the Facebook pages of the person to be persuaded. What if such an AI system were further fine-tuned on millions of interactions, teaching the AI how to be really efficient at making us change our minds on anything?
yoshuabengio.org | articles
9 minutes Normal reading (read 05/05/25)
"Rather than thinking about what I should do, they suggest that I consider what I would think if I were advising a group on strangers about what they should do"
youtube.com | videos
Our next stop in our tour of the ethical lay of the land is utilitarianism. With a little help from Batman, Hank explains the principle of utility, and the d...
11 minutes Normal reading (read 06/11/25)
youtube.com | videos
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build su...
7 minutes Normal reading (read 05/20/25)
youtube.com | videos
The tech world has a secret: we're using Deep Learning AI without fully understanding how it works. And yet many of us interact with it every single day, som...
15 minutes Normal reading (read 05/20/25)
youtube.com | videos
A handful of people working at a handful of tech companies steer the thoughts of billions of people every day, says design thinker Tristan Harris. From Faceb...
8 minutes Normal reading (read 05/23/25)
youtube.com | videos
As companies introduce AI into the workplace to increase productivity, an uncomfortable paradox is emerging: people are often responsible for training the ve...
14 minutes Normal reading (read 06/11/25)
youtube.com | videos
With quick wit and sharp insight, writer Jeanette Winterson lays out a vision of the future where human and machine intelligence meld -- forming what she cal...
13 minutes Normal reading (read 06/02/25)
youtube.com | videos
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. ...
11 minutes Normal reading (read 05/26/25)
youtube.com | videos
In this video, legal ethicist Trisha Rich examines many of the key ethical issues and duties that arise from using generative AI in legal practice, such as c...
13 minutes Normal reading (read 05/26/25)
youtube.com | videos
As robots become more fully interactive with humans during the performance of our day- to-day activities, the role of trust and bias must be examined more ca...
2 minutes Normal reading (read 05/20/25)
youtube.com | videos
What does transhumanism mean? What is the definition of transhumanism? Philosopher Julian Baggini explains the radical vision of transhumanism - where humans...
8 minutes Normal reading (read 05/21/25)
@6:49: WHO codes matters; HOW we code matters; WHY we code matters; DEI IN CODING MATTERS.
youtube.com | videos
MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn't detect her face -- because the peop...
51 minutes Normal reading (read 06/11/25)
Counterfeit people influencing society = not good - preventing this is essential for society's trust and security.
youtube.com | videos
We speak with Prof. Daniel Dennett about concerns about AI being used to create artificial colleagues, he argues that preventing counterfeit AI individuals i...
7 minutes Normal reading (read 05/06/25)
youtube.com | videos
Watch our most recent video to learn more about the exciting field of artificial intelligence! We dissect the Chinese Room Argument by John Searle, a provoca...
14 minutes Normal reading (read 05/07/25)
youtube.com | videos
Our consciousness is a fundamental aspect of our existence, says philosopher David Chalmers: "There's nothing we know about more directly.... but at the same...
17 minutes Normal reading (read 05/07/25)
youtube.com | videos
The advancements in artificial intelligence are feeding ever-growing fears they may challenge human intelligence. They have given rise to generative models l...
4 minutes Normal reading (read 05/05/25)
youtube.com | videos
View full lesson: http://ed.ted.com/lessons/would-you-sacrifice-one-person-to-save-five-eleanor-nelsenImagine you’re watching a runaway trolley barreling dow...