vagabond_2026
Posted: 3 months ago

https://edition.cnn.com/2025/12/09/science/voyager-1-light-day-earth

Voyager 1 will reach one light-day from Earth in 2026. Here’s what that means

vagabond_2026
Posted: 3 months ago

https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/

Artificial intelligence

The great AI hype correction of 2025

Four ways to think about this year's reckoning

by Will Douglas Heaven

Some disillusionment was inevitable. When OpenAI released a free web app called ChatGPT in late 2022, it changed the course of an entire industry—and several world economies. Millions of people started talking to their computers, and their computers started talking back. We were enchanted, and we expected more.

We got it. Technology companies scrambled to stay ahead, putting out rival products that outdid one another with each new release: voice, images, video. With nonstop one-upmanship, AI companies have presented each new product drop as a major breakthrough, reinforcing a widespread faith that this technology would just keep getting better. Boosters told us that progress was exponential. They posted charts plotting how far we’d come since last year’s models: Look how the line goes up! Generative AI could do anything, it seemed.

Well, 2025 has been a year of reckoning.

This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.

For a start, the heads of the top AI companies made promises they couldn’t keep. They told us that generative AI would replace the white-collar workforce, bring about an age of abundance, make scientific discoveries, and help find new cures for disease. FOMO across the world’s economies, at least in the Global North, made CEOs tear up their playbooks and try to get in on the action.

That’s when the shine started to come off. Though the technology may have been billed as a universal multitool that could revamp outdated business processes and cut costs, a number of studies published this year suggest that firms are failing to make the AI pixie dust work its magic. Surveys and trackers from a range of sources, including the US Census Bureau and Stanford University, have found that business uptake of AI tools is stalling. And when the tools do get tried out, many projects stay stuck in the pilot stage. Without broad buy-in across the economy, it is not clear how the big AI companies will ever recoup the incredible amounts they've already spent in this race.

At the same time, updates to the core technology are no longer the step changes they once were.

The highest-profile example of this was the botched launch of GPT-5 in August. Here was OpenAI, the firm that had ignited (and to a large extent sustained) the current boom, set to release a brand-new generation of its technology. OpenAI had been hyping GPT-5 for months: “PhD-level expert in anything,” CEO Sam Altman crowed. On another occasion Altman posted, without comment, an image of the Death Star from Star Wars, which OpenAI stans took to be a symbol of ultimate power: Coming soon! Expectations were huge.

And yet, when it landed, GPT-5 seemed to be—more of the same? What followed was the biggest vibe shift since ChatGPT first appeared three years ago. “The era of boundary-breaking advancements is over,” Yannic Kilcher, an AI researcher and popular YouTuber, announced in a video posted two days after GPT-5 came out: “AGI is not coming. It seems very much that we’re in the Samsung Galaxy era of LLMs.”

A lot of people (me included) have made the analogy with phones. For a decade or so, smartphones were the most exciting consumer tech in the world. Today, new products drop from Apple or Samsung with little fanfare. While superfans pore over small upgrades, to most people this year’s iPhone now looks and feels a lot like last year’s iPhone. Is that where we are with generative AI? And is it a problem? Sure, smartphones have become the new normal. But they changed the way the world works, too.

To be clear, the last few years have been filled with genuine “Wow” moments, from the stunning leaps in the quality of video generation models to the problem-solving chops of so-called reasoning models to the world-class competition wins of the latest coding and math models. But this remarkable technology is only a few years old, and in many ways it is still experimental. Its successes come with big caveats.

Perhaps we need to readjust our expectations.

The big reset

Let’s be careful here: The pendulum from hype to anti-hype can swing too far. It would be rash to dismiss this technology just because it has been oversold. The knee-jerk response when AI fails to live up to its hype is to say that progress has hit a wall. But that misunderstands how research and innovation in tech work. Progress has always moved in fits and starts. There are ways over, around, and under walls.

Take a step back from the GPT-5 launch. It came hot on the heels of a series of remarkable models that OpenAI had shipped in the previous months, including o1 and o3 (first-of-their-kind reasoning models that introduced the industry to a whole new paradigm) and Sora 2, which raised the bar for video generation once again. That doesn’t sound like hitting a wall to me.

AI is really good! Look at Nano Banana Pro, the new image generation model from Google DeepMind that can turn a book chapter into an infographic, and much more. It’s just there—for free—on your phone.

And yet you can’t help but wonder: When the wow factor is gone, what’s left? How will we view this technology a year or five from now? Will we think it was worth the colossal costs, both financial and environmental?

With that in mind, here are four ways to think about the state of AI at the end of 2025: the start of a much-needed hype correction.

01: LLMs are not everything

In some ways, it is the hype around large language models, not AI as a whole, that needs correcting. It has become obvious that LLMs are not the doorway to artificial general intelligence, or AGI, a hypothetical technology that some insist will one day be able to do any (cognitive) task a human can.

Even an AGI evangelist like Ilya Sutskever, chief scientist and cofounder at the AI startup Safe Superintelligence and former chief scientist and cofounder at OpenAI, now highlights the limitations of LLMs, a technology he had a huge hand in creating. LLMs are very good at learning how to do a lot of specific tasks, but they do not seem to learn the principles behind those tasks, Sutskever said in an interview with Dwarkesh Patel in November.

It’s the difference between learning how to solve a thousand different algebra problems and learning how to solve any algebra problem. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people,” Sutskever said.

It’s easy to imagine that LLMs can do anything because their use of language is so compelling. It is astonishing how well this technology can mimic the way people write and speak. And we are hardwired to see intelligence in things that behave in certain ways—whether it’s there or not. In other words, we have built machines with humanlike behavior and cannot resist seeing a humanlike mind behind them.

That’s understandable. LLMs have been part of mainstream life for only a few years. But in that time, marketers have preyed on our shaky sense of what the technology can really do, pumping up expectations and turbocharging the hype. As we live with this technology and come to understand it better, those expectations should fall back down to earth.

02: AI is not a quick fix to all your problems

In July, researchers at MIT published a study that became a tentpole talking point in the disillusionment camp. The headline result was that a whopping 95% of businesses that had tried using AI had found zero value in it.

The general thrust of that claim was echoed by other research, too. In November, a study by researchers at Upwork, a company that runs an online marketplace for freelancers, found that agents powered by top LLMs from OpenAI, Google DeepMind, and Anthropic failed to complete many straightforward workplace tasks by themselves.

This is miles off Altman’s prediction: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” he wrote on his personal blog in January.

But what gets missed in that MIT study is that the researchers’ measure of success was pretty narrow. That 95% failure rate accounts for companies that had tried to implement bespoke AI systems but had not yet scaled them beyond the pilot stage after six months. It shouldn’t be too surprising that a lot of experiments with experimental technology don’t pan out straight away.

That number also does not include the use of LLMs by employees outside of official pilots. The MIT researchers found that around 90% of the companies they surveyed had a kind of AI shadow economy where workers were using personal chatbot accounts. But the value of that shadow economy was not measured.

When the Upwork study looked at how well agents completed tasks together with people who knew what they were doing, success rates shot up. The takeaway seems to be that a lot of people are figuring out for themselves how AI might help them with their jobs.

That fits with something the AI researcher and influencer (and coiner of the term “vibe coding”) Andrej Karpathy has noted: Chatbots are better than the average human at a lot of different things (think of giving legal advice, fixing bugs, doing high school math), but they are not better than an expert human. Karpathy suggests this may be why chatbots have proved popular with individual consumers, helping non-experts with everyday questions and tasks, but they have not upended the economy, which would require outperforming skilled employees at their jobs.

That may change. For now, don’t be surprised that AI has not (yet) had the impact on jobs that boosters said it would. AI is not a quick fix, and it cannot replace humans. But there’s a lot to play for. The ways in which AI could be integrated into everyday workflows and business pipelines are still being tried out.

03: Are we in a bubble? (If so, what kind of bubble?)

If AI is a bubble, is it like the subprime mortgage bubble of 2008 or the internet bubble of 2000? Because there’s a big difference.

The subprime bubble wiped out a big part of the economy, because when it burst it left nothing behind except debt and overvalued real estate. The dot-com bubble wiped out a lot of companies, which sent ripples across the world, but it left behind the infant internet—an international network of cables and a handful of startups, like Google and Amazon, that became the tech giants of today.

Then again, maybe we’re in a bubble unlike either of those. After all, there’s no real business model for LLMs right now. We don’t yet know what the killer app will be, or if there will even be one.

And many economists are concerned about the unprecedented amounts of money being sunk into the infrastructure required to build capacity and serve the projected demand. But what if that demand doesn’t materialize? Add to that the weird circularity of many of those deals—with Nvidia paying OpenAI to pay Nvidia, and so on—and it’s no surprise everybody’s got a different take on what’s coming.

Some investors remain sanguine. In an interview with the Technology Business Programming Network podcast in November, Glenn Hutchins, cofounder of Silver Lake Partners, a major international private equity firm, gave a few reasons not to worry. “Every one of these data centers—almost all of them—has a solvent counterparty that is contracted to take all the output they’re built to suit,” he said. In other words, it’s not a case of “Build it and they’ll come”—the customers are already locked in.

And, he pointed out, one of the biggest of those solvent counterparties is Microsoft. “Microsoft has the world’s best credit rating,” Hutchins said. “If you sign a deal with Microsoft to take the output from your data center, Satya is good for it.”

Many CEOs will be looking back at the dot-com bubble and trying to learn its lessons. Here’s one way to see it: The companies that went bust back then didn’t have the money to last the distance. Those that survived the crash thrived.

With that lesson in mind, AI companies today are trying to pay their way through what may or may not be a bubble. Stay in the race; don’t get left behind. Even so, it’s a desperate gamble.

But there’s another lesson too. Companies that might look like sideshows can turn into unicorns fast. Take Synthesia, which makes avatar generation tools for businesses. Nathan Benaich, cofounder of the VC firm Air Street Capital, admits that when he first heard about the company a few years ago, back when fear of deepfakes was rife, he wasn’t sure what its tech was for and thought there was no market for it.

“We didn’t know who would pay for lip-synching and voice cloning,” he says. “Turns out there’s a lot of people who wanted to pay for it.” Synthesia now has around 55,000 corporate customers and brings in around $150 million a year. In October, the company was valued at $4 billion.

04: ChatGPT was not the beginning, and it won’t be the end

ChatGPT was the culmination of a decade’s worth of progress in deep learning, the technology that underpins all of modern AI. The seeds of deep learning itself were planted in the 1980s. The field as a whole goes back at least to the 1950s. If progress is measured against that backdrop, generative AI has barely got going.

Meanwhile, research is at a fever pitch. There are more high-quality submissions to the world’s major AI conferences than ever before. This year, organizers of some of those conferences resorted to turning down papers that reviewers had already approved, just to manage numbers. (At the same time, preprint servers like arXiv have been flooded with AI-generated research slop.)

“It’s back to the age of research again,” Sutskever said in that Dwarkesh interview, talking about the current bottleneck with LLMs. That’s not a setback; that’s the start of something new.

“There’s always a lot of hype beasts,” says Benaich. But he thinks there’s an upside to that: Hype attracts the money and talent needed to make real progress. “You know, it was only like two or three years ago that the people who built these models were basically research nerds that just happened on something that kind of worked,” he says. “Now everybody who’s good at anything in technology is working on this.”

Where do we go from here?

The relentless hype hasn’t come just from companies drumming up business for their vastly expensive new technologies. There’s a large cohort of people—inside and outside the industry—who want to believe in the promise of machines that can read, write, and think. It’s a wild decades-old dream.

But the hype was never sustainable—and that’s a good thing. We now have a chance to reset expectations and see this technology for what it really is—assess its true capabilities, understand its flaws, and take the time to learn how to apply it in valuable (and beneficial) ways. “We’re still trying to figure out how to invoke certain behaviors from this insanely high-dimensional black box of information and skills,” says Benaich.

This hype correction was long overdue. But know that AI isn’t going anywhere. We don’t even fully understand what we’ve built so far, let alone what’s coming next.


vagabond_2026
Posted: 3 months ago

https://www.yahoo.com/news/articles/national-geographic-unveiled-pictures-7-190813786.html

National Geographic unveiled its Pictures of the Year. Here are 7 of the most striking wildlife photos.

vagabond_2026
Posted: 3 months ago

https://www.noemamag.com/the-politics-of-superintelligence/

The Politics of Superintelligence

Today’s tech “prophets” push a narrative that God-like artificial superintelligence is inevitable, and only they can ensure humanity’s safety from their creations.

vagabond_2026
Posted: 3 months ago

https://nautil.us/the-emerging-science-of-being-hangry-1254781/

The Emerging Science of Being Hangry

Your ability to tune into your body’s internal signals shapes hunger-driven mood swings

By Kristen French

When our stomachs are empty, we may get cranky or snap at the people around us. We’re hangry, we may exclaim, to excuse our mischief. But what’s really going on when this storm of hunger and anger hits?

The term hangry was likely first used in 1918 (in a letter by journalist and writer Arthur Ransome), but it has swept popular culture in recent decades, featured in everything from a Snickers ad to an Olympic snowboarder’s viral tweet.

Scientists are only now beginning to catch up. One thing they haven’t fully understood is how this close relationship between anger and hunger works in the brain: Do drops in glucose levels in the blood trigger changes in mood through subconscious or conscious processes?

To answer this question, a team of German researchers recently tracked how glucose levels, feelings of hunger, and mood interacted in 90 healthy adults over a period of four weeks. The scientists gave participants glucose monitors to wear and asked them to regularly answer questions about hunger, satiety, and mood via a smartphone app.

The results suggest that hangriness is indeed real: The hungrier participants were, the worse their mood.

But surprisingly, the researchers found that hunger-related shifts in mood depended on conscious sensing of the body’s internal state, not unconscious processes. In other words, it’s all about how your brain interprets the signals coming from your gut.

“When glucose levels drop, mood also deteriorates. But this effect only occurs because people then feel hungrier,” explained co-author Kristin Kaduk, postdoctoral researcher at the University Hospital for Psychiatry and Psychotherapy in Tübingen, Germany, in a statement. “In other words, it is not the glucose level itself that raises or lowers mood, but rather how strongly we consciously perceive this lack of energy.”

Separately, Kaduk and her colleagues also found that the participants who were better able to read their own bodily signals, a sense known as interoception, seemed to experience fewer mood swings, though not higher average mood. Differences in metabolic health—such as body mass index and insulin resistance—didn’t play much of a role in this pattern. They published their results in eBioMedicine.

The researchers suggest that the study may point to a larger relationship between metabolic sensing, interoceptive accuracy, and mood disorders. The human body needs food to survive, of course. Glucose supplies energy for essential processes, including those that underpin mental health. And links between metabolic issues and mood disorders have often cropped up in the research. Poor interoception has also been linked to higher body mass index.

“Many diseases such as depression or obesity are associated with altered metabolic processes,” said Nils Kroemer, co-author of the study, psychiatrist and research professor of medical psychology at the University of Bonn. “A better understanding of how body perception and mood are related can help improve therapeutic approaches in the long term—for example, through targeted training of interoception or non-invasive stimulation of the vagus nerve, which connects the organs to the brain and influences interoception.”

Interoception is something you can get better at, through mindfulness, deep breathing, body scans, and efforts to link sensations to emotions, research suggests. Perhaps with a little consistent practice, you can head off your next episode of hangriness.

vagabond_2026
Posted: 3 months ago

https://psyche.co/ideas/tolerance-isnt-just-nice-its-a-civic-virtue-we-all-can-build

Tolerance isn’t just nice, it’s a civic virtue we all can build

by Kiran Kumbhar, historian

At a time of rising intolerance, the century-old work of C E M Joad reminds us what tolerance really is and why we need it

Growing up, I could effortlessly recite all the lines of all the songs from the movie Saagar (1985), because for a long stretch of several weeks, day after day, I couldn’t stop listening to those R D Burman tunes. I had no choice, actually. It was the mid-1990s, when we lived in a town several hours south of Mumbai in a working-class housing complex similar to tenements and known locally as a chawl. Chawls typically housed dozens of families in homes consisting of two small rooms wherein resided each family’s entire universe of people and things. A wide array of activities were communal and public – including public, shared toilets on the one hand and communal music on the other. At some point in my childhood, a neighbour got weirdly obsessed with those fine songs from Saagar, and every day around 10 in the morning she would switch on her tape recorder and play ‘O Maria’ and others at full volume on her very loud speakers. The dose makes the poison even in the case of music: the most melodious of tunes, if heard for too long or repeatedly, will turn toxic. No wonder mornings became quite an aural ordeal during that time in our busy, bustling community.

Chawls are notorious for their crowded living conditions, and in such an intimate group of so many people, there was inevitably a mingling of vastly different personalities from very diverse religious, regional, caste and sub-caste backgrounds. No wonder conflict was always around the corner. However, it was never right in front. My mother would often be frustrated with the cacophony of the Saagar songs, but she never barged into the neighbours’ house to yell at them. When there was a wedding at someone’s home, we heard shrill music and commotion all day and well into the night, but everyone more or less took it in their stride. Smells of all kinds of different foods – often including the pungent air of bombil fish being fried – constantly suffused the air, but no one complained. We were definitely under no illusion that we lived in some paradisiacal land of freedom, but we were still aware that there was a wide spectrum of communal ideas and activities considered acceptable. Put another way: there was, in this and in other similar parts of India – though certainly not everywhere – an abundance of the civic virtue of tolerance (or ‘toleration’, as it is better known in philosophy).

Of course, today, in India and indeed across the world, intolerance reigns, having become institutionalised into an industry, with more or less standardised digital toolkits deployed every day by influential groups to express carefully coordinated and meticulously manipulated offence. I live in the United States now and, among other things, it is disgusting to see how the American state has transformed intolerance against people from Central and South America into a horrible everyday spectacle. Back in India, the state and its rank-and-file supporters have taken intolerance to new heights particularly (and ironically) after many writers and other artists returned their state honours in 2015 to protest ‘rising intolerance’ in the country.

In more recent years, India’s politicians and their street supporters have increasingly and violently attacked the rights of non-Hindu people to pray or otherwise express their religiosity publicly, and have added non-vegetarian food to their list of ‘un-Indian’ things worthy of bans. Every few weeks, one hears of a governmental entity banning the sale of meat and fish, even eggs, for randomly chosen Hindu religious events. As someone who grew up in a community and town where almost every Hindu enjoyed eating meat and fish, frequently as part of their religiosity, I feel both amusement and revulsion at the idiotic arrogance of India’s current politicians. Overall, people with power around the world have indicated that they are not interested in humanity’s collective social battles against the age-old ills of hatred and intolerance and instead will work to normalise, valorise and commercialise those ills in every aspect of public life.

[Image: black-and-white photo of a man in a suit, seated in a dimly lit room with a framed painting and a large pillar behind him]

In these times of upside-down morality, I find myself going back to the work of the British philosopher C E M Joad, especially his short book The Story of Civilization (1931). I first read it in middle school (in my chawl home) and have re-read it several times as an adult, finding it more and more enlightening with every reading.

Joad might mock the hypocrisy of self-styled free speech ‘absolutists’ of yesterday

Writing in Britain in the treacherous lull after the First World War, Joad had much to say about the evils of intolerance: ‘a great deal of the misery of mankind in the past has sprung from people being unwilling to tolerate other people thinking differently from themselves.’ Referring to what he considered improved degrees of toleration in British and European society at large in the early 20th century, he wrote that:

[T]his toleration … has come very gradually, and it has only been won after a hard fight. The fight has been not so much to let people think what they liked – obviously you couldn’t stop them doing that – as to let them write and say what they thought. And the permission has only been given very gradually.

Considering that Joad was writing this at a time when officials in British colonies in the Global South were regularly suppressing freedom fighters and their ideas, his optimistic note carries a jarring twang in it. Nevertheless, Joad’s philosophical arguments are worth taking seriously. His definition of tolerance is robust and comprehensive:

A tolerant person is one who does not interfere with other people, even if he thinks they are wrong, but is prepared to let them think what they like and say what they think. If he thinks they are wrong, he may try to persuade them to believe differently, but he will not try to force them.

If Joad were to time-travel to the present day, he might hesitate to consider the contemporary world to be ‘civilised’, which to him meant ‘making and liking beautiful things, thinking freely, and living rightly and maintaining justice equally between [hu]man and [hu]man’. He might find disgusting the fact that many of us do not believe toleration, and its distant cousin empathy, to be important and necessary virtues. He might mock the hypocrisy of self-styled free speech ‘absolutists’ of yesterday who have become the intolerants-in-chief of today. He would most certainly be stunned by the absurd twin-demand from powerful elites that people must be tolerant toward lies and hateful speech, and be intolerant toward truth-telling and empathy for oppressed people.

In his autobiography The Story of My Experiments with Truth (1929), Mohandas Gandhi refers to a haughty custom in 1890s South Africa: when visiting Europeans, Indians had to take off their turban or any similar Indian headgear, otherwise the Europeans might feel offended. ‘It has always been a mystery to me,’ Gandhi wrote while reminiscing about this experience, ‘how men can feel themselves honoured by the humiliation of their fellow-beings.’ While Joad – writing at roughly the same time as Gandhi wrote his autobiography in the late 1920s – unfortunately did not account for British colonial oppression in his generally optimistic book, he did have something to say about abuse of power and about the intolerance of people and nations who abused power.

The ‘greats’ of European history did not deserve any mention in his book

Joad found discomfiting the widely prevalent belief that a great country or nation is one that has beaten other countries in battle and ruled over them: ‘It is just possible they are [the greatest], but they are not the most civilised.’ As a resolute pacifist, and perhaps also having witnessed the horrors of militarism in the First World War, Joad decried the twisted concept that countries and people need to use violence to bring about peace:

[C]ivilised peoples ought to be able to find some way of settling their disputes other than by seeing which side can kill off the greater number of the other side, and then saying that that side which has killed most has won. And not only has won, but, because it has won, has been in the right.

To Joad, the ‘greats’ of European history like Caesar, Napoleon and Alexander did not deserve any mention in his book because they were simply ‘men who were specially successful in getting multitudes of other men killed’. He explains: ‘Tyranny has nothing to do with civilisation …’

Joad’s remarkable ruminations on power and tyranny provide an important caveat and lend more balance to his views on tolerance, which might seem impractically absolutist at first, a problem on which Karl Popper elaborated in his ‘paradox of tolerance’. Tolerance, as Joad describes it, is an essential civic virtue, but does that mean we should be tolerant of lies, or toward speech that calls for violence against someone, or speech that justifies the murder of children? And say we decide we will not tolerate lies and calls for violence: how do we go about implementing such exceptions to the rule? Indeed, what has ailed many societies is not the task of delineating exceptions to tolerance and free speech, which is not that hard – for example, ‘we will not tolerate any calls for harm against any religious group or its followers’ – but the temptation to enforce an asymmetric and unequal application of such exceptions: as when calls for harm against X group are prohibited and strictly punished, while those against Y group are allowed more or less freely with little to no punishment.

When I look back to my old neighbourhood now as an adult, I realise that we had found a remarkably practical resolution to such dilemmas: an unapologetic, wide-ranging practice of freedom and tolerance with minor limits that were well recognised but tacit. For example, when it came to public celebrations of religious festivals, no asymmetry existed: Hindu, Muslim, and Christian families all boldly flaunted and celebrated religious events and freely used public spaces and pathways. There might have been a few families who did not like any kind of public religious celebration, but they exercised tolerance, and so did everyone else when families of a different religion were celebrating. At the same time, there was a mutual understanding that one ‘must not take things too far’: eg, weddings, loud music and other celebrations that went into the night still wrapped up by about 2 am; or while foul language was common, no one used slurs based on religious and caste identities, and if anyone did, there was always someone around to publicly reprimand them. These minor limits on free speech and free expression existed because the adults around me had apparently realised something of which many elites – dealing only in abstracts, without any experience of the streets and the trenches – are totally ignorant: that in any human society, individual freedom will always be as much a social matter as an individual matter; that it is on the bedrock of social toleration that free speech stands tall as an individual privilege.

In most parts of the world today, we don’t think twice before listing freedom as an essential value and as a fundamental right. However, we often forget the other side of the civic coin – toleration – a value so essential that philosophers like Joad considered it indispensable to the ‘story of civilisation’. It is not too late to remind ourselves that, in the absence of tolerance, societies begin to crumble and break down and, before we can even notice it, freedom slips through the cracks.

vagabond_2026
Posted: 3 months ago

https://psyche.co/ideas/curious-about-a-digital-detox-heres-what-you-should-know

Curious about a digital ‘detox’? Here’s what you should know

by Kostadin Kushlev, psychology professor

For many who are chronically connected, a break from tech sounds appealing. Research is uncovering when and how it helps

In the third season of the hit TV show The White Lotus, the protagonists arrive at an exclusive resort where guests are asked to turn off and turn over their phones; that is, to engage in a digital ‘detox’. For many of us, cutting ourselves off from our devices completely might seem like an extraordinary thing to do. But even outside the fictional world of The White Lotus, the idea of taking a break from technology to ‘reset’ the brain has been gaining popularity. People can now detox from technology by signing up for fancy weekend retreats or week-long getaways, or by downloading a growing number of apps that ironically claim to help you quit other apps. It’s reasonable to wonder: does digital detoxing actually work, or is it simply another health trend that distracts us from our real problems?

Before we answer that question, let’s define what a digital detox is. To most people, it likely means giving up all digital technology for some period of time. But it can also include partial abstention from specific digital technologies or features, such as giving up social media, or a specific overused app, or even just silencing notifications. There are various reasons someone might have for engaging in a digital detox, but people seem to be at least partly motivated by a desire to break bad habits, reclaim control over their attention, and spend less time in front of a screen and more time engaging in more productive activities.

The growing demand for digital detoxes does not necessarily mean that they work. To be sure, we can find plenty of positive testimonials online – from people giving five-star reviews to a digital detox app to others swearing by the tech-free retreat they attended. While such anecdotal evidence can be persuasive, it can also be biased. For example, a person who spent a lot of money on a digital detox intervention might be motivated to think it was money well spent. To examine whether digital detoxing helps, and which approaches are really worth our time and money, we need scientific evidence, preferably based on well-controlled experimental studies. So, what do those tell us?

On the basic question of whether digital detoxes work, the simple answer is: ‘Yes, they can.’ While more evidence is needed, some experimental studies now show that taking a break from social media can have positive effects on mental health. Similarly, one study has shown that reducing the number and frequency of notifications can reduce stress and boost wellbeing. And a handful of studies have identified some benefits of reducing or completely giving up the use of smartphones for a period of time. Notably, the evidence for these effects comes from prospective field experiments, in which participants are randomly assigned either to some kind of digital detox or to a control condition. These types of studies are considered the gold standard for providing evidence that an intervention has genuine effects.

Given these promising results, it is noteworthy that there is no evidence that brief and total digital detoxes work. Thus, it’s not clear that no-technology weekend retreats are worth the money and hype. In most of the research, people experience benefits from simply giving up a specific app or just reducing their phone use rather than completely giving up all digital devices. These partial detoxes last anywhere from one week to about a month. Importantly, the effects seem to be more reliable when people make a change for two or more weeks. (The studies that have found no benefits tend to be those where digital detoxing is practised for less than a week.)

Another reason to be sceptical of fancy digital-detox weekends is that partially reducing technology use seems to have similar, if not stronger, effects compared with complete abstinence. For example, in one large study in Germany, people assigned to reduce their smartphone use to one hour a day for a week experienced similar improvements in wellbeing and mental health as people assigned to completely give up using their phones. Those benefits also persisted for longer in the reduction-only group, perhaps because a reduction in use is easier to maintain than complete abstinence.

In short, the existing evidence suggests that rather than completely abstaining from tech use for a weekend, smaller changes in digital screen time – giving up a problematic app, reducing daily phone use – might be a more productive and sustainable way to practise digital detoxing. That being said, when you’re spending time with other people, you can’t really go wrong with taking a break from your phone. Whether it’s friends sharing a meal or parents visiting a museum with their kids, my colleagues and I have found that people engaging in activities together feel better when they set their phones aside.

Of course, a particular intervention (such as a daily limit on phone use) that is effective on average might not be effective for every individual. In other words, if you’re interested in a digital detox, you might want to design your own version that works for you. Before you can do that, it would be helpful to understand why and how digital detoxes work to improve wellbeing.

My colleagues and I recently examined this question in a field experiment that I like to call the ‘dumbphone study’. We randomly assigned half the participants to block their mobile internet for two weeks and the other half to use their phones as they normally would. Specifically, we used an app called Freedom, which prevents people from accessing the internet on their phones. With smartphones thus turned into old-fashioned ‘dumbphones’, people were able to call and text but not do much else. They could still access the internet on their other devices, such as laptops and desktop computers.

At the end of the two weeks, people assigned to the dumbphone condition reported significant improvements in subjective wellbeing and in their mental health (based on a composite of anxiety and depression symptoms, anger, and personality functioning) compared with the other participants.

What was driving these effects? First, removing the internet from people’s phones reduced the overall amount of digital media they consumed. This reduction in and of itself drove some of the positive effects on wellbeing – but not all of them. A second factor had to do with time. Essentially, people spent less time on their phones, freeing up about 2.5 hours per day for other activities, such as reading or spending time outdoors. The final factor had to do with attention: with no internet on their phones, people felt less distracted. This presumably allowed them to enjoy whatever they were doing more, further enhancing their wellbeing.

Beyond the effects on subjective feelings of wellness and attention, we found that, after two weeks in the dumbphone condition, people performed better in a computer task designed to objectively measure the ability to sustain attention. This task is boring by design: it involves having to press the space bar when a picture of a mountain flashes on the screen and not press it otherwise – over and over again. As you can imagine, people with ADHD perform worse on this task, and performance declines with age. The improvement we observed in participants in the dumbphone condition was substantial, comparable with what you’d expect if you reversed about 10 years of cognitive decline.

We can only speculate about how and why dumbphones improved people’s ability to pay attention, but one possibility is that most everyday tasks are no match for the easy and constant stimulation available on our phones. When that easy source of dopamine is restricted, someone might gradually rediscover their ability to derive pleasure from activities that require effort and sustained attention.

It seems, then, that when adopting a digital detox practice, you need to be thinking about three things. First, you need to decide which of the content you consume is ‘toxic’ for your wellbeing – eg, a certain social media app where you tend to doomscroll – and avoid or reduce it. Second, you should replace at least part of this toxic content with non-digital activities that you know make you happy, whether it’s socialising, reading, exercising or other hobbies. Finally, you need to make sure that the change you introduce limits the distractions from your phone, so that you and those around you can reap the full benefits of those activities.

In the dumbphone study, we used an app to help participants keep the internet off their phones. There are many such apps: some allow you to grow a virtual plant the longer you stick to your digital detox plan, while others prompt you to reflect every time you open an app, or add social pressure by informing a friend that you’re trying to break your detox. Unfortunately, there is little research on the effectiveness of these consumer apps. But, as a rule of thumb, the best app for you will be an app that you actually use. For example, an app that blocks mobile internet might be able to improve your ability to concentrate (as we saw in the dumbphone study), but only if you are able to use that feature for more than a day or two. If not, a less restrictive feature, like blocking a single app, might work better.

Keep in mind that you do not need to be permanently detoxing to reap the benefits. Research suggests that practising digital detoxing for as little as a week might have benefits for months afterward. Pick a digital detox practice, resolve to practise it for a meaningful period of at least a week or two, and then decide whether you want to continue for longer or try something else.

In a world that is increasingly dependent on digital technology, reducing the burden on users will likely require a much more fundamental appreciation of the preciousness of our time and attention. It will also require corresponding changes in the form of thoughtful policies and technology design. For the time being, though, digital detox practices offer a degree of self-protection. They are more than just a fad: as we’ve seen, they can actually improve wellbeing, reduce stress, and even spruce up one’s ability to sustain attention. And while spending a retreat with friends and without screens is likely good for wellbeing, research suggests that there are much easier and cheaper ways to practise digital detoxing in our daily lives.

Kostadin Kushlev is an associate professor in the Department of Psychology at Georgetown University in Washington, DC. He studies how phones, social media and our constant connection to the internet affect our social lives and wellbeing.
