Tag: technology

  • Hacker laws

    90–90 Rule: The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time. … The Dunning-Kruger Effect: If you’re incompetent, you can’t know you’re incompetent. The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is. … Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law. … Parkinson’s Law: Work expands so as to fill the time available for its completion. … Wheaton’s Law: Don’t be a dick.

  • The ugly objectification behind the world’s first robot artist – Frieze

    With an evangelical gleam in his eye, Meller claims Ai-Da is ‘a new voice’ in art, ‘probing our world from a non-human perspective’. (There are a number of other artists exploring art and A.I. right now, including James Bridle, Ian Cheng, Agnieszka Kurant and Trevor Paglen.) He is justifiably fascinated and worried by the ways in which technology is changing the conditions of life on this planet. Invoking Aldous Huxley and George Orwell, he hopes Ai-Da will provide a way for humans to grasp what machines will bring in the coming decade. But he is the wizard behind the curtain. Ai-Da has no learning capabilities, and in the absence of any affective programming, it’s hard to believe that Ai-Da has a ‘voice’ – whatever philosophers agree that to be. Perhaps there is a flesh-and-blood artist who could use it for productive ends, but at this stage, Ai-Da seems like a research experiment that’s been brought into the world too early, too primitive to tell us much. Despite Meller’s claims – and no matter how many times Ai-Da is referred to in the third person, as if to will it into life – it is not innately creative. It needs electricity. It needs to be switched on and set into ‘drawing mode’ by humans. Ai-Da can’t choose or refuse its subjects, it can’t switch up styles, backtrack, discard work it considers a failure, ascribe meaning to what it makes. Ai-Da is a tool, not an artist.

    Pygmalion’s shadow lurks around the edges of the project. Meller refers to the robot as ‘she’, as if it has independent thought, but acknowledges it’s also an ‘it’ with no autonomy. The humanoid form encourages audiences to engage with what it makes, he argues, and is gendered female so as to amplify the voices of women who have been ignored throughout art history. It’s an act of ugly objectification for a man to think he can solve that problem by making a mechanized woman. Ai-Da could have taken the shape of a Perspex box with a bionic claw poking out of the side, or had long rubber tentacles, or been coated in yellow fur and named Blinky. It did not need to look like a waxwork of a twenty-something woman.

  • Anthropic Economic Index: insights from Claude 3.7 Sonnet – Anthropic

    Briefly, our latest results are the following: Since the launch of Claude 3.7 Sonnet, we’ve observed a rise in the share of usage for coding, as well as educational, science, and healthcare applications; People use Claude 3.7 Sonnet’s new “extended thinking” mode predominantly for technical tasks, including those associated with occupations like computer science researchers, software developers, multimedia animators, and video game designers; We’re releasing data on augmentation / automation breakdowns on a task- and occupation-level. For example, tasks associated with copywriters and editors show the highest amount of task iteration, where the human and model co-write something together. By contrast, tasks associated with translators and interpreters show among the highest amounts of directive behavior—where the model completes the task with minimal human involvement.

  • Free tech eliminates the fear of public speaking – University of Cambridge

    As revealed in a recent publication from Macdonald – Director of the Immersive Technology Lab at Lucy Cavendish College, University of Cambridge – the platform increases levels of confidence and enjoyment for most users after a single 30-minute session. In the most recent trial with students from Cambridge and UCL, it was found that a week of self-guided use was beneficial to 100% of users. The platform helped participants feel more prepared, adaptable, resilient, confident, and better able to manage anxiety. […]

    With the new VR platform, a user can experience the sensation of presenting to a wide range of photorealistic audiences. What makes Macdonald’s invention unique is that it uses what he calls ‘overexposure therapy’, where users can train in increasingly challenging photorealistic situations – eventually leading to extreme scenarios that the user is unlikely to encounter in their lifetime. They might begin by presenting to a small and respectful audience but as they progress, the audience sizes increase and there are more distractions: spectators begin to look uninterested; they walk out, interrupt, take photos, and so on. A user can progress to the point where they can present in a hyper-distracting stadium environment with loud noises, panning stadium lights and 10,000 animated spectators.

  • Filtered for the rise of the well-dressed robots – Interconnected

    The way I understand it, there have been three major challenges with robots in the real world: mechanical engineering, perception, and instruction following. Engineering has been solved for a while; perception mostly works, though not understanding. Instruction following, including contextual awareness, task sequencing, and safety… that was a work in progress. Solved at a stroke by gen-AI. So, as of a couple years ago, there is a clear line of sight to humanoid robots in the market. Research done, development phase: go.

  • MIS market churn spring 2025 – WhichMIS?

    Looking across the January census figures from 2021 to this year, we see that the SIMS school numbers have fallen dramatically, from a healthy 15,753 schools using their MIS in January 2021 to just 8,818 this year. That is a loss of some 6,935 schools in just four years! This means that some 44% of their schools have moved away from SIMS in that time. It reduces their market share from 67% in 2021 to just 40% now. Looking further back, SIMS was the dominant player for many years, with around 85% of the market in England only ten years ago…
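The figures in the excerpt are internally consistent; a quick sanity check, using only the numbers quoted above:

```python
# Sanity-check of the SIMS census figures quoted above (January 2021 vs. now).
sims_2021 = 15_753   # schools using SIMS, January 2021
sims_now = 8_818     # schools using SIMS, this year

loss = sims_2021 - sims_now
loss_pct = loss / sims_2021 * 100

print(loss)             # schools lost over the four years
print(round(loss_pct))  # share of 2021 schools that moved away, in percent
```

Both results match the excerpt: a loss of 6,935 schools, or about 44% of the 2021 base.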

  • Powerful A.I. is coming. We’re not ready. – The New York Times

    Maybe A.I. progress will hit a bottleneck we weren’t expecting — an energy shortage that prevents A.I. companies from building bigger data centers, or limited access to the powerful chips used to train A.I. models. Maybe today’s model architectures and training techniques can’t take us all the way to A.G.I., and more breakthroughs are needed. But even if A.G.I. arrives a decade later than I expect — in 2036, rather than 2026 — I believe we should start preparing for it now.

    Most of the advice I’ve heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without A.G.I.

  • It is as if you were on your phone: Why – Pippin Barr

    So what if we had an application on our phone that allowed us to seem to be on our phone, to go through those reassuring motions, to know what to do, to appear 100% like a human on their phone, but without having to actually be on our phone and exposed to the direness of the news, the panic of dating, the shitpile of social media, the emptiness of online video, the timesuck of games? A kind of contentless experience. For the win!

    That’s the underlying speculative but also totally honest motivation behind this particular game. I’m making it because I think it’s legitimately something people might use and find helpful and because it is fundamentally funny that that is a possible design goal. To me it’s both a piece of comedy and a piece of truth and I can’t tell which is more important or if they’re even distinct. (And I like that.)

  • How to disappear completely – The Verge

    The loss of content is not a new phenomenon. It’s endemic to human societies, marked as we are by an ephemerality that can be hard to contextualize from a distance. For every Shakespeare, hundreds of other playwrights lived, wrote, and died, and we remember neither their names nor their words. (There is also, of course, a Marlowe, for the girlies who know.) For every Dickens, uncountable penny dreadfuls on cheap newsprint didn’t withstand the test of decades. For every iconic cuneiform tablet bemoaning poor customer service, countless more have been destroyed over the millennia.

    This is a particularly complex problem for digital storage. For every painstakingly archived digital item, there are also hard drives corrupted, content wiped, media formats that are effectively unreadable and unusable, as I discovered recently when I went on a hunt for a reel-to-reel machine to recover some audio from the 1960s. Every digital media format, from the Bernoulli Box to the racks of servers slowly boiling the planet, is ultimately doomed to obsolescence as it’s supplanted by the next innovation, with even the Library of Congress struggling to preserve digital archives.

  • The first AI bookmark for physical readers – Mark

    Unlock your intellectual potential. Introducing Mark 1, the physical bookmark that tracks and summarizes the pages you read. … Designed to integrate effortlessly into your reading routine, Mark enhances your experience without disrupting your flow.

  • Antiqua et Nova: Note on the relationship between artificial intelligence and human intelligence – The Holy See

    Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform. However, a person’s worth does not depend on possessing specific skills, cognitive and technological achievements, or individual success, but on the person’s inherent dignity, grounded in being created in the image of God. This dignity remains intact in all circumstances, including for those unable to exercise their abilities, whether it be an unborn child, an unconscious person, or an older person who is suffering. It also underpins the tradition of human rights (and, in particular, what are now called “neuro-rights”), which represent “an important point of convergence in the search for common ground” and can, thus, serve as a fundamental ethical guide in discussions on the responsible development and use of AI. Considering all these points, as Pope Francis observes, “the very use of the word ‘intelligence’” in connection with AI “can prove misleading” and risks overlooking what is most precious in the human person. In light of this, AI should not be seen as an artificial form of human intelligence but as a product of it.[…]

    Furthermore, there is the risk of AI being used to promote what Pope Francis has called the “technocratic paradigm,” which perceives all the world’s problems as solvable through technological means alone. In this paradigm, human dignity and fraternity are often set aside in the name of efficiency, “as if reality, goodness, and truth automatically flow from technological and economic power as such.” Yet, human dignity and the common good must never be violated for the sake of efficiency, for “technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary, aggravate inequalities and conflicts, can never count as true progress.” Instead, AI should be put “at the service of another type of progress, one which is healthier, more human, more social, more integral.”

  • From COBOL to chaos: Elon Musk, DOGE, and the Evil Housekeeper Problem – MIT Technology Review

    In trying to make sense of the wrecking ball that is Elon Musk and President Trump’s DOGE, it may be helpful to think about the Evil Housekeeper Problem. It’s a principle of computer security roughly stating that once someone is in your hotel room with your laptop, all bets are off. Because the intruder has physical access, you are in much more trouble. And the person demanding to get into your computer may be standing right beside you. So who is going to stop the evil housekeeper from plugging a computer in and telling IT staff to connect it to the network?

  • This Pixar-style dancing lamp hints at Apple’s future home robot – The Verge

    When the researcher in the video plays music, the “Expressive” robot lamp dances with her; when she asks about the weather, it looks outside first; when she’s working on an intricate project, it follows her movements to shed light more helpfully; when it reminds her to drink water, it pushes the glass toward her. When she tells it it can’t come out on a hike with her, it hangs its head in faux sadness.

  • Treasury official quits after resisting Musk’s requests on payments – The New York Times

    Mr. Musk has been fixated on the Treasury system as a key to cutting federal spending. Representatives from his government efficiency initiative began asking Mr. Lebryk about source code information related to the nation’s payment system during the presidential transition in December, according to three people familiar with the conversations. Mr. Lebryk raised the request to Treasury officials at the time, noting that it was the type of proprietary information that should not be shared with people who did not work for the federal government. Members of the departing Biden administration were alarmed by the request, according to people familiar with their thinking. The people making the requests were on the Trump landing team at the Treasury Department, according to a current White House official.

  • A 25-year-old with Elon Musk ties has direct access to the federal payment system – WIRED

    A source says they are concerned that data could be passed from secure systems to DOGE operatives within the General Services Administration. WIRED reporting has shown that Elon Musk’s associates—including Nicole Hollander, who slept in Twitter’s offices as Musk acquired the company, and Thomas Shedd, a former Tesla engineer who now runs a GSA agency, along with a host of extremely young and inexperienced engineers—have infiltrated the GSA and have attempted to use White House security credentials to gain access to GSA tech, something experts have said is highly unusual and poses a huge security risk.

  • Google drops pledge not to use AI for weapons or surveillance – The Washington Post

    Google’s principles restricting national security use cases for AI had made it an outlier among leading AI developers. Late last year, ChatGPT-maker OpenAI said it would work with military manufacturer Anduril to develop technology for the Pentagon. Anthropic, which offers the chatbot Claude, announced a partnership with defense contractor Palantir to help U.S. intelligence and defense agencies access versions of Claude through Amazon Web Services. Google’s rival tech giants Microsoft and Amazon have long partnered with the Pentagon.[…]

    Google’s policy change is a company shift in line with a view within the tech industry, embodied by President Donald Trump’s top adviser Elon Musk, that companies should be working in service of U.S. national interests. It also follows moves by tech giants to publicly disavow their previous commitment to race and gender equality and workforce diversity, policies opposed by the Trump administration.

  • China to host human vs. robot half marathon race – Moss and Fog

    Well, it’s begun. Our era of humanoid robots interacting with us in real, tangible ways. In April, Beijing is hosting a half marathon where humans will compete alongside bipedal (walking/running) robots. The 21-kilometer race will showcase over 12,000 determined human runners alongside more than 20 teams of cutting-edge humanoid robots, developed by leading manufacturers from across the globe. The robots are not allowed to use wheels and must complete the full race. They will be a combination of remote-controlled robots and fully autonomous ones. And their handlers will be able to swap out their batteries during the race.

  • What DeepSeek may mean for the future of journalism and generative AI – Reuters Institute for the Study of Journalism

    I don’t think DeepSeek is going to replace OpenAI. In general, what we’re going to see is that more companies enter the space and provide AI models that are slightly differentiated from one another. If many actors choose to take the resource-intensive route, that multiplies the resource intensity and that might be alarming. But I’m hopeful that DeepSeek is going to lead to the generation of other AI companies that enter this space with offerings that are far cheaper and far more resource-efficient. […]

    Sometimes, I see commentary on DeepSeek along the lines of, ‘Should we be trusting it because it’s a Chinese company?’ No, you shouldn’t be trusting it because it’s a company. And also, ‘What does this mean for US AI leadership?’ Well, I think the interesting question is, ‘What does this mean for OpenAI leadership?’

    American firms now have leaned into the rhetoric that they’re assets of the US because they want the US government to shield them and help them build up. But a lot of the time, the actual people who are developing these tools don’t necessarily think in that frame of mind and are thinking more as global citizens participating in a global corporate technology race, or global scientific race, or a global scientific collaboration. I would encourage journalists to think about it that way too.

  • Without universal AI literacy, AI will fail us – World Economic Forum

    For example, facial analysis software has been recorded failing to recognize people with dark skin, showing a 1-in-3 failure rate when identifying darker-skinned females. Other AI tools have denied social security benefits to people with disabilities. These failings are due to bias in data and lack of diversity in the teams developing AI systems. According to the Forum’s 2021 Global Gender Gap report, only 32% of those in data and AI roles are women. In 2019, Bloomberg reported that less than 2% of technical employees at Google and Facebook were black. […]

    We cannot leave the burden of AI responsibility and fairness on the technologists who design it. These tools affect us all, so they should be affected by us all — students, educators, non-profits, governments, parents, businesses. We need all hands on deck.

  • The case for kicking the stone – Los Angeles Review of Books

    The central problem, however, is that an onslaught of information—of everything, all at once—flattens all sense of proportion. When Zuckerberg said to his staff that “a squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa,” it’s not that his tone-deaf observation was untrue but that, as Carr says, he was making a category error, equating two things that cannot be compared. Yet “social media renders category errors obsolete because it renders categories obsolete. All information belongs to a single category—it’s all ‘content.’” And very often, the content that matters is decided in the currency of commerce: content is “bad” when it harms profits.

  • The future is too easy – Defector

    There is something unstable at the most basic level about any space with too much capitalism happening in it. The air is all wrong, there’s simultaneously too much in it and not enough of it. Everyone I spoke to about the Consumer Electronics Show before I went to it earlier this month kept describing it in terms that involved wetness in some way. I took this as a warning, which I believe was the spirit in which it was intended, but I felt prepared for it. Your classically damp commercial experiences have a sort of terroir to them, a signature that marks a confluence of circumstances and time- and place-specific appetites; I have carried with me for decades the peculiar smell, less that of cigarette smoke than cigarette smoke in hair, that I remember from a baseball card show at a Ramada Inn that I attended as a kid. Only that particular strain of that particular kind of commerce, at that moment, gave off that specific distress signal. It was the smell of a living thing, and the dampness in the (again, quite damp) room was in part because that thing was breathing, heavily.

  • Climate, technology, and justice – Data & Society

    Climate change is perhaps the most urgent issue of the 21st century. The changing climate already disproportionately impacts communities in the majority world, and energy-intensive technologies like generative AI make the problem worse, exacerbating global emissions. Data & Society’s Climate, Technology, and Justice program investigates how technologies impact and influence the environment, and how communities participate in or resist these processes. We examine the social and environmental repercussions of the expanded global infrastructures and labor practices needed to sustain the growth of digital technologies, from AI and blockchain to streaming and data storage. We trace the environmental implications of technology development across the entire life cycle, from ideation and use to disposal or refurbishment. We also seek to better understand the sociotechnical implications of climate-focused technologies, from low-carbon innovations like community energy, solar, and wind turbines, to the integration of algorithms and AI into climate modeling, disaster prediction, and emissions tracking.

  • Saving one screen at a time – Tedium

    Having seen a lot of pipes, wavy lines, and flying toasters in my day, there was a real novelty to the art of screen savers, which became another way to put your visual mark on the devices you own. The animated screen saver is still out there, of course, but its cultural relevance has faded considerably. In fact, GNOME, one of the two dominant desktop environments in the FOSS world (particularly on Linux), straight-up doesn’t support graphical screen savers in modern versions, unless you’re willing to get hacky. And it’s not like people kick up colorful screen savers on their smartphones or tablets. But maybe we’re thinking about screen savers all wrong in terms of their cultural role. When it comes to screen savers, what if GNOME has it right? Today’s Tedium ponders the screen saver, including how we got it and what it represents today.

  • Better without AI

    Better without AI explores moderate apocalypses that could result from current and near-future AI technology. These are relatively overlooked risks: not extreme sci-fi extinction scenarios, nor the media’s obsession with “ChatGPT said something naughty” trivia. Rather: realistically likely disasters, up to the scale of our history’s worst wars and oppressions. Better without AI suggests seven types of actions you, and all of us, can take to guard against such catastrophes—and to steer us toward a future we would like.

  • Signature moves: are we losing the ability to write by hand? – The Guardian

    It is popular to assume that we have replaced one old-fashioned, inefficient tool (handwriting) with a more convenient and efficient alternative (keyboarding). But as with the decline of face-to-face interactions, we are not accounting for what we lose in this tradeoff for efficiency, and for the unrecoverable ways of learning and knowing, particularly for children. A child who has mastered the keyboard but grows into an adult who still struggles to sign his own name is not an example of progress.

  • Century-scale storage – Harvard Law School Innovation Library Hub

    We are on the brink of a dark age, or have already entered one. The scale of art, music, and literature being lost each day as the World Wide Web shifts and degenerates represents the biggest loss of human cultural production since World War II. My generation was continuously warned by teachers, parents, and authority figures that we should be careful online because the internet is written in ink, and yet it turned out to be the exact opposite. As writer and researcher Kevin T. Baker remarked, “On the internet, Alexandria burns daily.”

  • The Augmented City: Seeing Through Disruption – Jacobs Institute at Cornell Tech (pdf)

    What is the next disruptive technology to reshape the urban public realm? And how can they better anticipate its effects upon arrival? … What are future uses of augmented reality in cities, and what are the implications for managing public space and safety? […]

    This report explores future threats and opportunities for cities posed by the next wave of potentially disruptive technologies, headlined by AI and AR. Before further unpacking these futures, it’s important to define key terms, technologies, and context — such as the difference between augmented-, virtual-, and mixed-reality (not to mention “spatial computing”). In addition, how do practices such as “luxury surveillance” and “digital redlining” combine to create “diminished reality?” And does “the metaverse” really mean anything at this point? (Not really.)

  • Tech right (disambiguation) – Jasmine Sun

    I was most surprised to see that right-wing rebukes of the “Tech Right” are near-identical to those that left tech critics make (toward different ends). It makes you wonder, as Andreessen and Pethokoukis both pose, whether the more important division is not left/right but rather accel/decel.

  • Your next AI wearable will listen to everything all the time – WIRED

    In the app, you can see a summary of the conversations you’ve had throughout the day, and at the day’s end, it generates a snippet of what the day was like and has the locations of where you had these chats on a map. But the most interesting feature is the middle tab, which is your “To-Dos.” These are automatically generated based on your conversations. I was speaking with my editor and we talked about taking a picture of a product, and lo and behold, Bee AI created a to-do for me to “Remember to take a picture for Mike.” (I must have said his name during the conversation.) You can check these off if you complete them. It’s worth pointing out that these to-dos are often not things I need to do.

  • ‘Hey, Gemini!’ Mega Galaxy S25 leak confirms major AI upgrades and lots more – Android Authority

    The leaked image above shows that the Galaxy S25 series is getting a new “Now Brief” feature that will provide users with a personalized summary of their day. It feels like a rehash of the Google Now feature of yesteryear. The image shows that Now Brief will include cards with information about the weather, suggestions for using different features, a recap of images clicked during the day, daily activity goals, and more. We’re guess[ing] the feature will use AI to collate all this information from various apps and other connected Galaxy devices.

  • iOS 18.3 temporarily removes notification summaries for news – MacRumors

    Apple is making changes to Notification Summaries following complaints that the way ‌Apple Intelligence‌ aggregated news notifications could lead to false headlines and confused customers. Several BBC notifications, for example, were improperly summarized, providing false information to readers.

  • ‘Mainlined into UK’s veins’: Labour announces huge public rollout of AI – The Guardian

    Under the 50-point AI action plan, an area of Oxfordshire near the headquarters of the UK Atomic Energy Authority at Culham will be designated the first AI growth zone. It will have fast-tracked planning arrangements for data centres as the government seeks to reposition Britain as a place where AI innovators believe they can build trillion-pound companies. Further zones will be created in as-yet-unnamed “de-industrialised areas of the country with access to power”. Multibillion-pound contracts will be signed to build the new public “compute” capacity – the microchips, processing units, memory and cabling that physically enable AI. There will also be a new “supercomputer”, which the government boasts will have sufficient AI power to play itself at chess half a million times a second. Sounding a note of caution, the Ada Lovelace Institute called for “a roadmap for addressing broader AI harms”, and stressed that piloting AI in the public sector “will have real-world impacts on people”.

  • Things we learned about LLMs in 2024 – Simon Willison’s Weblog

    A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.

  • The most polarizing thing on wheels – Texas Monthly

    The Cybertruck is fully inside this tradition, loaded with new technologies and new materials that tend to malfunction. Tesla has already recalled the Cybertruck six times. The recalls addressed a piece of the truck bed that could come loose while driving; a faulty windshield wiper motor; an accelerator pedal that could stick; and a display time-lag on the rearview camera. The company also issued a “stop sale” on the truck’s wheel covers, which damage the tires after a few thousand miles of driving. These issues are but a few examples of the vast number of more commonplace complaints that pervade the internet but which the company has not addressed. Leaky truck bed covers, chronic error screens, and fast-dying batteries are subject to no recalls at all. Yet Tesla’s chronic deficiencies, which would destroy a company like Ford or Toyota, have somehow not interfered with its relentless success, nor prevented Musk from becoming the richest man in the world, largely on the strength of Tesla’s stock price. It’s just the way Tesla rolls.

  • The world of tomorrow – Works in Progress

    As a child, I felt lucky to be born in 1960. I’d be only 40 in the year 2000 and might live half my life in the magical new century. By the time I was a teenager, however, the spell had broken. The once-enticing future morphed into a place of pollution, overcrowding, and ugliness. Limits replaced expansiveness. Glamour became horror. Progress seemed like a lie.

    Much has been written about how and why culture and policy repudiated the visions of material progress that animated the first half of the twentieth century, including a special issue of this magazine inspired by J Storrs Hall’s book Where Is My Flying Car? The subtitle of James Pethokoukis’s recent book The Conservative Futurist is ‘How to create the sci-fi world we were promised’. Like Peter Thiel’s famous complaint that ‘we wanted flying cars, instead we got 140 characters’, the phrase captures a sense of betrayal. Today’s techno-optimism is infused with nostalgia for the retro future.

  • WhichMIS?

    WhichMIS? is a free online publication for schools, multi-academy trusts and the wider education industry. It aims to present a balanced view of the MIS landscape in the UK, with views from all the key market players, as well as reviews, the latest news and expert commentary.

  • The problem with AI is about power, not technology – Jacobin

    Employers invoke the term AI to tell a story in which technological progress, union busting, and labor degradation are synonymous. However, this degradation is not a quality of the technology itself but rather of the relationship between capital and labor. The current discussion around AI and the future of work is the latest development in a longer history of employers seeking to undermine worker power by claiming that human labor is losing its value and that technological progress, rather than human agents, is responsible. […]

    AI, in other words, is not a revolutionary technology, but rather a story about technology. Over the course of the past century, unions have struggled to counter employers’ use of the ideological power of technological utopianism, or the idea that technology itself will produce an ideal, frictionless society. (Just one telling example of this is the name General Motors gave its pavilion at the 1939 World’s Fair: Futurama.) AI is yet another chapter in this story: technological utopianism deployed to degrade labor by rhetorically obscuring it. If labor unions come to understand changes to the means of production outside the terms of technological progress, it will become easier for them to negotiate terms here and now, rather than debate what effect those changes might have in a vague, all-too-speculative future.

  • AI is making Philippine call center work more efficient, for better and worse – Rest of World

    Bajala says each of his calls at Concentrix is monitored by an artificial intelligence (AI) program that checks his performance. He says his volume of calls has increased under the AI’s watch. At his previous call center job, without an AI program, he answered at most 30 calls per eight-hour shift. Now, he gets through that many before lunchtime. He gets help from an AI “co-pilot,” an assistant that pulls up caller information and makes suggestions in real time. “The co-pilot is helpful,” he says. “But I have to please the AI. The average handling time for each call is 5 to 7 minutes. I can’t go beyond that.” “It’s like we’ve become the robots,” he says. […]

    It works like this, the workers said: a sentiment analysis program could be deployed in real time to detect the mood of a conversation. It could also work retroactively, as part of an advanced speech analysis program that transcribes the conversation and judges the emotional state of the agent and caller. Bajala said the program scores him on his tone, his pitch, the mood of the call, his use of positive language, whether he avoided interrupting or speaking over a caller, how long he put the caller on hold, and how quickly he resolved the issue. Bajala said he nudges customers toward high-scoring responses: “yes,” “perfect,” “great.” Every stutter, pause, mispronounced word, or deviation from a script earns him a demerit. The program grades Bajala, and, though his base pay remains fixed, continually underperforming could mean probation, no incentives, or even termination, he said. “AI is supposed to make our lives easier, but I just see it as my boss,” he said.

  • Electric Dreams: Art and technology before the internet – Tate

    As the field gained international popularity in the 1960s, second-generation cyberneticists introduced principles of ‘observation’ and ‘influence’. This allowed them to link systems together into complex ecologies. Cybernetics came to be applied more widely to various social, environmental and philosophical contexts. It developed a cultural dimension within the 1960s hippie counterculture, whose members experimented with new technologies alongside their interest in alternative lifestyles and mind-altering experiences.

    Many artists and thinkers turned to cybernetics to make sense of a newly interconnected world, increasingly driven by technological development and interactions with machines. As a field concerned with constructing systems, cybernetics also holds the potential to dismantle existing structures and rebuild them anew. Artists responded to these ideas by creating systems-based works that performed creative acts with minimal human intervention, or which responded in real time to the interactions of their viewers.

  • Alexa’s new AI brain is stuck in the lab – Bloomberg

    It’s true that Alexa is little more than a glorified kitchen timer for many people. It hasn’t become the money maker Amazon anticipated, despite the company once estimating that more than a quarter of US households own at least one Alexa-enabled device. But if Amazon can capitalize on that reach and convince even a fraction of its customers to pay for a souped-up AlexaGPT, the floundering unit could finally turn a profit and secure its future at an institutionally frugal company. If Amazon fails to meet the challenge, Alexa may go down as one of the biggest upsets in the history of consumer electronics, on par with Microsoft’s smartphone whiff.

  • ‘We were wrong’: An oral history of WIRED’s original website – WIRED

    Ian: Back in those days, we’d say, The nice thing about the internet is how safe it is. Everybody’s there to help you, and everybody just wants to do good things. People asked, Why require passwords for stuff, because who’s going to do anything terrible on the internet?

    Kevin: Today, a new thing comes along and people immediately say, “I don’t know what it is, but it’s going to hurt me. It’s going to bite me.” That’s definitely a change that wasn’t present when we were starting.

    Jeff: But nostalgia can be dangerous. It was really hard what we did, and stressful, and we didn’t know what we were doing. When people say, “If we could only go back to then,” I’m like, no, we only had modems. It was terrible.

    John P: As a business, HotWired failed. But all that stuff that we were doing, it was scientific investigation.

    Jonathan: We thought the internet was going to be good for people. We were wrong.

    Jeff: I still feel like literally anybody with an idea can start hacking on the web or making apps or things like that. That’s all still there. I think the nucleus of what we started back then still exists on the web, and it still makes me really, really happy.

    John: We were lucky with WIRED. With HotWired there was no choice, and we couldn’t do it differently if we went back and tried. But we were unlucky to be first.

  • AI Safety for Fleshy Humans by Nicky Case & Hack Club

    This 3-part series is your one-stop-shop to understand the core ideas of AI & AI Safety — explained in a friendly, accessible, and slightly opinionated way!