<![CDATA[ PCGamer ]]> https://www.pcgamer.com Tue, 09 Jul 2024 01:27:10 +0000 en <![CDATA[ Google's AI visionary says we'll 'expand intelligence a millionfold by 2045' thanks to nanobots, the tech will resurrect the dead, and we're all going to live forever ]]> AI is undoubtedly the biggest technology topic of the last decade, with mind-bogglingly vast resources from companies including Google, OpenAI and Microsoft being poured into the field. Despite that, the results so far are somewhat mixed. Google's AI answers are often just straight-up dumb (and incidentally are behind a nearly 50% increase in the company's greenhouse gas emissions over the last five years), AI imagery and videos are filled with obvious errors, and the chatbots… well, they're a bit better, but they're still chatbots.

One man, however, both predicted this level of interest and certain elements of how AI is developing. The Guardian has a new interview with Ray Kurzweil, a futurist and computer scientist best-known for his 2005 book The Singularity is Near, with the "Singularity" being the melding of human consciousness and AI. Kurzweil is an authority on AI, and his current job title is remarkable: he is "principal researcher and AI visionary" at Google. 

The Singularity is Near predicted that AI would reach the level of human intelligence by 2029, while the great merging of our brains with AI will occur around 2045. Now he's back with a follow-up called The Singularity is Nearer, a title which doesn't need much explanation. Strap yourself in for a dose of what some might call techno-futurism, while others may prefer the term dystopian madness.

Kurzweil stands by his 2005 predictions, and reckons 2029 remains an accurate date for both "human-level intelligence and for artificial general intelligence (AGI), which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain, and by 2029 that will be achieved in most respects." He reckons there may be a few years beyond this where AI can't surpass "the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights," but eventually "it will."

The real nightmare fuel comes with Kurzweil's notion of the Singularity, which he views as a positive thing and about which he makes some absolutely wild claims. "We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots—robots the size of molecules—that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness."

Claiming that your field is going to "expand intelligence a millionfold" is the kind of total hubris that belongs at the start of a bad science fiction novel, and strikes me as so abstract as to be essentially meaningless. We don't even understand how our own brains work, so the notion that they can be both replicated and altered to the whims of people like Kurzweil is deeply unattractive. Let's be clear: we are talking about changing people's brains and physiology by injecting them with nanomachines. I somehow don't think that's all going to go as swimmingly as some advocates claim.

The AI visionary acknowledges "People do say 'I don’t want that'" and then argues "they thought they didn’t want phones either!" Kurzweil returns to the theme of phones when discussing accessibility, and the notion that AI advancements will disproportionately benefit the rich: "When [mobile] phones were new they were very expensive and also did a terrible job [...] Now they are very affordable and extremely useful. About three quarters of people in the world have one… this issue goes away over time."

Ray Kurzweil speaking at SXSW. (Image credit: Diego Donamaria via Getty Images)

Live forever

My first plan is to stay alive, reaching longevity escape velocity. I’m also intending to create a replicant of myself. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback.

Ray Kurzweil

Hmm. Kurzweil has a chapter on "perils" in the new book, but seems quite relaxed about the possibility of doomsday scenarios. "We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive."

I straight-up do not believe that, and do not trust these big tech companies or their research teams to prioritise safety over AI advancement. Nothing in tech has ever worked this way, and even though it's now somewhat dated, the Silicon Valley philosophy of "move fast and break things" seems to perfectly encapsulate the current AI craze.

Kurzweil's life and work are all bound up with this technology, of course, so you would expect him to be making the optimistic case. Even so, the following is where I check out: immortality.

"In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress," says Kurzweil. "And as we move past that we’ll actually get back more years. It isn’t a solid guarantee of living forever—there are still accidents—but your probability of dying won’t increase year to year. The capability to bring back departed humans digitally will bring up some interesting societal and legal questions."

(Image credit: Warner Bros.)

AI is going to raise the dead! I really have heard it all now. As for Kurzweil himself: "My first plan is to stay alive, reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s. I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him."

The phrase "a little bit" is doing a lot of heavy lifting there, because what Kurzweil means is that the replicant of his father was not, in fact, like his father. The interview ends on the note that "it is not going to be us versus AI: AI is going inside ourselves."

Well. Kurzweil is a hugely respected figure, and holds significant sway within the AI field. I'm just blown away by how much of this he seems to think is desirable, never mind achievable, and by the breezy way in which the manifold potential problems with this technology are dismissed. In 10 years we'll be increasing our life expectancy with nanobots, and in 20 we'll all be some sort of human-hardware hybrid, with our brains dominated by software we don't understand and don't control on a personal level. Oh, and we'll be resurrecting the dead as digital avatars.

AI is a technology that is currently defined not by what it can do, but by what its advocates promise it will be able to do. And who knows, Kurzweil may well turn out to be right about everything. But personally speaking, I quite like being me, and I have no real desire to bring dead relatives back to life through ghoulish software approximations. Some might call this playing god, but I prefer to put it another way. This whole philosophy is as mad as a badger in a cake shop, and will end just as well.

]]>
https://www.pcgamer.com/software/ai/googles-ai-visionary-says-well-expand-intelligence-a-millionfold-by-2045-thanks-to-nanobots-the-tech-will-resurrect-the-dead-and-were-all-going-to-live-forever nuRtNcB6XuCVkHZ28P4XpY Mon, 08 Jul 2024 18:25:02 +0000
<![CDATA[ An infamous dataset of leaked login details, updated last week, now houses 9,948,575,739 passwords and poses the biggest threat to our online security ever ]]> Check your passwords, people, because if there was ever a good reason to not reuse the same password, or even variants of the same password, then the latest version of the RockYou collection of leaked or stolen passwords must surely be it. With almost 10 billion unique passwords, the dataset is the largest source of genuine login details, from all around the world, making the risk of cyberattacks as high as it's ever been.

The astonishing number was reported by Cybernews (via Sweclockers) after the updated dataset was posted on a forum used by hackers. Back in 2009, social media company RockYou suffered a data breach in which 32 million user accounts were compromised. Over a decade later, in 2021, a 100 GB text file titled RockYou2021 was posted on hacking forums.

It contained around 8.5 billion passwords, making it then the largest dataset of leaked login details since the 3.2 billion COMB collection in 2021. Now, RockYou2024 is larger still and holds just shy of 10 billion unique passwords. Even if one accounts for the fact that every person who's online will have multiple login accounts, the figure is sufficiently large to be of major concern.

The biggest danger the compilation poses is that the information can be used to increase the success rate of credential stuffing, a type of brute-force attack that replays leaked login details across multiple login attempts to gain access to accounts. Not only does this put individuals at risk of identity theft, but it also increases the chances of the business hosting the online account suffering a comprehensive data breach.
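The mechanics of credential stuffing are brutally simple, which is why password reuse is so dangerous. Here's a toy sketch in Python; all the emails, passwords, and the "service" are invented for illustration. The attacker replays each leaked email/password pair once, and only the account that reuses a leaked password falls:

```python
# Toy model of credential stuffing. All emails, passwords, and the "service"
# below are invented for illustration only.
leaked_pairs = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "p@ssw0rd"),
]

# A second, unrelated service's login database (hypothetical).
other_service = {
    "alice@example.com": "hunter2",      # password reused -> compromised
    "bob@example.com": "correct-horse",  # unique password -> safe
}

def stuff_credentials(pairs, service):
    """Replay each leaked pair once; return the accounts that open."""
    cracked = []
    for email, password in pairs:
        if service.get(email) == password:
            cracked.append(email)
    return cracked

print(stuff_credentials(leaked_pairs, other_service))  # ['alice@example.com']
```

Each leaked pair costs the attacker exactly one login attempt, which is why even rate-limited services struggle to tell these attempts apart from genuine logins.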

This information is then fed back into the RockYou dataset, making it increasingly potent. Any decent cloud or hosting service will have mechanisms to combat brute-force attacks, but if a login appears genuine (because it's using a valid email address and password), then there's little the service can do to prevent access.

If this news comes across as being very alarming, then that's a good thing. Because it means people are more likely to take action to prevent the situation from becoming worse.

If you're wondering what exactly you should do, then here's my advice. Never assume that any of your online accounts are safe and never use the same password for any of them—even variations of the same password are risky to use.

I strongly recommend that you change your passwords now, using a combination of three words that you can easily remember, making sure to include numbers and special characters. For any account that offers it, also make sure you enable two-factor or multi-factor authentication (2FA/MFA).
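That three-words-plus-extras recipe is easy to automate. Here is a minimal sketch in Python using the standard library's `secrets` module for cryptographic randomness; the word list is a made-up stub for illustration, and a real one (such as the EFF diceware list) would run to thousands of words:

```python
import secrets
import string

# Stub word list for illustration only; use a long, real word list in practice.
WORDS = ["otter", "granite", "bicycle", "meadow", "copper", "lantern"]

def three_word_password(words=WORDS):
    """Three memorable words joined by hyphens, plus two digits and one
    special character, per the advice above."""
    picked = [secrets.choice(words) for _ in range(3)]
    digits = "".join(secrets.choice(string.digits) for _ in range(2))
    special = secrets.choice("!$%&*?#")
    return "-".join(picked) + digits + special

print(three_word_password())  # e.g. 'meadow-otter-copper41%'
```

The point of `secrets` over `random` is that its output is suitable for security purposes; the memorability comes from the words, and the digits and special character satisfy the composition rules many sites enforce.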

Cybernews offers a password-checking service, and you can use it to see if a specific password appears in the RockYou2024 dataset. It's safe to do this because you're not providing any other details, such as an email address, that would tie the password to a particular account. Even if one of your passwords isn't in the database, I still recommend that you add a layer of security to your online accounts. For any account that doesn't offer 2FA, it's even more important you change the password to a long and complex one right now.
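Cybernews doesn't document its checker's internals, but a common design for this kind of lookup, popularised by Have I Been Pwned's range API, is a k-anonymity query: the client sends only the first five characters of the password's SHA-1 hash, so the password itself never leaves your machine. A minimal local simulation of the idea, with the leaked-password set and server index invented for illustration:

```python
import hashlib

# Invented stand-in for a server-side database of leaked-password hashes.
LEAKED = {"password", "letmein", "rockyou"}
SERVER_INDEX = {}
for pw in LEAKED:
    h = hashlib.sha1(pw.encode()).hexdigest().upper()
    # Index full hashes by their first five hex characters.
    SERVER_INDEX.setdefault(h[:5], set()).add(h[5:])

def is_leaked(password):
    """k-anonymity check: only the 5-char hash prefix identifies the query,
    so the server never learns which exact password was checked."""
    h = hashlib.sha1(password.encode()).hexdigest().upper()
    candidates = SERVER_INDEX.get(h[:5], set())  # stand-in for a range query
    return h[5:] in candidates

print(is_leaked("password"))  # True
```

Many candidate hashes share a five-character prefix, so even the operator of the service can't tell which password you actually checked.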

]]>
https://www.pcgamer.com/software/security/an-infamous-dataset-of-leaked-login-details-updated-last-week-now-houses-9948575739-passwords-and-poses-the-biggest-threat-to-our-online-security-ever qN85xhjE5VWrZQSLLX4JNH Mon, 08 Jul 2024 15:11:22 +0000
<![CDATA[ Five new Steam games you probably missed (July 8, 2024) ]]>

On an average day about a dozen new games are released on Steam. And while we think that's a good thing, it can be understandably hard to keep up with. Potentially exciting gems are sure to be lost in the deluge of new things to play unless you sort through every single game that is released on Steam. So that’s exactly what we’ve done. If nothing catches your fancy this week, we've gathered the best PC games you can play right now and a running list of the 2024 games that are launching this year. 

Pomberito

Steam page
Release: June 25
Developer: Lara the Pitbull

Pomberito is a first-person narrative-driven horror game set in rural Argentina. It follows the daily life of a farmer toiling on a remote property, whose plot of land appears to be haunted by a Pombero: a hairy, "mythical humanoid creature of small stature" according to its Wikipedia entry. As off-putting as that may be, a farmer cannot neglect their duties, nor can they let any Pombero get in the way of a good yield. The game comprises five "micro-episodes" which can culminate in four distinct endings, and is definitely one for horror enthusiasts who prefer suspense and psychological discomfort over pedal-to-the-metal gore.

Kingdom, Dungeon, and Hero

Steam page
Release: July 2
Developer: Kraken Studios

Kingdom, Dungeon and Hero is one of those richly complicated hex-based military strategy games that can only exist on PC. The turn-based politicking takes place on a map that can host up to 50 kingdoms, and in addition to the prickly army building and warfare planning, there's also resource gathering and management, and no small amount of diplomacy. While the interface looks streamlined and modern (as streamlined as these things can get) the map itself is charmingly reminiscent of PC games from the days of yore, by which I mean, the mid '90s.

Retale

Steam page
Release: July 4
Developer: Grocery Guys

Retale follows the travails of a retail worker in a big, fluoro-lit, boomer-abundant department store. Played from a top-down perspective, it mirrors the frantic pace of Overcooked!, though Retale is single-player only and a lot more caustic. For instance, keeping customers happy via careful, receptive listening to their complaints is definitely one way to go about your job, but throwing a box at them is another (more enjoyable) way. Success can come in many and varied forms! For the most part you'll be doing what you're employed to do, stacking shelves, but you'll need to hone your build if you want to get Employee of the Month.

SpaceCraft Brawl

Steam page
Release: July 1
Developer: ruploz

Launched into Early Access last week, SpaceCraft Brawl is a game about creating your own spaceships and then pitting them against other players' creations. The grid-based creation workshop allows for endless tinkering with form factor, weaponry, shields and movement, but there's a bunch of pre-existing designs if you're not into building. Which, to be honest, would seem to miss the point, because SpaceCraft Brawl is all about the fun of seeing whether your meticulously constructed (or flat-out bizarre) creations can stand a chance in battle. There are a bunch of different weapons and abilities (with more to come down the line) and a map editor too. The Early Access period doesn't have a specified time frame, but there seems to be plenty to sink your teeth into already.

Angelstruck

Steam page
Release: July 6
Developer: Feral Paw

Angelstruck is a slick bullet-hell shmup roguelite in a dark fantasy skin. The much put-upon protagonist moves left and right as waves of flying enemies descend upon them, with respite from the relentless pace coming in the form of random power-ups. These boons can—in the spirit of the genre—dramatically change the trajectory of a run, and synergizing abilities can lead to utterly broken builds. Definitely good Steam Deck fodder.

]]>
https://www.pcgamer.com/software/platforms/five-new-steam-games-you-probably-missed-july-8-2024 VHAL9kdbr3Rp4Herxqzx4E Mon, 08 Jul 2024 01:21:24 +0000
<![CDATA[ Microsoft patents a technique to display encrypted documents so only you can see them ]]> If you're working on an important document in a busy environment and don't want people to see what you're doing at a glance, then you could use a privacy screen on the display or an application that dims areas not in your gaze. Microsoft has patented a system that it believes is superior to both because it makes the document fully encrypted and illegible at all times—apart from the section you're directly looking at on the display.

The patent's details (via Windows Report) are like all such publications in that specifics on exactly how everything works aren't covered. Instead, it gives a broad overview of the nature of the technology. What Microsoft proposes is a system that takes the documents you're working on and encrypts them in such a way that the contents are secure but the overall structure of the text remains the same.

That's even the case when you pull the document up for display on your PC's screen. But then the clever bit kicks in. Using a suitable webcam or another device that can track the movement of your eyes, the algorithm determines exactly where your focus is and uses the information to generate an alpha-blending mask—think of this as being like a 'hole' in the encrypted document that lets you see the original material underneath.

As your eyes move about, the 'hole' follows along to ensure that you don't suddenly hit a line of unintelligible text. Statistical methods can be used to predict where the eye's motion will lead, reducing the latency between the eye tracking and mask movement. Anyone else looking at the screen will just see gibberish.
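The patent doesn't publish an algorithm, but the "hole" that follows your gaze can be sketched as a per-pixel alpha mask with a soft edge. A minimal illustration in Python, where the inner/outer radii and the linear falloff are entirely my own invention, not values from the patent:

```python
import math

def gaze_mask(width, height, gaze_x, gaze_y, inner=40.0, outer=80.0):
    """Per-pixel alpha: 1.0 within `inner` px of the gaze point, fading
    linearly to 0.0 at `outer`, so the mask has no hard edge."""
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            d = math.hypot(x - gaze_x, y - gaze_y)
            if d <= inner:
                alpha = 1.0
            elif d >= outer:
                alpha = 0.0
            else:
                alpha = 1.0 - (d - inner) / (outer - inner)
            row.append(alpha)
        mask.append(row)
    return mask

def blend(plain, cipher, alpha):
    """Alpha-blend one pixel value: the plaintext render shows through
    where alpha is high; the encrypted render dominates elsewhere."""
    return alpha * plain + (1.0 - alpha) * cipher

m = gaze_mask(200, 100, gaze_x=100, gaze_y=50)
print(m[50][100], m[0][0], m[50][160])  # 1.0 0.0 0.5
```

In a real implementation the mask would be recomputed per frame from the eye tracker's latest (predicted) gaze position, with the blend done on the GPU.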

Microsoft's algorithm takes into account that we use our peripheral vision quite a lot when we read, even though that text isn't directly in focus, which is why the edge of the mask isn't a hard line between normal and encrypted text. It also accounts for the fact that eye motion isn't perfectly smooth, but proceeds in rapid jumps (saccades).

Combined, these additions to the algorithm do make it seem a little better than AMD's Privacy View feature in its Adrenalin software, which just dims the regions of the screen you're not looking at.

However, like all such eye-tracking security features, it doesn't seem to prevent one particular issue. If the original document is visible and legible on any part of the screen, there's always the chance that someone can see it. None of these software-based privacy systems can stop someone from taking an image of your screen from a distance or out of the eye-tracking device's field of view.

Still, the Microsoft system is, in my humble opinion, considerably better than laptop privacy screens, which just disable some of the backlights to make it harder to read content from the side. Of course, one could just not work on sensitive documents in a public area, but if it has to be done, I'd prefer the content to be genuinely encrypted in some way and not just a bit dimmer.

]]>
https://www.pcgamer.com/software/security/microsoft-patents-a-technique-to-display-encrypted-documents-so-only-you-can-see-them fwYbknaj736LHYZjt3Caud Fri, 05 Jul 2024 11:47:23 +0000
<![CDATA[ The internet is not a free-for-all—we shouldn't let big tech companies wish copyright out of existence ]]>
Jacob Ridley, senior hardware editor

Jacob Ridley headshot on a green background

(Image credit: Future)

This week: I just so happened to be listening to Mustafa Suleyman's book, The Coming Wave: AI, Power and the 21st Century's Greatest Dilemma, in which the DeepMind co-founder lays out his thoughts on AI and the "technological revolution" he suggests has already begun.

When a generative AI system creates an image or some text, it all begins with training. Without an understanding of how words are statistically related to one another, or without knowledge of what an image is showing, a generative AI cannot successfully recreate it. The image generated by an AI might be a new work in itself, a complete original, though it's influenced by real works—millions of them—owned by millions of people.

How AI companies, or the firms which create datasets used by AI systems, continue to collect data is a source of much contention—an uncomfortable truth hanging over AI's exponential growth. Many AI firms have quietly assumed a position of acting as though they're allowed to use data freely from the web—be it images, videos, or text. Without this justification, they'd be stuck having to actually pay for the content they're using, threatening said growth. Meanwhile, artists, content creators, journalists, bloggers, producers, novelists, coders, developers, musicians, and many more argue that's absolute hogwash.

This split is best exemplified by comments made during a CNBC interview at Aspen Ideas Festival (via The Verge) by the CEO of Microsoft AI, Mustafa Suleyman.

Suleyman is at the centre of AI development today. Not only is he leading Microsoft's AI efforts, he co-founded DeepMind, which was later bought by Google, and drove Google's AI efforts, too. He's had a large part to play in how two of the largest tech firms on the planet deliver their AI systems. I've been listening to the audiobook of The Coming Wave these past few weeks, as he's someone informed and with a lot to say about how AI has impacted, and will impact, our daily lives.

So, I say this with the utmost respect to a pioneer in his field: I believe his idea of a "social contract" for the internet is complete nonsense.

Suleyman, when asked by CNBC's Andrew Ross Sorkin on whether AI companies have "effectively stolen the world's IP", had this to say:

With respect to content that is already on the open web, the social contract of that content since the '90s has been that it is fair use.

Mustafa Suleyman, Microsoft

"It's a very fair argument. With respect to content that is already on the open web, the social contract of that content since the '90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding."

Except that isn't the understanding. At least not mine, anyway, and if you've been taking content freely from anywhere on the internet this whole time, I have some very bad news for you.

If we ignore the fact that freeware is already a thing and, no, not everything on the internet is freeware—just think of the ramifications for a moment if it were so, especially for Suleyman's own employer, Microsoft—there's further legislation to prevent a free-for-all online.

There's something called copyright, which here in the UK was enshrined into law through the Copyright, Designs and Patents Act 1988. As a journalist, I have to be very conscious of the right I have to use anything on the internet, otherwise I may (rightly) be forced to pay a very large sum of money to the copyright holder.

Let's not get too into the weeds with this (he says, not even halfway through a 2,000-word column), but generally copyright law covers "original literary, dramatic, musical, or artistic works." That includes all manner of text, too, not just novels or short stories, and protection usually lasts for 70 years after the creator's death. The rights are initially assigned to the "first owner": the creator of that work.

Copyright is automatically applied, meaning someone need not register to get it, but only applies to original works.

Some argue the creations of generative AI are original works, and therefore qualify for automatic copyright. To whom you grant that automatic copyright is a tricky question: when animals have taken photos of themselves (search 'monkey selfie'), our very human laws don't quite know what to make of it. We actually ended up with a 2014 ruling by the United States Copyright Office that works by non-humans are not copyrightable (PDF). That's despite a human playing a pivotal role in setting up the entire thing—which could have implications for AI-generated art, not least because that same ruling applies similar constraints to works created by a computer.

Here's a monkey. It was meant to be taking a selfie but hey ho—Andy Edser, 2024. (Image credit: Future)

Whether you own the copyright to the art you prompted through a generative AI system, even after finessing those prompts to get it just right, is an ongoing debate. However, US courts currently rule against granting copyright in these instances, and have even barred award-winning artwork from copyright.

But this is a tangent. Let's focus back on the use of copyrighted works for training purposes, because clearly copyright has something to do with the mass collection and use of images, videos, and text, without permission, by an AI system likely run by a private business for commercial gain.

Within UK law, the copyright owner (automatically the author or creator, or the employer of said author or creator) gets to say who can use their work and how. It's easy to waive your rights—I might see you've posted an image of a fun PC mod and message you to ask permission to use it on PC Gamer, for example. If you say yes, providing I give you sufficient attribution for your work, everyone is happy and life moves on.

If I don't ask your permission and subsequently take the image or a "substantial" part of it (which some do, no doubt about that), then upon finding out that I've encroached on your copyright, you could demand I remove the offending material, sue for damages, or even get an injunction banning me from publishing it or repeating the offence.

This has been the case since the act was introduced in the UK in 1988—which, I'd add, was before the internet was a big deal. Similar protections also exist around the world, including in the US and EU.

So there's really no excuse for saying we've all been living in some kind of wild west where anything goes on the internet. It doesn't; AI companies just want that to be the case, and they are fighting to protect their own interests.

There are a few defences in UK law for using copyrighted works without permission. These mostly come under something called fair dealing. Fair use in the US is a similar concept, but different in practice and applicability—as a UK national, it's mostly fair dealing that covers my actions. There are a few versions of fair dealing: one covers reporting of current events, another review or criticism, and quotation and parody are also covered. Unless AI is actually a big joke, that last one won't offer much of a defence.

I can't be sued for copyright by my own company! Wait, can I?—Jacob Ridley, 2024 (Image credit: Future)

Neither will the rest. They don't cover photographs, for one, which are proactively defended in the law. They also require a user not to take unfair commercial advantage of the copyright owner's work, and to use only what's necessary for the defined purpose. They also frequently require sufficient acknowledgement—none of which is the done thing in generative AI.

Suleyman does at least tend to agree that some publishers have the right not to share their content—a right that has already been exploited, as he explained to CNBC (which, by the way, I can quote thanks to fair dealing):

That's a grey area and I think that's going to work its way through the courts.

Mustafa Suleyman, Microsoft

"There's a separate category where a website or a publisher or a news organisation had explicitly said do not scrape or crawl me for any other reason than indexing me so other people can find that content. That's a grey area and I think that's going to work its way through the courts."

"So far, some people have taken that information. I don't know who, who hasn't, but that's going to get litigated and I think that's rightly so."

Except that one form of content that doesn't generally come under copyright protection is actually the news story itself.

I'm frustrated by the moves from Google and Microsoft to use AI to summarise my articles into little regurgitated bites that threaten to destroy the business of the internet, but I wouldn't want to argue that's copyright infringement in court. Taking key information from something published by another outlet and republishing it yourself is known as "lifting" a story. Providing you don't use the same words and layout—you don't take the piss, basically—it's legally fine to do under existing law.

Plenty of publishers will argue against AI systems on the finer points of what constitutes lifting and what's just taking without asking and without fair recompense—see the New York Times vs. OpenAI case. I'll leave that to the lawyers. My argument is that, legal or not, an AI summarising stories with nothing flowing back to the people working to create those stories will ultimately do a lot more harm than good in the long run.

You can have this one for free, AI—Jacob Ridley, 2024 (Image credit: Future)

Simply put, I don't understand Suleyman's argument here. Maybe it's a degree of wishful thinking from someone inside the AI inner circle looking out, or maybe he looks around the internet and sees some sort of wild west without any rules. But that's not the case, even considering the common exceptions to copyright law we'll get to in a moment.

Copyright infringement happens all the time on the web, and it's a debasement both of our rights as creators not to have our stuff nicked and of the value of the content itself. Does that mean we should just lie down, admit defeat, and let an AI system or dataset crawler rewrite the rules so that copyright need not apply to them? I don't think so.


There are some measures coming into place to try to defend copyright in a world obsessed with AI. The EU has introduced the Artificial Intelligence (AI) Act, which includes a transparency requirement for "publishing summaries of copyrighted data used in training" and rules on compliance with EU copyright law, much of which is similar to that of the UK.

Though the EU also includes some get-outs allowing data mining of copyrighted works in some instances: one permits use in research and by cultural heritage institutions, and the other lets rights holders opt out of further use of their works by other organisations, including for machine learning. How exactly one opts out is, uh, not entirely clear (PDF).

The UK has something similar in place as an exception to the 1988 Act, which allows data mining for non-commercial use. This is generally not considered a viable defence for large AI firms with public, and commercial, generative AI systems. Following the sudden popularity of AI systems, the UK government had also planned another exception, though that has since fallen through. That's probably to the benefit of people in the UK, who are technically safe from data mining for commercial purposes, but not for the AI firms hoping to scrape data from within the UK's borders.

The exact ways in which companies hope to circumvent these limitations or how these laws look in practice are matters that lawyers, civil servants and politicians will have to debate for years to come. Though generally I just want to make it clear that these arguments exist because of copyright law—not for a lack of it.

Speaks for itself—Jacob Ridley, 2024. (Image credit: Future)

By acting as though these rules don't apply to them, and by putting pressure on governments to make allowances for AI due to the significant amount of money it promises to deliver, AI firms have largely gotten away with it to date. Though I'd hold they're mostly running on a strategy of "it's easier to ask for forgiveness than to ask for permission", and have been for a couple of years now. They might continue to get away with it, too. By the time we've got to grips with copyright claims and whether they even exist for AI, will it be possible to untrain AI systems already trained on datasets filled to the brim with copyrighted content? Oops, turns out we can't really do that very well.

"What a pity," the AI exec might say.

It's my take, as a person who creates for the internet and without any claims of being a copyright lawyer, that in the creation of any loopholes for the purposes of data mining we may end up with one rule for big AI firms and another for regular folk like you and me. The presumed benefits of AI-generated art, trained on the hard graft of your own creative work, would be deemed too valuable to human existence to be held back by petty copyright infringement. It could feel like that, or we could hold the AI companies, some worth billions of dollars, to account for the copyrighted content they're benefiting from.

If copyright owners don't manage to fend off AI, what will become of the internet, or "open web", as we know it? Will an artist want to publish anything online? Will social media platforms arise with the promise to be 'AI-proof'? Will the internet become more siloed as a result, split off into smaller communities off the beaten track and away from the prying eyes of Google, Microsoft, and crawlers sent out by dataset companies?

Because, after all, it's not just my words in an article that an AI might look fondly upon, or even someone's publicly published artwork, but perhaps your wedding photos or your smutty fanfiction. And what then of the AI-generated advert, for something you don't agree with, that looks like you or sounds like you—what of your ability to get it removed, or your likeness untrained? Now, that sounds a lot more like the pandemonium free-for-all that Suleyman believes has been happening this entire time.

]]>
https://www.pcgamer.com/software/ai/the-internet-is-not-a-free-for-all-for-ai YGDZjsBgmR9A9GgCSFviNn Wed, 03 Jul 2024 19:30:04 +0000
<![CDATA[ Google's dumb AI answers increased its greenhouse gas emissions by nearly 50% in the last 5 years ]]> As one of the largest and most influential tech firms in the world, Google has made various commitments about how the company will operate now and in the future. We're a long way from the innocent days of "don't be evil", and into the realms of more prosaic and potentially consequential stuff: such as the goal of reaching net zero emissions by the year 2030.

Google has now issued its 2024 environmental report, which euphemistically features a section called "AI for sustainability" and offers users the chance to sift through the report guided by AI. This is deeply ironic, because the take-home message of the report is that AI projects have put Google's emissions through the roof. The report reveals that the company's increased use of power-guzzling datacenters for AI research has seen its greenhouse gas emissions increase by 48% over the last five years.

Google blames its electricity consumption and wider supply chain for 2023's 14.3 million metric tons of emissions, a 13% rise over the 2022 figure. Bear in mind that Google's net zero target requires that it reduce emissions significantly. 

"Reaching net-zero emissions by 2030 is an extremely ambitious goal and we know it won't be easy," reads the report. "Our approach will continue to evolve and will require us to navigate significant uncertainty—including the uncertainty around the future environmental impact of AI, which is complex and difficult to predict."

Well… it seems pretty certain how AI is affecting the environment at this moment in time. On top of the above, Google goes on to basically throw its hands up. "Solutions for some key global challenges don't currently exist, and will depend heavily on the broader clean energy transition." Don't blame us, in other words, because no one's got nuclear fusion up and running yet.

The report goes on to claim, naturally, that AI will help solve the AI emissions problem by 2030: "AI has the potential to help mitigate 5-10% of global greenhouse gas (GHG) emissions by 2030." Google lists things like route efficiency for traffic and how AI can help operate traffic lights, which is all well and good, but I'm not sure it deals with the carbon elephant in the room.

There are, at least, those who agree that AI will eventually offset its own energy consumption. "Let's not go overboard on this," Bill Gates told journalists at a recent conference. "Datacentres are, in the most extreme case, a 6% addition [in energy demand] but probably only 2% to 2.5%. The question is, will AI accelerate a more than 6% reduction? And the answer is: certainly."

Google ends the report with a huge list of organisations it's working with and initiatives it's supported. These make for odd reading because, while I'm glad some of Google's endless cash has been spent on restoring "750 acres of monarch butterfly habitat across California", I'm not sure it acts as any kind of meaningful counter-weight to over 14 million metric tons of emissions.

Things are not going to get better, barring some unforeseen breakthrough. The International Energy Agency (IEA) estimates that the electricity usage of data centres is simply going to increase, and could double over the next two years to 1,000 TWh (terawatt hours). This estimate is widely accepted by analysts, and some think that AI-driven consumption is pushing the world towards a global energy crisis. Another cheery stat? In 2022 Microsoft and Google's data centres used 32 billion litres of water. As they run faster, they run hotter.

These numbers are unfathomably large, and show the utter commitment of big tech to pushing AI technologies forwards. It seems apt to wonder about what we're actually getting from this enormous use of resources, when so far the only real answers seem to be bad chatbots, slop imagery, and the never-ending promise this will all somehow change the world one day.

So we may end up cooking in our own skin at some point, but at least our AI companions will be able to advise on how best to alleviate the side-effects of impending death. Who knows, by 2030 they may even have a good answer.

]]>
https://www.pcgamer.com/software/ai/googles-dumb-ai-answers-increased-its-greenhouse-gas-emissions-by-nearly-50-in-the-last-5-years 2MqNQQhGyvXvMSrLAvVQbB Wed, 03 Jul 2024 18:49:24 +0000
<![CDATA[ Beware: The latest Windows 11 update might just trap you in a boot loop of virtualised doom ]]> Microsoft has confirmed what might initially seem to be a mildly, nay, arms-wavingly alarming issue with a recent Windows 11 preview build. Namely, installing it "might cause devices to restart repeatedly," requiring "recovery operations in order to restore normal use."

On deeper inspection, it’s actually not quite so alarming because this issue only seems to affect virtualised Windows environments that installed the optional June 26 KB5039302 update. This update is a preview build, which essentially means it’s most of July’s update packaged for an optional early download.

Thankfully, Microsoft caught the issue before the upcoming July 9 patch. If it had rolled out, some virtual Windows environments—presumably enterprise machines, for the most part—might have failed to start or repeatedly restarted. But Microsoft says the update "is now paused only for devices affected by the issue" and "might not be offered to Hyper-V virtual machines running on hosts that utilize certain processor types."

To be clear, it seems that it's only an issue for virtualised Windows systems such as Microsoft Cloud PCs or Azure Virtual Desktops, or possibly local virtual machines such as one you might run in VMware or VirtualBox. So don't worry about your physical PC if it has Hyper-V enabled: the problem only seems to affect virtual machines running on that hypervisor, not the host itself.

If you do run Windows on a cloud PC or virtual machine and you’ve installed this update, you can revert and uninstall it from the Windows Recovery environment once Windows eventually plonks you into it after the many reboots. Just go to Troubleshoot > Advanced Options > Uninstall Updates > Uninstall latest quality update, then reboot when it prompts you to.
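If you'd rather do the same thing from a command line (or script it across several affected VMs), Windows' built-in servicing tools can, in principle, remove an update by its KB number. This is a hedged sketch rather than Microsoft's documented recovery path for this particular issue, and the offline DISM route assumes the update shows up as a removable package:

```shell
# From an elevated PowerShell or Command Prompt inside a booted Windows
# session (or the recovery environment's command prompt), remove the
# June 26 preview update by its KB number:
wusa /uninstall /kb:5039302 /norestart

# If the VM won't boot far enough for wusa, DISM can service the offline
# image instead. First list installed packages to find the update's full
# package identity (the Windows drive may not be C: inside recovery):
DISM /Image:C:\ /Get-Packages

# Then remove it, substituting the package identity found above:
DISM /Image:C:\ /Remove-Package /PackageName:<package-identity>
```

The GUI route described above remains the safer bet for a one-off machine; the command-line route is mainly useful if you're dealing with more than a couple of affected virtual desktops.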

Thinking of upgrading?

Windows 11 Square logo

(Image credit: Microsoft)

Windows 11 review: What we think of the latest OS.
How to install Windows 11: Our guide to a secure install.
Windows 11 TPM requirement: Strict OS security.

For the non-virtualised Windows environments that will receive it, the KB5039302 update has a few nice improvements and fixes. Probably most relevant to PC gamers is the fix for an issue that prevented some GPUs from entering idle states when not in use. There’s also a fix for issues relating to high system CPU usage caused by the Windows Filtering Platform (WFP) driver, which is used by firewalls and so on.

So, more CPU performance freed up for pumping out the framez, and a smattering of GPU users seeing less power draw. What’s not to love? Oh, right, the boot loops. Sorry, virtual desktop users!

]]>
https://www.pcgamer.com/software/windows/beware-the-latest-windows-11-update-might-just-trap-you-in-a-boot-loop-of-virtualised-doom YoVid35xawVDjqeRfjgTC4 Wed, 03 Jul 2024 12:49:26 +0000
<![CDATA[ Xbox Live's 'major outage' is fixed, but we still don't know what went wrong ]]> Update: At around 9 pm ET, Microsoft announced that "users should no longer be encountering issues signing in to Xbox Live and services." The cause of the outage has not yet been revealed. 

(Image credit: Microsoft (Twitter))

Original story:

Microsoft says a "major outage" at Xbox Live is keeping some users from logging in, and it's taking longer than expected to get the problem fixed.

As noted by The Verge, the Xbox Support account on Twitter first acknowledged that "some users have been disconnected from Xbox Live" at around 3 pm ET. Just over an hour later, the account tweeted again to say "our investigation is taking longer than expected," and advised users having trouble to keep an eye on the Xbox Status page.

(Image credit: Microsoft (Twitter))

The nature of the problem, and whether it's the result of a DDoS attack or some other external influence, isn't clear. The status page currently indicates that all Xbox Live features and services are up and running except for "Account and profile," which is suffering a major outage that's keeping users from logging into Xbox Live.

"You may not be able to sign-in to your Xbox profile, may be disconnected while signed in, or have other related problems," the status page says. "Features that require sign-in like most games, apps and social activity won't be available."

I tried logging in and that is indeed what happened: After entering my login details, I was dropped to a blank screen, then kicked back to the not-logged-in status page. Oddly enough, I was able to report a problem, even though I'm not logged in, something some users on Twitter said they were unable to do—so there's clearly something weird going on.

(Image credit: Microsoft)

The most recent update to the status page says a "related issue" has been discovered and a resolution is pending. We'll keep an eye on things and update when the situation changes.

]]>
https://www.pcgamer.com/software/platforms/xbox-live-suffers-a-major-outage-thats-keeping-people-from-logging-in-microsoft-says-fixing-it-is-taking-longer-than-expected T83DFgVjJ9CvhhjuuhNysT Tue, 02 Jul 2024 21:23:05 +0000
<![CDATA[ The BBC dons its game dev cap to combine 2 things I fear and don't understand: The UK General Election and Roblox ]]> In the ultimate team-up of 'things I dread the societal consequences of', the UK General Election is coming to Roblox. It's courtesy of the BBC, and is meant to serve as "an engaging and fun way to learn more about the country’s election whilst interacting with [the BBC's] top political experts." Naturally, it's been set up to coincide with the upcoming election on July 4, the first one the UK has had since, let me see, three prime ministers ago.

In practice, what that means is that the BBC's Wonder Chase Roblox area—a big rambling space full of activities based on the Beeb's shows—has had its heart converted into a mockup of 10 Downing Street, presided over by the watchful eye of Larry the cat, the real-life chief mouser to the cabinet office. Also presided over, less regally, by real-life BBC journalists Laura Kuenssberg, Clive Myrie, and Jeremy Vine.

As you might imagine, it's all very much aimed at kids (or as the BBC refers to them, the "under 25 Roblox audience," which makes me feel older than sand) despite the presence of political heavyweights like Larry, and you should probably look elsewhere for a hard-hitting electoral breakdown ahead of this Thursday's election. Still, it's a cute and edu-taining way to try and get the youth interested in politics, even if—speaking as a former child—I doubt many of them will be that interested.

I actually went and had a stroll around the area myself. The BBC promises that, if you collect the hidden ballot boxes secreted around the zone, you can adopt Larry as your own pet in Roblox, which admittedly is the most tempting sell a videogame has ever offered me. I got a little confused, though: I found a couple of ballot boxes but, unlike other collectibles, they didn't just zap immaterially into my inventory. I had to pick them up and carry them about. I abandoned my quest after lugging a box back to Downing Street and throwing it directly at Larry's head, which accomplished nothing.

Anyway, if you want to adopt Larry for yourself, if you're someone from outside the UK who is mystifyingly interested in our baroque political process, or if you're just an actual child, you can find the BBC GE Roblox zone here.

]]>
https://www.pcgamer.com/software/platforms/the-bbc-dons-its-game-dev-cap-to-combine-2-things-i-fear-and-dont-understand-the-uk-general-election-and-roblox GkhomXyfG7wEectUzX3NB8 Tue, 02 Jul 2024 16:57:21 +0000
<![CDATA[ Five new Steam games you probably missed (July 1, 2024) ]]>
Best of the best

Baldur's Gate 3 - Jaheira with a glowing green sword looks ready for battle

(Image credit: Larian Studios)

2024 games: Upcoming releases
Best PC games: All-time favorites
Free PC games: Freebie fest
Best FPS games: Finest gunplay
Best MMOs: Massive worlds
Best RPGs: Grand adventures

On an average day about a dozen new games are released on Steam. And while we think that's a good thing, it can be understandably hard to keep up with. Potentially exciting gems are sure to be lost in the deluge of new things to play unless you sort through every single game that is released on Steam. So that’s exactly what we’ve done. If nothing catches your fancy this week, we've gathered the best PC games you can play right now and a running list of the 2024 games that are launching this year. 

Exorcist: Reviewer of Minds

Exorcist: Reviewer of Minds

(Image credit: 727 Not Hound)

Steam‌ ‌page‌ ‌
Release:‌ June 28
Developer:‌ 727 Not Hound

Here's another bizarre horror game in the style of Buckshot Roulette or Flathead. In other words, Exorcist: Reviewer of Minds is a simple replayable mini-game elevated by its disturbing atmosphere. The goal is to successfully exorcize a possessed child, which means researching which demon is responsible and then—ideally—expunging it by reciting its name. "Using information about the demon's origin and rank from the list of demons recorded in the holy texts, combine logical reasoning with a bit of luck to choose the demon's name," so reads the Steam description. As you'd expect, winning is ideal, because losing results in some pretty heavy—and grotesque—consequences.

Outbrk

Steam‌ ‌page‌ ‌
Release:‌ June 29
Developer:‌ Sublime

Remember the film Twister? This is a videogame about chasing storms, set in a 625 square kilometre open world "fictive reproduction of America's tornado alley". You're a professional storm chaser seeking "valuable" data, which means getting extremely close to very dangerous weather phenomena, ideally before any of your competitors get there first. These quixotic efforts can be conducted with a group of friends online, and the better you get at it the more cash you'll have to upgrade your vehicle and kit it out with stormproof gear. Outbrk is an Early Access affair: it'll stay there for two years while the studio continues to improve and add to the game.

Street Uni X

Steam‌ ‌page‌ ‌
Release:‌ June 27
Developers:‌ daffodil & friends

Think Tony Hawk's Pro Skater or SSX, but instead of skate- or snowboards you're rocking around on a unicycle. Yes, that's right: unicycles are cool enough to receive the pre-millennial pop punk treatment of the ye olde PS1 / PS2 extreme sports classics too! Street Uni X is very much a love letter to that period, cribbing not only its attitude but its early 3D art style as well. Unicycle enthusiasts: your time has come. This looks like a lot of tricky fun.

SpyXAnya: Operation Memories

Steam‌ ‌page‌ ‌
Release:‌ June 28
Developer:‌ GrooveBoxJapan

Based on the Spy x Family anime series, SpyXAnya follows the efforts of a kid protagonist—the eponymous Anya—tasked with filling out a photo diary of memories. She'll visit over a dozen locations, talk to a bunch of whimsical characters, take a lot of photos, partake in many mini-games, and all the while a narrative will unfold. I'm pretty sure this is aimed squarely at kids, but who knows? If you know a young'un who's a fan of the anime, this looks like some cheerful, harmless fun.

Frogun Encore

Steam‌ ‌page‌ ‌
Release:‌ June 26
Developer:‌ Molegato

Frogun Encore is a standalone expansion for the well-received 2022 3D platformer. It doesn't mess with the formula save for the introduction of local cooperative play, which is always an extremely welcome feature in a cosy 3D platformer. Aside from that, expect some new moves for the protagonist(s), a bunch of tough bosses, and gorgeous Nintendo 64-inspired graphics.

]]>
https://www.pcgamer.com/software/platforms/five-new-steam-games-you-probably-missed-july-1-2024 gZirPdqXuHEUrTFVgRUds3 Mon, 01 Jul 2024 01:57:43 +0000