
China is dispatching crack teams of AI interrogators to make sure its corporations' chatbots are upholding 'core socialist values'

If I were to ask you what core values were embodied in western AI, what would you tell me? Unorthodox pizza technique? Annihilating the actually good parts of copyright law? The resurrection of the dead and the life of the world to come?

All of the above, perhaps, and all subordinated to the paramount value that is lining the pockets of tech shareholders. Not so in China, apparently, where AI bots created by some of the country's biggest corporations are being subjected to a battery of tests to ensure compliance with "core socialist values," as reported by the FT.

The Cyberspace Administration of China (CAC)—the one with the throwback revolutionary-style anthem which, you have to admit, goes hard—is reviewing AI models developed by behemoths like ByteDance (the TikTok company) and Alibaba to ensure they comply with the country's censorship rules.

Per "multiple people involved with the process," says the FT, squads of cybersecurity officials are turning up at AI firm offices and interrogating their large language models, hitting them with a gamut of questions about politically sensitive topics to ensure they don't go wildly off-script.

What counts as a politically sensitive topic? All the stuff you'd expect: questions about the Tiananmen Square massacre, internet memes mocking Chinese president Xi Jinping, and anything else featuring keywords pertaining to subjects that risk "undermining national unity" or "subversion of state power."

Sounds simple enough, but AI bots can be difficult to wrangle (one Beijing AI employee told the FT they were "very, very uninhibited," and I can only imagine them wincing while saying that), and the officials from the CAC aren't always clear about explaining why a bot has failed its tests. To make things trickier, the authorities don't want AI to simply avoid politics altogether, even sensitive topics; on top of that, they demand bots reject no more than 5% of the queries put to them.

The result is a patchwork response to the restrictions by…

Read more on pcgamer.com