By April Lanux
"It's easier to get forgiveness than permission," says John, a software engineer at a financial services technology company. "Just get on with it. And if you get in trouble later, then clear it up."
He's one of the many people who are using their own AI tools at work, without the permission of their IT division (which is why we are not using John's full name).
According to a survey by Software AG, half of all knowledge workers use personal AI tools.
The research defines knowledge workers as "those who primarily work at a desk or computer".
For some it's because their IT team doesn't offer AI tools, while others said they wanted their own choice of tools.
John's company provides GitHub Copilot for AI-supported software development, but he prefers Cursor.
"It's largely a glorified autocomplete, but it is very good," he says. "It completes 15 lines at a time, and then you look over it and say, 'yes, that's what I would've typed'. It frees you up. You feel more fluent."
His unauthorised use isn't a violation of policy; it's simply easier than risking a lengthy approvals process, he says. "I'm too lazy and well paid to chase up the expenses," he adds.
John recommends that companies stay flexible in their choice of AI tools. "I've been telling people at work not to renew team licences for a year at a time because in three months the whole landscape changes," he says. "Everybody's going to want to do something different and will feel trapped by the sunk cost."
The recent release of DeepSeek, a freely available AI model from China, is only likely to expand the AI options.
Peter (not his real name) is a product manager at a data storage company, which offers its people the Google Gemini AI chatbot.
External AI tools are banned but Peter uses ChatGPT through search tool Kagi. He finds the biggest benefit of AI comes from challenging his thinking when he asks the chatbot to respond to his plans from different customer perspectives.
"The AI is not so much giving you answers, as giving you a sparring partner," he says. "As a product manager, you have a lot of responsibility and don't have a lot of good outlets to discuss strategy openly. These tools allow that in an unfettered and unlimited capacity."
The version of ChatGPT he uses (4o) can analyse video. "You can get summaries of competitors' videos and have a whole conversation [with the AI tool] about the points in the videos and how they overlap with your own products."
In a 10-minute ChatGPT conversation he can review material that would take two or three hours watching the videos.
He estimates that his increased productivity is equivalent to the company getting a third of an additional person working for free.
He's not sure why the company has banned external AI. "I think it's a control thing," he says. "Companies want to have a say in what tools their employees use. It's a new frontier of IT and they just want to be conservative."
The use of unauthorised AI applications is sometimes called "shadow AI". It's a more specific version of "shadow IT", which is when someone uses software or services the IT department hasn't approved.
Harmonic Security helps to identify shadow AI and to prevent corporate data being entered into AI tools inappropriately.
It is tracking more than 10,000 AI apps and has seen more than 5,000 of them in use.
These include custom versions of ChatGPT and business software that has added AI features, such as communications tool Slack.
However popular it is, shadow AI comes with risks.
Modern AI tools are built by digesting huge amounts of information, in a process called training.
Around 30% of the applications Harmonic Security has seen being used train using information entered by the user.
That means the user's information becomes part of the AI tool and could be output to other users in the future.
Companies may be concerned about their trade secrets being exposed by the AI tool's answers, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that's unlikely. "It's pretty hard to get the data straight out of these [AI tools]," he says.
However, firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.
It will be hard for companies to fight against the use of AI tools, as they can be extremely useful, particularly for younger workers.
"[AI] allows you to cram five years' experience into 30 seconds of prompt engineering," says Simon Haighton-Williams, CEO at The Adaptavist Group, a UK-based software services group.
"It doesn't wholly replace [experience], but it's a good leg up in the same way that having a good encyclopaedia or a calculator lets you do things that you couldn't have done without those tools."
What would he say to companies that discover they have shadow AI use?
"Welcome to the club. I think probably everybody does. Be patient and understand what people are using and why, and figure out how you can embrace it and manage it rather than demand it's shut off. You don't want to be left behind as the organisation that hasn't [adopted AI]."
Trimble provides software and hardware to manage data about the built environment. To help its employees use AI safely, the company created Trimble Assistant. It's an internal AI tool based on the same AI models that are used in ChatGPT.
Employees can consult Trimble Assistant for a wide range of applications, including product development, customer support and market research. For software developers, the company provides GitHub Copilot.
Karoliina Torttila is director of AI at Trimble. "I encourage everybody to go and explore all kinds of tools in their personal life, but recognise that their professional life is a different space and there are some safeguards and considerations there," she says.
The company encourages employees to explore new AI models and applications online.
"This brings us to a skill we're all forced to develop: We have to be able to understand what is sensitive data," she says.
"There are places where you would not put your medical information and you have to be able to make those type of judgement calls [for work data, too]."
Employees' experience using AI at home and for personal projects can shape company policy as AI tools evolve, she believes.
There needs to be a "constant dialogue about what tools serve us the best", she says.
Parents will be able to block their children from specific games and experiences on Roblox as part of new safety measures announced by the hugely popular gaming platform.
They will also be able to block or report their children's friends, and the platform will provide more information about which games young users are playing.
The measures will only apply to children who are under the age of 13 and have parental controls set up on their accounts.
The announcement comes after Roblox CEO Dave Baszucki told the BBC that parents should keep their children off the platform if they were "not comfortable" with it.
Roblox - the most popular site in the UK for gamers aged eight to 12 - has been dogged by claims that some children are being exposed to explicit or harmful content through its games.
However, in his BBC interview, Mr Baszucki stressed that the company was vigilant about protecting its users, with "tens of millions" of people having "amazing" experiences on Roblox.
Announcing the latest safety features, Roblox's Chief Safety Officer Matt Kaufman said: "These tools, features, and innovations reflect our mission to make Roblox the safest and most civil online platform in the world."
A spokesperson for the regulator, Ofcom, said the measures were "encouraging", but added "tech companies will have to do a lot more in the coming months to protect children online".
'Shoot down planes'
In preparation for the interview with Mr Baszucki, the BBC found a range of games with troubling titles that had been recommended to an 11-year-old on the platform.
They included games such as "Late Night Boys And Girls Club RP" and "Shoot down planes…because why not?"
Parents whose accounts are linked to children aged 12 and under can now block such titles if they are uncomfortable with them.
They will also be able to go further in managing who their children are friends with.
They can already view their child's friends list - now they can block or report people on that list, preventing them from exchanging direct messages.
Messaging between children had already been restricted in measures announced in November last year.
Additionally, parents will now be able to see the top games their child played on Roblox over the last week and how long they spent in each one.
What do parents think?
Sally, from the north of Scotland, told the BBC last month that her nine-year-old daughter was groomed on the platform in December last year. Despite reporting it to Roblox, she never received a response.
She welcomed the announcements as a "start", but said Roblox "needs to do better".
She added: "What's missing is proper authentication of users. How does the company know that users are who they say they are - how will perpetrators be traced when grooming keeps happening?"
Roblox highlighted to BBC News its community standards, which have a zero-tolerance policy for the exploitation of minors.
Amir from Leeds told the BBC last month that his 15-year-old son is "addicted" to Roblox, and can use the site for up to 14 hours a day.
He has welcomed the changes announced today for younger users, but wants the platform itself to do more and target the availability of inappropriate games for children.
Kathryn Foley's nine-year-old daughter Helene is a regular on Roblox. Kathryn ensures her daughter avoids games where other players could talk to her, and does not accept friend requests.
Ms Foley told BBC News: "I know I will absolutely be using the game blocking feature, and to see how long my daughter spends on particular games - and also if she is playing games I didn't know she played."
Kirsty Solman has spoken with the BBC about how Roblox has helped her 13-year-old son Kyle - who has ADHD, autism and severe anxiety - with social interactions.
She said: "These all sound fantastic especially the experience blocking, as a concern is the type of games our children are accessing."
Roblox has also announced the expansion of its voice safety AI (artificial intelligence) model, to help moderate voice chats between players, with the feature now available in seven additional languages.
Meanwhile, Roblox has outlined changes to its advertising model, with players to be paid in-game currency to watch adverts on the platform.