The Sydney Informatics Hub at the University of Sydney has just changed the game for what it means for researchers to play with generative AI.
You know, that wild tech that writes code like it's painting the Mona Lisa while you're grinding away at 3 a.m., faster than a double-shot espresso on a Monday morning.
And according to the university's own release, they're not just using GenAI: they're teaching an entire research community to wield it with expertise and ethics.
I spoke with a couple of researchers who have attended the workshops, and the buzz is electric. Imagine it: 1,200 people across Australia working through ethical model training and bias mitigation, with some fun mixed in.
The energy doesn't match a startup garage every day, but on some days it does. And if you think this is all just about writing papers faster, think again: some researchers are dipping into health data, predictive genomics, and even digital accessibility.
It's reminiscent of what's going on in Europe, where ETH Zurich, for example, has been delving into AI ethics and policy frameworks. Different continent, same creative storm.
What's interesting is that Sydney's initiative threads ethical guardrails right through the training, like the safety barriers at an IndyCar race. Fast, but safe.
And on the subject of speed, the Sydney Research Cloud, launched in 2020, boasts a bespoke GPU cluster for AI workloads. That's massive.
That places them alongside global players such as the Pawsey Supercomputing Research Centre, which already spends a small fortune of Australian taxpayers' money housing some of our mightiest AI hardware. Think about the experiments researchers could pull off with that kind of horsepower.
And, of course, no AI narrative would be worth its salt without a bit of drama. So what happens when creativity encounters compliance?
From data sovereignty laws to human consent frameworks, researchers are walking a tightrope.
Dr Darya Vanichkina, who leads Sydney's Data Science Group, insists on balance: responsibility without paralysis.
And honestly, that feels refreshing. We need more people like her, who treat ethics as a design feature rather than an obstacle course.
This dovetails globally with projects like Stanford’s Center for Research on Foundation Models, which also aims to build safe and transparent AI systems.
I'm not sure everyone will have the resources to match this, but Sydney's approach feels more practical, possibly because it's linked with the health and environmental sciences. It's AI in the muck: real, messy, human.
And here’s what no press release will say aloud: This movement is not just about technology. It’s about individuals coming to terms with their new digital selves.
At a dinner party, a PhD student said to me, "AI didn't replace what I do – it gave me back my curiosity," and it made me grin.
That’s the story. Machines thinking like humans isn’t interesting; machines making us act more like the humans we are – that’s exciting!
The real question is whether they can keep this momentum going as the field evolves at lightning speed.
If the answer is even just partly yes, then what the Sydney Informatics Hub is doing isn’t only teaching AI, it’s teaching the future.