GranPC a day ago

> We will offer a free (rate-limited) service that everyone can use, once we have sorted out the legal issues regarding the possibility of mixing code snippets originating from open-source projects with different licenses (e.g., GPL-licensed tests will simply refuse to pass BSD-licensed code snippets).

Well, looks like they sorted 'em out!

  • siva7 a day ago

    > We are pleased to announce the Real TDD, our latest innovation in the Program Synthesis field, where you write only the tests and have the computer write the code for you!

    Boy, if only they knew that 10 years later you wouldn't even need to write the tests anymore. It must feel like a sci-fi timeline if you warped one of these blog authors into our future.

    • bathtub365 a day ago

      Now we can simply sit back and assume the computer is doing a good job while we fold laundry

      • protonbob a day ago

        Man. I wish the computer did the laundry and let me do the coding. What happened here?

        • laxd a day ago

          It's called washing machines. They come with a computer built-in.

          • ericghildyal a day ago

            Mine still makes me figure out what's in the machine and fold it after, am I due for an upgrade?

NitpickLawyer a day ago

To put things into perspective: DeepMind was founded in 2010, bought by goog in 2014, the year of this "prank". 11 years later and ... here we are.

Also, a look at how our expectations/goalposts have moved. In 2010, one of the first "presentations" Hassabis gave at DeepMind had a few slides on AGI (shown in the movie/documentary "The Thinking Game"):

Quote from Shane Legg: "Our mission was to build an AGI - an artificial general intelligence, and so that means that we need a system which is general - it doesn't learn to do one specific thing. That's really key part of human intelligence, learn to do many many things".

Quote from Hassabis: "So, what is our mission? We summarise it as <Build the world's first general learning machine>. So we always stress the word general and learning here the key things."

And the key slide (the one that, I think, cements the difference between what AGI stood for then vs. now):

AI - one task vs. AGI - many tasks, at human-level intelligence.

----

I'm pretty sure that if we go by that definition, we're already there. I wish I had a magic time-travel machine, to see Legg and Hassabis in front of gemini2.5/o3/whatever top model today, trained on "next token prediction" and performing on so many different levels: gold at the IMO, gold at the IOI, playing chess, writing code, debugging code, "solving" NLP, etc. I'm curious if they'd think the same.

But the slow ramp-up, seeing small models get bigger, getting to play with GPT-2, then GPT-3, then ChatGPT, has I think changed our expectations and our views on what truly counts as AGI. And there's a bit of that famous quote in there: "AI is everything that hasn't been done before"...

  • kartoffelsaft a day ago

    I don't think what we have now fits that definition. LLMs are still narrowly good at language generation, and the "many" things they're good at are things that have canonical textual/linguistic representations (code, chess notation, etc.). Much of the existing AI that appears more general is really several specific models hooked together; for example, taking the output of an LLM and piping it into a TTS model. Since these pieces are easily replaceable, I struggle to call it one AI that can do many tasks.

    Consider the human equivalent of that LLM->TTS example: when you're talking, you naturally emphasize certain words, and part of that is knowing not just what you want to say but why you want to say it. If you had a machine-learning model where the speech module had insight into why the language model picked the words it did, and also vision so it knows who it's talking to and can pick the right tone, and the motor system had access to all of that too for gesturing, etc., then at that point you'd have a single AI generally solving a large variety of tasks. We have a little bit of that in some domains, but as it stands most of what we have is lots of specific models that we've got talking to each other, falling a little short of human level wherever the interface between them is incomplete.

  • bitwize a day ago

    Back in the 90s, Pixar put out a joke SIGGRAPH paper about rendering food with lots of food-related puns and so forth. In 2007 they released Ratatouille, which required them to actually develop new rendering techniques, especially around subsurface scattering, to make food look realistic and delicious.

Kuraj a day ago

If I hadn't read past the concept and the date, I would've accepted it as real without batting an eye

  • hinkley a day ago

    It probably could work, though. Or at least to the extent that declarative languages ever really work for real-world problems.

    But if you perfected it, then it would also be the thing that actually kills software development. Because if I told you your whole job is now writing tests, you'd find another job.

    • nemomarx a day ago

      Isn't this project management, kinda? Writing requirements and acceptance criteria and broad designs to hand off to a dev

      • hinkley a day ago

        Not any manager I've ever worked with. Including the good ones (but especially not the bad ones).

        Their job is to make sure that the business people and the devs sort it out without coming to blows. When they do work like this it's generally as a template to be copied, not the entire project.

    • lazyasciiart a day ago

      Not that long ago that was a literal job for some software engineers. Whole departments of them.

      • hinkley a day ago

        I love a quality QA engineer.

        But the only people who write code as bad as QA folks do are the DevOps people.

        The paradox of SDETs is: QA makes less than dev, no matter what flavor. If you're good at poking holes in developer logic, and you can code yourself, there's a 40-60% raise for you if you can switch into security consulting, which takes the same foundational skills and some reading.

        So there are at least two brain drains for "good coder in test", and we aren't even the most lucrative one.

jasoneckert a day ago

The best part of April Fools' jokes is that they capture the spirit of the time.

I remember the Thinkgeek PC EZ-Bake Oven that fit into a 5.25" bay in your PC - fitting for 2004! https://hoaxes.org/af_database/permalink/pc_ez-bake_oven

And my favourite: Microsoft's Alpine Legend for Xbox 360 in 2009 that caused a stir because so many people actually wanted that game to be real. https://www.youtube.com/watch?v=ZUBQknWUEYU

  • noiv a day ago

    Well, in 1957, the BBC's Panorama aired a three-minute segment on how Swiss farmers harvest spaghetti from trees.

seanmcdirmid a day ago

We aren't really far off from that, perhaps.

  • hnuser123456 a day ago

    We're beyond that, now we can vibecode both the tests and the implementation.

    • benreesman a day ago

      It's always been possible to vibe code, it's just really fast now!

      I've done slipshod work full of bugs and security problems and thrown it over the fence hoping it would stand up long enough to be someone else's problem, like 20 years ago!

      • overfeed 17 hours ago

        There used to be near-universal derision for people who'd copy-paste a patchwork of code from StackOverflow, but now it's almost fashionable to let an AI do it for you.

    • seanmcdirmid a day ago

      I've been thinking about this a lot, and we don't really do tests right. But if we did, yeah, maybe we could just vibe code an entire system (the AI would have to run the tests and fix things when they didn't pass).
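
      A rough sketch, in Python, of the kind of loop meant here; "generate_patch" and "apply_patch" are hypothetical hooks into whatever model integration you use, not a real API:

          import subprocess

          def run_tests() -> tuple[bool, str]:
              # Run the project's test suite; its output becomes feedback for the model.
              result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
              return result.returncode == 0, result.stdout + result.stderr

          def vibe_until_green(generate_patch, apply_patch, max_rounds: int = 10) -> bool:
              # Keep asking the (hypothetical) model for fixes until the suite passes
              # or we give up after max_rounds attempts.
              for _ in range(max_rounds):
                  passed, report = run_tests()
                  if passed:
                      return True
                  apply_patch(generate_patch(report))
              return False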

jessekv a day ago

> We once saw a comment in the generated code that said "I need some coffee".

deterministic 16 hours ago

This nowadays sounds more like a product announcement than a joke.

Writing tests (if done correctly) is basically defining the behaviour of a black-box API with running code. So it is easy to imagine an AI generating the black box from the tests/behaviour spec.
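
To make that concrete, here is a minimal sketch of tests acting as a black-box behaviour spec, in Python. The "blackbox" module and its "slugify" function are purely hypothetical stand-ins for whatever implementation the AI would be asked to synthesize:

    # Hypothetical behaviour spec: every requirement is stated purely in terms
    # of inputs and observable outputs of the black box, never its internals.
    from blackbox import slugify  # imagined module; the AI generates it to satisfy these tests

    def test_lowercases_and_joins_words():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("C'est la vie!") == "cest-la-vie"

    def test_collapses_repeated_separators():
        assert slugify("a  --  b") == "a-b"

    def test_empty_input_stays_empty():
        assert slugify("") == ""

Run under pytest, a green suite would be the only definition of "done"; the generated code is free to look however it likes inside.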

outside1234 a day ago

They knew the future in 2014 and somehow wasted 10 years

  • overfeed 17 hours ago

    They had to invent transformers first.