AI 6 months later cover

AI, 6 months later

At the start of 2023, the AI race broke into pop culture, with powerful language, audio and visual models released to the public in a form that makes them accessible to everyday people.

With services such as ChatGPT, you don't need to train and fine-tune your own AI: just make an account and you have the full power at your fingertips to generate whole articles, summarise long documents, problem-solve* (with a massive asterisk) and get information in a digestible form that traditional search engines like Google simply can't match.

With this gold rush, much like the NFT gold rush before it, we've seen an uptick in "<insert-generic-software> with AI" apps being released, but in our experience there's a catch to the power of these AIs and to how effective they really are.

So we're going to do a quick retrospective on the reality of these AIs.

They're bad at problem solving

Last week we tweeted an example of ChatGPT attempting a rather simple logic test with the instructions:

"Write a list of numbers from 1 to 30, where every character '3' is replaced with '4' and every character '4' is replaced with '3', including numbers such as 13 and 30"

It failed spectacularly.

Screenshot of ChatGPT failing a logic test
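
For comparison, the transform it was asked for is only a few lines of code; here's a rough sketch in TypeScript (our own illustration, not something ChatGPT produced):

// Swap every '3' and '4' digit in the numbers 1 to 30, the exact
// transform from the prompt above. Plain TypeScript, no dependencies.
const swapped = Array.from({ length: 30 }, (_, i) => String(i + 1)).map((n) =>
  n.replace(/[34]/g, (digit) => (digit === "3" ? "4" : "3"))
);

console.log(swapped.join(", "));
// "1, 2, 4, 3, 5, ..., 12, 14, 13, 15, ..., 29, 40"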

In programming, for one, using these AIs on the more unusual or complex problems we've come across works better as a form of reflection than as a tool that can reliably solve your problems. We've seen plenty of generated code that technically works but is of such poor quality you shouldn't use it in production, with basic issues such as deprecated methods or packages and illogical solutions; sometimes it just outright doesn't work.

This means it's up to you, the engineer, to understand the output and correct it. These issues arise because these services don't actually understand the meaning behind what they've learnt from; they're not close to AGI (Artificial General Intelligence), which promises a far more human-like mind that can learn one concept and apply it to others. In short, they're somewhat bad at extrapolating beyond their dataset.

If you were worried that AIs would put programmers and designers out of a job, that's absolutely not the case yet; and, perhaps optimistically, we believe that even when these models become dramatically better, humans will still be required for many roles and will simply become more efficient.

Can I use it for web design?

Haha, no.

Screenshot of Microsoft Designer's AI-generated landing page design, it's really bad

Not yet, at least. Microsoft's Designer and the various other web design AIs we've tried with Figma have produced really poor output, nothing worthwhile beyond PowerPoint slides.

Midjourney does put out some decent-looking designs, but they're just low-resolution JPEGs, which makes it less useful if you want a collaborator working alongside you in Figma. You can definitely use it for inspiration if nothing else, though you'll notice it doesn't even know it's supposed to output legible text; it learns entirely visually, with no awareness of the content it's supposed to display.

Screenshot of Midjourney output

Fails at serving non-English-speaking users equally well

Let's not get off on the wrong foot here: these models are pretty language-aware, and dedicated translation models are the better tools for translating phrases or long-form text. However, if you use ChatGPT as a non-English speaker, you won't have the same experience as using it in English.

It simply assumes most users are English speakers, which likely stems from the sheer amount of material available in English first and foremost.

We're fairly relaxed about this criticism because we'll likely see huge improvements here, as companies have an incentive to make their products usable by as many people worldwide as possible. Still, it's a concern that some people can't use these tools to assist their learning as much as others can.

What is it good for then?

In the context of programming, it's really good at quickly summarising the documentation of a library that might otherwise be confusing, complete with personalised code examples.

We use it a lot for rubber-ducking code problems and for generating generic or repetitive code. It's pretty good at the 90% that most people have already written before, meaning you can focus your time on the remaining 10% that adds the most value to your product. A good example is generating type interfaces in TypeScript, as in the sketch below.
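
As a rough illustration, this is the kind of boilerplate we'd hand off to it, type interfaces generated from a sample JSON response. The shape and field names below are made up for the example, not from a real project:

// Hypothetical example: type interfaces generated from a sample JSON payload,
// the sort of repetitive typing we'd rather not do by hand.
interface Author {
  id: number;
  name: string;
  avatarUrl?: string; // optional in the sample payload
}

interface Article {
  id: number;
  title: string;
  publishedAt: string; // ISO 8601 date string
  tags: string[];
  author: Author;
}

// The remaining "10%", the logic that actually adds value, is still yours to write.
function summarise(article: Article): string {
  return `${article.title} by ${article.author.name}`;
}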

Currently it's also really good at conceptualising art or designs. You'll likely need a few iterations, and perhaps other tools mixed in, but you can quickly generate a concept for an idea or a vision to get a better perspective on it, something that previously took designers and artists long, gruelling hours.

And of course, outside our sphere of technology, it's been used to generate music and mimic artists pretty successfully, as well as to create deepfakes.

What tools do you currently use?

Search engine - Phind.com has been pretty great for us so far; at the time of writing it has a generous free tier of GPT-4 tokens, and it provides search results and sources too

Code generation - We're still a bit undecided; we'll come back after more testing of Replit and Copilot, as the others we've tried haven't been so good

Web design - Currently we can't recommend anything; Midjourney is perhaps the best tool right now for generating concepts

Misc - For anything else we open up ChatGPT, since its API underpins many of the other tools anyway (a rough sketch of calling it directly follows below)

Aside from that we regularly use AI features within existing software such as Notion or chat applications.
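
If you're curious what those wrappers do under the hood, here's a minimal sketch of calling the chat completions endpoint directly, assuming Node 18+ (for the global fetch) and an OPENAI_API_KEY environment variable. Treat it as an illustration rather than production code:

// Minimal sketch: one call to the OpenAI chat completions API,
// the same endpoint many "with AI" products are built on.
async function ask(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

ask("Summarise this changelog in three bullet points: ...").then(console.log);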

It's here to stay

Needless to say, it's here to stay. We didn't cover all the cool things it can do simply because they're already all over internet meme culture, though if you need another hit, here is Anthony Fu making AI-art QR codes.

It won't steal your job just yet, since somebody still needs to know how to prompt these AIs and verify their output; there's still a human side to it. It can, however, remove a lot of repetitive work previously done by humans, so at the lower end it does pose risks to employment, risks better addressed through stronger legislation than through technology.

The other remaining criticism, through no fault of the tech itself, is the number of generic apps with AI slapped on them, as we've seen before with blockchain, NFTs and other tech trends. The hype has also slowed down a little as people reach the limits of what these tools can do.

AI development is only going up from here, so collectively we need to figure out how to make the best use of it. These tools are fantastic aids for discovery and learning, and with search engines SEO'd to the point of being a nuisance when you just want information quickly, they're an important asset in an engineer's toolset.