Have you ever wanted to learn how to program but didn't want to do it alone and never had the chance to take a class? Then you are living in a unique time for learners of computing and programming (among many other disciplines).
The influx of AI chatbots means that you can take on the cyber landscape, or the inner workings of your computer, with one more helper than before. OpenAI's GPT-4-powered ChatGPT and the ChatGPT-equipped Bing AI have been doted on both by the directly impacted industries and by the wider public.
Part of what captivated users was ChatGPT's ability to write, simulate, correct, suggest improvements to, and explain your existing code, among many other functions, or even to handle more general programming needs.
But as of last week, Google's freshly minted AI chatbot, Google Bard, can also assist with your coding and programming queries. This was one of the most hotly requested features for Bard, and it is the biggest development for the AI chatbot since its launch.
In a company blog post, Google posits Bard as able to assist with a wide variety of programming tasks across all sorts of languages. It is specifically intended to be an AI-fuelled collaborator for programming and software development needs such as "code generation, debugging, and code explanation." These capabilities are available in more than 20 programming languages, including the most commonly used ones, such as Python, Java, and C++.
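To make those capabilities concrete, here is a hedged sketch of the sort of debugging exchange the blog post describes. The buggy Python function below is invented purely for illustration, along with the kind of corrected code and explanation a coding-capable chatbot would be expected to return.

```python
# A hypothetical buggy function you might paste into Bard with the
# prompt "Why does this sometimes crash, and how do I fix it?"
def average(numbers):
    return sum(numbers) / len(numbers)  # ZeroDivisionError on an empty list

# The kind of fix and explanation such a chatbot is expected to
# produce: guard the empty-list edge case before dividing.
def average_fixed(numbers):
    if not numbers:
        return 0.0  # or raise ValueError, depending on your needs
    return sum(numbers) / len(numbers)

print(average_fixed([2, 4, 6]))  # 4.0
print(average_fixed([]))         # 0.0 instead of a crash
```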
Helpful, but in need of guidance
Now, much as Bing and ChatGPT display disclaimers, Bard greets you at its interface with a reminder that "Bard is an experiment" and "will not always get it right," and, as if with a wink, suggests that it is worth checking its output by "Googling it."
This is because, like ChatGPT, it is a large language model (LLM). LLMs are statistical models trained on vast sets of data, chiefly text and code. What neither of these LLMs has, however, is any kind of core knowledge index against which to verify what it produces, so it is worth double-checking its output, both that it makes sense for the purpose you are using it for and that it is actually factual according to some authority.
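As a minimal, invented illustration of why that double-checking matters: an LLM can produce code that reads plausibly and runs without error yet is subtly wrong, and a quick test against facts you already know will catch it.

```python
# Invented example: a plausible-looking leap-year check of the kind
# an LLM might generate. It runs fine but ignores the century rules.
def is_leap_year_wrong(year):
    return year % 4 == 0  # misses the 100/400 exceptions

# Verified version, per the actual Gregorian calendar rules.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Spot-checking against known facts exposes the error:
print(is_leap_year_wrong(1900), is_leap_year(1900))  # True False (1900 was not a leap year)
print(is_leap_year_wrong(2000), is_leap_year(2000))  # True True
```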
We are in an interesting era for many reasons. Beyond the arrival of comprehensive and coherent assistant-like tools, ethical considerations across all sorts of disciplines are materializing in a far more visceral manner than ever before.
Schools and academic institutions have already voiced concerns and raised the need for detection software for AI-composed work. The Verge reported in January that an AI conference, the International Conference on Machine Learning (ICML), banned academic papers drafted by ChatGPT and similar AI language tools altogether, though it did say the tools could be used to help "edit" and "polish" authors' work.
James Vincent, the author of the Verge article, asks an interesting question: how much of this should be allowed before you could call a paper "co-authored," or even "written," by one of these bots? The conference's website published an elaboration stating that it would operate "on the principle of being conservative with respect to guarding against potential issues of using LLMs."
I imagine this means that the amount and type of AI-generated material will be quite closely examined, and I would speculate that the conference will go on a case-by-case basis. So, there are plenty of grey areas left, and I would surmise that this will compel more defined and more widespread guidelines in the future, because that certainly needs to happen.