JULY - 2023

…coherent way, giving even the likes of Google Assistant, Alexa, and Siri a run for their money. It can create reams of text that are clear, plausible, and can be expanded upon in a way that further improves understanding of a subject. This is also its downfall. Even with safety guards built in, the information output can be manipulated in ways that are less desirable, or in some cases wreak havoc, simply by how the inquiry is presented.

Take, for instance, the construction industry, if a chatbot were allowed to generate a legal contract or specifications for a building. Who checks the validity of the information, and to what degree of accuracy? The chatbot can produce very different information depending on who asks the questions and how they are asked. Is it a knowledgeable licensed professional, or an inexperienced layperson who asks just enough questions of the AI to generate a plausible response? Then imagine that response being presented to other laypeople, who may not have the expertise to recognize that the information is flawed. What happens when information generated by a layperson, say an intern, is handed off to a professional who doesn't really review it thoroughly, and then handed off to a client who is also a novice? This is not to say that human-written information is infallible, just that the back check may make it more difficult to determine the origins of authorship. This is not too dissimilar from when CAD programs allowed designers to simply copy and paste a detail without always checking it for accuracy. Applications like ChatGPT, and even the efforts by OpenAI and other chatbot developers, are an indication that there is a real problem here.
We have seen this in other AI-based applications beyond written text or the spoken word, in what are known as "deep-fake" videos, where AI can take a few images of a person and produce a fairly realistic fake video of that real person. Great for bringing back long-deceased actors for films such as Star Wars, but really troubling when it comes to geopolitical leaders being spoofed. In general, these systems can be extremely powerful and useful, but they need to be approached with a real sense of caution and skepticism.