In November of 2022, artificial intelligence research company OpenAI released ChatGPT, an AI-powered chatbot. Over the past year, ChatGPT has continued to improve and grow in popularity, and its release reignited the already contentious debate surrounding the ethics of artificial intelligence. In October of 2023, OpenAI enabled image generation within ChatGPT, meaning the software can now produce images from a simple written prompt. And now, just earlier this month, OpenAI announced Sora, which can generate realistic videos from written prompts.
Some previously generated videos can be seen on OpenAI’s website and in this YouTube video. These videos have sparked serious debate about the implications and potential impact of the technology. Just in January of this year, AI-generated sexually explicit images of famous musician Taylor Swift were posted to various social media platforms. The incident caused widespread outrage and reignited the conversation about what we should allow AI to be used for. Now, less than a month later, we have been given Sora, which will be able to create entirely fake videos on top of the images and text that ChatGPT could already produce. How much further should these AI programs be allowed to go?
Additionally, many are wondering what this development will mean for businesses, especially creative jobs and industries like acting, painting, photography, and content creation. Will Sora end up taking jobs from videographers? Will organizations like advertising agencies or marketing firms fire their employees and use AI software instead of real people? Or will companies turn to AI instead of hiring a marketing or advertising firm in the first place? The worry only grows if using AI proves to be less expensive than hiring real people, as we have already seen recently. Remember, though, that AI is derivative: it can only reproduce things it has ‘seen’ or ‘read’ before, and it cannot ‘think outside the box’ to come up with something completely fresh and new. People will always have their place, even though that place will no doubt be changed by the use of AI.
Although researchers have previously found that the human brain can usually detect AI-generated images, these programs have improved immensely in a very short period of time. Some of the example videos OpenAI released to show off Sora are almost indistinguishable from real footage, but many others are still easy to spot. As of now, AI tends to have trouble with eyes and hands, along with some issues with physics. The hand and eye problems are easy to see in the OpenAI example video in which an elderly woman blows out her birthday candles. The physics issues show up in many of the other videos, including one that displays a floating chair and one in which wolf pups seem to appear out of nowhere.
A few examples of videos that may be hard to identify as AI-generated are OpenAI’s clips showing camera footage of an art gallery and golden retriever puppies playing in the snow. All of these videos can be found on OpenAI’s website. If you look closely, you may be able to spot some telltale signs, but at first glance they look very real. It will be interesting to see what comes of this technology, such as what limitations will be put in place and what uses will be allowed. Although some are excited for Sora’s release to the public, others think it shouldn’t be released at all. What do you think?
Read our previous post here: Encouraging Incident Reporting For Employees