What are some ethical implications of Large Language Models?
*Ethical considerations in LLMs*
In this article, we're going to address the ethical challenges surrounding the use of large language models, including potential biases and privacy issues.
Bias in AI Systems:
AI systems will inevitably deliver biased results; there's no real way around it. Take search engine technology, for example. A search engine is not neutral: it processes a large amount of data and prioritizes results based on clicks and user preferences. In doing so, it can essentially become an echo chamber that upholds the biases of the real world, because it reflects the clicks and preferences of the majority of the users on that search engine. This is how prejudices and stereotypes get furthered online by the systems we have in place. So when we talk about bias in AI, it can be something as simple as a gender bias.
Gender bias should be avoided, but it can still show up in our data. For example, go to Google Search and look up the greatest scientists of all time. You'll see prominent male personalities; now count how many women show up in that list. It's a little strange, right? Depending on the keywords we search, certain filters, images, and links will surface that reinforce those biases. So in this case I'm talking about a gender bias that should be avoided, or at the very least minimized, in the development of these algorithms. And that usually starts with the training set, the very large datasets that are used for learning.
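To make the echo-chamber point concrete, here is a toy simulation, not any real search engine's algorithm, of a ranker that orders two equally relevant results purely by past clicks. Everything here (the 90% top-click rate, the one-click head start) is an invented assumption, but it shows how a tiny initial imbalance can snowball once clicks feed back into ranking:

```python
import random

random.seed(0)

# Two equally relevant results, with a tiny historical click imbalance.
clicks = {"result_a": 11, "result_b": 10}

for _ in range(1000):
    # Rank by click count; assume most users click whatever is ranked first.
    top = max(clicks, key=clicks.get)
    bottom = min(clicks, key=clicks.get)
    chosen = top if random.random() < 0.9 else bottom
    clicks[chosen] += 1

# The leader keeps getting shown first, so it keeps getting clicked:
# the one-click gap grows into a gap of hundreds.
print(clicks)
```

Whichever result happens to lead early collects most of the future clicks, which is exactly the feedback loop that lets a small real-world skew harden into a ranking bias.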
AI in Art and Copyright Concerns:
We also need to make sure those datasets are balanced in some way, especially when we're trying to use AI for decision-making. That's primarily how we should be dealing with bias. But let's move on and look at other interesting edge cases of AI, and one of them is AI and art. AI can now influence human creativity, and the use of AI in culture raises interesting ethical considerations as we move into this AI-powered world. In 2016, for example, a new Rembrandt painting was designed by a computer and created by a 3D printer, 351 years after the painter's death. So we have to start asking: is that OK? Is this a copyright issue? What does it even mean?
To answer that, we first have to ask: what defines a 'creator'? In this case it's a machine learning model, a deep learning model, an AI model. But is that now technically considered a creator? What does that actually look like when it comes to AI and art? Those are just some of the ethical considerations that could change as legislation is built around these AI systems.
If we look further at developing frameworks to differentiate things like piracy and plagiarism, we need to understand what those things are, what's considered original, and what's considered creative. And we have to recognize the value of human creative work.
It's also important that we prevent exploitation, whether intentional or not. This is becoming an interesting field: the ethics of AI, specifically around art and copyrighted material. How does AI actually fit into the framework we currently have? We need to avoid deliberate exploitation of the work and creativity of human beings, and make sure there is adequate and proper recognition for artists. The integrity of the cultural value chain needs to be preserved across the use of AI.
AI in the Judicial System:
Another interesting edge case is judicial systems. The use of AI in judicial systems around the world is increasing, and it's creating more ethical questions to explore. Could AI affect legal professionals and legislators? AI could presumably evaluate cases and apply justice in a better, faster, and more efficient way than a judge. But should we rely on that? Imagine a machine deciding whether or not you're guilty of a parking violation, or whether you've done something wrong in some other legal trouble you've run into.
This raises all sorts of interesting questions: is this allowed, should it be OK, and how often, and in what role, should we use AI in the judicial system? Can it be used as an aid to help practitioners, or should it be banned outright? Even if it increases the efficiency and accuracy of lawyers in both counseling and litigation, what else could happen? We don't really know.
Looking further at the ethical concerns with AI and the justice system, one issue with these AI tools is a lack of transparency: AI decisions are not always intelligible to humans. How many times have we given a prompt to a large language model like ChatGPT and had it spit out some information, with no idea how it arrived at that answer? And that's just a simple prompt and response to a question you had. But imagine something very serious, like whether you're going to jail, whether you have to pay a fine, or whether you're getting sued. All of that would lack transparency; we'd have no idea what considerations the system is making. Another interesting concern is the surveillance technologies and practices used for data gathering and the privacy of court users. Is that allowed? Does it violate privacy? Everywhere you go, there are cameras and technology tracking you.
There's also something known as the lack of neutrality. This is really important, because AI is not neutral: remember, these are very complicated programs that are just running mathematical operations to reduce a loss function. Because of this lack of neutrality, the decisions they make are susceptible to inaccuracies, discriminatory outcomes, and inserted bias. And then there are new concerns around fairness and risks to human rights and other fundamental values. Is this something we want to allow by using AI in the judicial system?
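The point that a model just "reduces a loss function" can be shown with a minimal sketch. The data below is entirely hypothetical: 90% of the historical outcomes are 0, and a single-parameter model trained by gradient descent on squared error simply converges to that skewed average. The math is neutral; the data is not:

```python
# Hypothetical historical outcomes, skewed 90/10 toward outcome 0.
data = [0] * 90 + [1] * 10

w = 0.5    # a one-parameter "model": the score it predicts for everyone
lr = 0.01  # learning rate

for _ in range(2000):
    # gradient of the mean squared error (w - y)^2 over the data
    grad = sum(2 * (w - y) for y in data) / len(data)
    w -= lr * grad

# w converges to the mean of the data, 0.1: the model has faithfully
# learned the skew in its training set, nothing more and nothing less.
print(round(w, 2))
```

Nothing in the optimization asks whether the 90/10 skew is fair; if the historical data encodes a bias, minimizing the loss reproduces it.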
These are complex legislative questions that need to be answered. And among the human rights concerns, the main ones are privacy and bias: is the system biased in any way, is it discriminatory in any way? These things could affect how such a model performs.
Autonomous Vehicles and Moral Dilemmas:
Another important topic I want to discuss is self-driving vehicles. The Holy Grail of AI here, I guess, is the autonomous car: cars that drive themselves from one location to another. And when we talk about an autonomous car, all we really mean is a vehicle that's capable of sensing its environment and moving with little or no human involvement.
An autonomous car must undergo a considerable amount of training in order to understand the data it's collecting, and there are moral choices that have to be considered. Imagine an autonomous car with broken brakes going at full speed towards a grandmother and a child. By deviating a little, only one can be saved. Which one does it save? That's the kind of moral question you have to grapple with, because now you've got an AI potentially controlling what could be considered a weapon: the vehicle itself. So what happens then? These moral decisions are made by everyone daily. When a driver chooses to slam on the brakes to avoid hitting a jaywalker, they are making a moral decision to shift risk from the pedestrian to the people in the car. We do that instantaneously. But in the autonomous car situation, who's doing the choosing?
Who's making the call of whether to hit the grandmother or the child? If there's no human driver, the decision falls to the car's algorithm. And this goes beyond AI into an almost philosophical question: how do we even choose what decision to make, and when? These are questions that have to be answered before we incorporate AI into our everyday lives. Before we have an autonomous car run by an artificial intelligence, it's important that we understand the ethical considerations of having one.
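One way to see why "the algorithm decides" is so uncomfortable is to sketch what such a decision rule would have to look like. The sketch below is purely illustrative, not how any real vehicle works; the harm weights, the option names, and even the tie-breaking are choices a programmer had to hard-code, which is exactly the point:

```python
# Entirely hypothetical harm weights: someone had to pick these numbers.
HARM_COSTS = {"grandmother": 1.0, "child": 1.0, "passengers": 1.0}

def choose_action(options):
    """Pick the option whose hard-coded total harm cost is lowest."""
    return min(options, key=lambda opt: sum(HARM_COSTS[p] for p in opt["harmed"]))

options = [
    {"action": "stay_course", "harmed": ["grandmother", "child"]},
    {"action": "swerve_left", "harmed": ["grandmother"]},
    {"action": "swerve_right", "harmed": ["child"]},
]

best = choose_action(options)
# With equal weights the two swerves tie, and min() silently breaks the
# tie by list order: an arbitrary implementation detail becomes a moral verdict.
print(best["action"])
```

Every part of this, the weights, the cost function, even the order of the list, encodes a moral stance, whether or not anyone intended it to.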
Final words:
So, these are the AI ethics concerns we discussed today, at least as of the time of this article. I imagine that as time goes on, more avenues and concerns will pop up as AI becomes more prevalent in our lives. But this is just a start.