











When the potential porn ban came out, did you make a Fansly? Tip $5 if you are interested in the topic of Artificial Intelligence and want me to write more about it.

Thought I would post some stuff about artificial intelligence because I have a subscriber who is a philosophy graduate student and studies this stuff for his research. If you're a graduate student and study something cool, share it with me. Even if you don't study it formally, share your hobbies and interests with me. I would love to hear about it.

With morality come the conditions under which an individual can act morally. One of those conditions is autonomy. Can a machine be autonomous in the way we expect natural persons to be, or is there a distinct way in which it exercises this kind of autonomy that could make it culpable for its actions? An AI is usually viewed as a tool programmed with a set of functions, able to act only within the bounds of the instructions programmed into it. Programming an AI that is autonomous would be beneficial because it could make decisions without constant input from a programmer. One reason is that in many circumstances the physical and mental abilities of human beings are substantially limited compared to those of a programmed machine. For example, a somewhat autonomous agent wouldn't stall when a decision has to be made, which matters in situations where time is limited, like an autopilot that has to decide how to crash a plane.

An AI can be taught how to be a moral agent in the same way a child is taught ethical principles that shape their behavior. This all boils down to knowledge acquisition: what kinds of functions should programmers bear in mind when developing the machine's processes for learning to make decisions in response to its environment? Will machines be designed to learn what counts as a moral or immoral decision? If so, will they learn it on their own or from a human teacher? In machine learning there is a distinction between supervised and unsupervised learning. In this case, supervised learning would mean a human "teacher" labeling an AI's decisions as moral or immoral, and the AI learning what is moral from the feedback it gets from that teacher. With unsupervised learning, by contrast, the AI would have to figure out for itself what a moral decision is (there's a toy sketch of this distinction at the end of the post).

How does that translate into teaching a machine right from wrong? Computer scientists use neural networks to build systems loosely modeled on the human brain. A neural network starts with millions of randomly initialized parameters that are gradually refined as it learns. Those parameters are combined in such a way that you cannot explain in English why the network decides to do what it does; all you can really understand is the mechanism by which it was trained. A human could not hold the algorithm the network encodes in their head, because it is simply too large. For example, suppose you had a neural network that classifies English letters from an image. The only inputs it gets are "this pixel is on" or "this pixel is off" for every pixel in the image, and the network is essentially a giant function built out of those on/off parameters (a tiny sketch of this is at the end too). Ideally, morality would have tunable success criteria that we could use to train such an algorithm through a combination of supervised and unsupervised learning techniques.

Feel free to share your thoughts.
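
For the more technically curious among you, here is a toy sketch of the supervised vs. unsupervised distinction mentioned above. It is not a real ethics system: the "situation features," the labels, and the whole setup are invented purely to show the mechanics, and it assumes NumPy and scikit-learn are installed.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Each row is a hypothetical situation encoded as numeric features
# (say: harm caused, consent given, benefit to others) -- purely invented.
situations = np.array([
    [0.9, 0.0, 0.1],   # high harm, no consent, little benefit
    [0.1, 1.0, 0.8],   # low harm, consent, high benefit
    [0.8, 0.2, 0.3],
    [0.0, 1.0, 0.9],
])

# Supervised: a human "teacher" labels each situation (1 = moral, 0 = immoral),
# and the model learns to predict that label for new situations.
teacher_labels = np.array([0, 1, 0, 1])
supervised_model = LogisticRegression().fit(situations, teacher_labels)
print("supervised prediction:", supervised_model.predict([[0.2, 0.9, 0.7]]))

# Unsupervised: no labels at all. The algorithm only groups similar
# situations together; whether a cluster means "moral" or "immoral"
# is still an interpretation we have to supply ourselves.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(situations)
print("unsupervised clusters:", clusters)

The point of the contrast: in the supervised case the notion of "moral" comes entirely from the teacher's labels, while in the unsupervised case the machine only finds structure, not values.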
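
And here is an equally toy sketch of the "letters from pixels" example. The 5x5 images and random labels are stand-ins, not real data; it just shows that the network's only inputs are on/off pixels, and that even a tiny network has more learned parameters than anyone could hold in their head.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Fake dataset: 200 "images" of 5x5 pixels, each flattened into 25 on/off
# values -- exactly the "this pixel is on / this pixel is off" inputs
# described above. Labels 0/1 stand in for two letters, say 'A' vs 'B'.
X = rng.integers(0, 2, size=(200, 25))
y = rng.integers(0, 2, size=200)   # random stand-in labels

# A small neural network: even this toy has well over a thousand tunable
# weights, which is why nobody can read the learned rule off in plain English.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
net.fit(X, y)
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("number of learned parameters:", n_params)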