
#07 Explainable Artificial Intelligence by Saja Tawalbeh

Disclaimer: This text is a transcription made by Natural Language Processing.


Tusem: [00:00:00] This week we're going to talk about technology. And to be more specific, artificial intelligence. I have the privilege of hosting Saja Tawalbeh, a talented PhD student who is delving deep into the fascinating world of explainable artificial intelligence. I met Saja during an event, Women in Data, where we connected instantly, and we had, like, common interests, and I'm happy to call her my friend right now.

Saja's journey as a PhD student in the field of explainable AI is not just impressive, but also incredibly inspiring. I saw her curiosity, her dedication and passion. I think women like her deserve more exposure. Therefore I invited her onto the ThinkWorks podcast. She graduated from Jordan University of Science and Technology, with both her bachelor's and master's, where she also became a research assistant.

She got skilled in many fields of machine learning and deep learning, such as long short-term memory, gated recurrent units, XGBoost, [00:01:00] transformers, and traditional machine learning. She also published many papers in the field of natural language processing, which we'll also be talking about. But right now she's associated with the University of Antwerp as a PhD candidate with IDLab.

She gained skills in convolutional neural networks, capsule networks, and explainable AI, the topic of this week's episode. But there is more, actually. Did you know that, according to a UNESCO report, Arab countries are witnessing a higher percentage of women STEM graduates compared to universities in the US and Europe?

In countries like Jordan, Palestine, Saudi Arabia, and Egypt, women-led firms are also creating more opportunities for women in executive roles. With this conversation I have with Saja, I hope to offer a glimpse into her remarkable journey, her research in explainable AI, and her experiences as a PhD student.

I believe her story will not only resonate with [00:02:00] those in the tech field, but hopefully also encourage more women across Europe to consider careers in the IT domain. So, hello Saja, salaam, welcome to the ThinkWorks podcast.


Saja: Hello, salaam. Um, thank you for having me on the ThinkWorks podcast. It's a pleasure to be here with you today and discuss my journey and explainable AI, which has become one of my greatest passions.


Tusem: I mean, passion.

I think it's so important if someone has a passion; that means you can talk hours and hours about it without the conversation being awkward or something. I always like talking with people who have passions, and Saja is one of them, actually. Let me just first thank IDLab for having me here at their offices, The Beacon.

I have such a beautiful view just in front of me. I can see the Scheldt, I can see the Maas. I mean, I can see the, no, I can't see the, is that a cathedral over there? Yeah. Yeah. Oh my God. That's a [00:03:00] cathedral. Yeah. So I'm so thankful to them, but also thank you for being my first guest here at the ThinkWorks podcast.

So Saja, tell me, what does IDLab exactly do, and where is it situated, actually?


Saja: In Antwerp. IDLab is a joint research institute between IMEC and the University of Antwerp. IDLab performs fundamental and applied research on internet and data science. We are just more than 250 members, and some of us are located here in Antwerp and some in Ghent.


Tusem: Ghent is also beautiful, by the way. So you have a big team here at IDLab. So you are right now in Antwerp. I know, like, from the introduction, you came from Jordan University, and now you are in Antwerp. How come? Like, how are you here?


Saja: So, um, I came to Antwerp almost two years and eight months ago to pursue my PhD.

I [00:04:00] got accepted as a PhD candidate at IDLab. I couldn't believe it at that time. I was jumping everywhere in the house; my supervisors and I were so happy with that acceptance. So I was so happy back then. And to be honest, I really feel lucky for this opportunity. And I'm doing my research here at IDLab, more precisely in the field of explainable AI.

My research focuses on explaining networks that are interpretable by design.

Tusem: Interpretable by design. We're going to get into detail on that in a few minutes. But I also want to ask about your family, about your background. How is your village? How is your environment in Jordan?


Saja: Okay, so I come from Irbid.

Irbid is a very beautiful city located in the northwest of Jordan, about 80 kilometers north of the capital, [00:05:00] Amman. I hope that I mentioned the number correctly.

Tusem: Otherwise Jordanians will come here for you. Yeah, it's okay.


Saja: It's the second largest city in Jordan, and it holds significant cultural, economic and educational importance.

And one of the standout features in Irbid is Yarmouk University, and the university's presence adds a lively and dynamic atmosphere to the city itself. You can't believe it until you see it. For me, and for most people in Irbid, University Street is a really, really beautiful street.

Tusem: Does it feel like a metropolitan city?


Saja: I don't want to exaggerate, but for me, I feel that Yarmouk University Street is similar to Istiklal Street in

Tusem: Istanbul. Istiklal Caddesi, yeah. Um, yeah. Wow. Okay. Yeah. [00:06:00] It's also busy there, like, with a lot of people. I can't handle too many people at a place; even though it's very beautiful, I just need to get out of there, because I feel really overwhelmed sometimes.

Yeah. So Istiklal is not my taste, to be honest. But, uh, interesting. And I know that Jordan University is actually a high-ranking university. How did you actually get there?


Saja: I had a very good, uh, GPA in high school. So I applied for mathematics first and then computer science, and then I got, uh, the admission for computer science,

because they rejected me in mathematics, and I'm so happy for that rejection.

Tusem: She told me some time ago that she's not good in math, but I don't believe that. I think, because you are a researcher, you're criticizing yourself too much. I think you're really good and talented in math, by the way, because for artificial intelligence you also have to have, like, an [00:07:00] understanding of math, right?


Saja: Yes. Yes, that's really true. Okay. Can I continue a bit? Yeah, sure. Okay. So, um, okay. I'm not from the city itself. I came from a big village located at the border of Jordan called Saham al Kfarat. It's a very beautiful place surrounded by nature, mountains and, um, the Yarmouk River as well. And the Yarmouk River runs through Jordan, Syria and Palestine.

So when I'm on the roof of my house, I can see three countries: Lebanon, Syria, Palestine. So for me, it's a wonderful location. Wow.

Tusem: Three countries at once. Yes. So tell me about your home, actually, and your family.


Saja: Um, I come from a small family compared to other families in Jordan. I have three brothers and, of course, my parents.

We are really small and we are really connected to each other. [00:08:00] So when I came here to Antwerp, I faced a lot of difficulties, to be honest. I had never been separated from them, and that hurt me a little bit. I think where I live is a wonderful place, so I really love where I came from. And as a child, I was a really active child.

I spent most of my childhood hiking in the mountains, and I think this helped me to build the person who I am today. Uh, for example, to have a patient personality. I can say I never give up on something I really want, and it's a fact about me; I'm well known for this, um, this fact.

Tusem: I can really understand that, because of where you are right now in your PhD, and the fact that you are, like, here in a totally different country and you're still continuing.

I can see those characteristics in you. So you say that it's actually all coming from your childhood.


Saja: Yes, yes. And one more thing I want to add: [00:09:00] one of the most beautiful moments for me as a child, I still remember, was in 2005. My teacher at school gave me a paper, and she wrote at the end of the paper: study hard to be the cutest professor on the planet.

So this sentence has been sticking in my mind since then.

Tusem: You know, that's the thing with teachers. They immediately leave a mark on your, like, subconscious. It really affects your personality in the future. We've been dropping hints here and there about explainable AI, but I think it's now time to really get into detail about what your PhD research is about.

If you would, try to explain this, like, as easy as possible, because not everyone has a technical background.


Saja: So, as a PhD student here at IDLab, I'm focusing on explainable AI, and in general, in explainable AI, we are focusing on two research questions. The first is [00:10:00] interpretation, and in this case: what has the model actually learned? In this task, the model provides insight on aspects that it internally encoded.

For example, if we pass the model a group of images related to a cat, we expect the model to tell us that there is a feature that looks like the nose of a cat. This is one example, but there are also a lot of, um, methods for how we can do interpretation. I think these are too technical, so I can't explain more than that; this is interpretation in a high-level way.

Tusem: For a human, it's really, like, easy to see whether something, like an animal, is a cat or not. It's really easy because you can immediately make that decision. For a computer, I think it's really hard and really complex. I think we are overlooking [00:11:00] how complex our brain actually is.

So you have interpretation and you have explainability. Explanation. Explanation. So the explanation is that the machine can actually, like, show you something, that there is an explanation.


Saja: It's always related to the prediction. If, uh, if the model predicted there is a cancer in the image, the model should highlight the region. Let's say: where is the region of the image that tells us there is a cancer?

Tusem: Okay. And interpretation is actually, like, the characteristics of something you're looking for. Like you said, the nose of a cat, in order to understand.


Saja: Simply like this, yeah.

Okay. The model is actually looking for features that specify that object. For example, a cat: cats are well known for their cute nose. So when we give the model a bunch of images of a cat, it can recognize, for example, the nose of a cat. Yes.

Tusem: I see. [00:12:00] Um, you're also talking about some models being black box and white box.

Is this the right moment to tell us more?


Saja: Yeah, yeah, yeah. Indeed, indeed. So now, after I explained these two really important research questions, we can say that explainable AI is a very wide domain. As you mentioned, there are white-box models, and in this case we are talking about models that are transparent to a human being.

An example of these models is decision trees, and these models are designed with if-else statements, so it's easy to follow how the decision was made. On the other hand, we have the black-box models, and these are a bit complicated. In this case, these models are most likely deep neural networks.

I will be talking specifically about convolutional neural networks, which is also your expertise. [00:13:00] Not directly, but yes, I can say yes. Yeah. Just say yes. Um, not really, because my expertise is not explainable AI with convolutional neural networks, but with capsule networks. So, as I mentioned, a convolutional neural network would be a perfect example.

A convolutional neural network is built from many layers. We have the input layer, and between the input layer and the output layer there are many layers. These layers are interconnected with each other, and those connections together make a highly complex and computationally heavy model. So we, as human beings, can't follow how the decision was made by the network by looking at these connections, because there are sometimes millions or thousands of parameters.
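The white-box idea described here can be sketched in a few lines: a decision tree is essentially nested if-else rules, so every prediction can be traced by hand. This is a toy illustration with made-up features, not code from the episode:

```python
def classify(has_whiskers: bool, weight_kg: float) -> str:
    """A toy white-box classifier: every decision is a readable if-else rule."""
    if has_whiskers:                 # rule 1: does it have whiskers?
        if weight_kg < 10:           # rule 2: is it a small animal?
            return "cat"
        return "big cat"
    return "not a cat"

# The explanation of a prediction is simply the path of rules taken:
# whiskers -> light -> "cat"
print(classify(True, 4.2))
```

A deep network offers no such readable path, which is exactly the contrast Saja draws next.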

Tusem: Which is also not safe. For instance, if you want to see whether there is a cancer in a CT scan or not: if it's a black box, you [00:14:00] don't know whether it's accurate or not.


Saja: Yes. And that's why we are talking about post hoc methods, which would be applied to black-box models. So how can I make the connection for you?

Why do we call them post hoc methods? Because these are external methods: we optimize them, and then we apply them to the original model to explain the predictions. So, for example, we have an AI system, and the system performs very well, a highly accurate model giving us good predictions.

But now we want to explain it, and we can't open it and understand how it makes predictions. So we as researchers can optimize an external method and attach it to the original model. In this case, we can explain that black-box model.
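One very simple post hoc idea can be sketched as a toy occlusion test: treat the model as an opaque function, hide one input feature at a time, and see how much the prediction moves. This is an illustrative sketch of the general principle, not the actual methods used at IDLab; the `black_box` function here is made up:

```python
def black_box(features):
    """Stand-in for an opaque model we cannot inspect:
    some hidden weighted combination of the inputs."""
    hidden_weights = [0.7, 0.1, 0.2]
    return sum(w * f for w, f in zip(hidden_weights, features))

def occlusion_importance(model, features):
    """Post hoc explanation: zero out each feature in turn and
    measure how much the model's prediction changes."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = 0.0
        scores.append(abs(base - model(occluded)))
    return scores

# The feature whose removal moves the output most is the most important one.
scores = occlusion_importance(black_box, [1.0, 1.0, 1.0])
```

Note that the explainer never looks inside `black_box`; it only queries it, which is what makes the method external, or post hoc.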

Tusem: You're talking about external methods? Methods, yeah. These methods you are using, are they [00:15:00] also globally used?

Because I suppose there are many black-box systems right now that companies are probably using. Do they also care about explainability? Maybe I should ask it this way.


Saja: Yes, I can say recently it has gotten a lot of attention. I was searching for some papers, and I can see that researchers are starting to write paper titles like "Explainable classification for X-ray images". Um, that was the name.

So now they are starting to care about explaining the model, because yes, we need to understand. For me as a researcher, sometimes I'm worried, like, okay, how come these models are so highly accurate? So we need to understand how these predictions were made.

Tusem: So you told us a lot about the black-box and white-box methods and the post hoc method. Right now you are, like, focused on a different method. Can you talk about your [00:16:00] research right now? Your PhD research at IDLab?


Saja: Okay. So now, as I mentioned earlier, we have white box and we have black box. And in 2017, capsule networks were introduced. This network is a little bit advanced, and it's called, in general, interpretable by design, because we need something in between too simple and too hard.

So they introduced interpretable by design, and in this case, we don't have to add an extra component on top of the original model to follow how the model provides these decisions, like you do with a black box. Yes. And, um, this kind of network follows the logic order of a human being. [00:17:00] Uh, let's say, um, like, not a mind, come on, but they have the logic order of a human being.

This kind of network is designed to have this hierarchical relationship between the parts and the whole.

Tusem: What you're trying to say is they don't have a consciousness, but they have a logic.


Saja: That's what they are trying to tell us in the paper. They follow the logic order of a human being by implementing, or by providing, this hierarchical relationship between the parts and the whole.

Can you give us an example? Yeah, let me explain what I mean by the parts and what I mean by the whole. Just a simple example: imagine that, as a human being, we have our head, our hands, and our mouth. So what I mean by the parts would be the hands and the head and the mouth and the rest of the body parts.

Yes. [00:18:00] Okay. And these parts should be sent to the next level of the network, and the next level of the network will be the whole. The whole should be accepting some information from the parts level. So now the parts level, the human body parts, like the hands, the mouth and the head, will send information to the whole.

Also, for example, we have a chair and, for example, a laptop; they are also going to send their information to the whole level. But the whole level knows what kind of information it wants to receive. So it will only accept the information from the hands and the mouth and the head of the person, and this kind of layer will ignore the rest, because it's

Tusem: Looking for body parts; it knows what it is expecting.

Yes. And therefore it's only going to get the

Saja: The, the hierarchical relationship. I hope I explained it really well, because to be honest, I can't explain it any more easily than that. It's a really, really complex architecture. It took me one year to get to know this architecture.
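The part-whole routing described above can be caricatured in a few lines: a "whole" (here, a person detector) declares which parts it expects and ignores everything else. This is a toy sketch of the idea only, not an actual capsule network, and the part names and scores are made up:

```python
# Toy part-whole aggregation: the "whole" level only accepts
# information from the parts it expects, ignoring the rest.
EXPECTED_PARTS = {"head", "hand", "mouth"}

def whole_level(detected_parts):
    """Accept only the parts the whole expects; ignore the rest."""
    accepted = {name: conf for name, conf in detected_parts.items()
                if name in EXPECTED_PARTS}
    # The whole's confidence grows with the parts that support it.
    confidence = sum(accepted.values()) / len(EXPECTED_PARTS)
    return accepted, confidence

# Parts detected in a scene; "chair" and "laptop" are not body parts,
# so the person-whole will ignore them.
parts = {"head": 0.9, "hand": 0.8, "chair": 0.7, "laptop": 0.6}
accepted, confidence = whole_level(parts)
```

In a real capsule network this selection is learned via routing between capsule layers rather than hard-coded, but the intuition is the same: the whole listens only to parts that fit it.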


Tusem: You're doing your PhD, so I can understand why this is so hard to explain.

But I think you did a good job with that. Let's just take a step back and go to, like, the beginning of your studies. Well, I know that in Europe, IT is not a domain that women will go for, unfortunately. But I heard, and I also saw in some articles, that that is not the case for Jordan and Palestine, for, like, the Arab countries; it's the opposite way. And I have an example just in front of me. I want to ask you: what actually inspired you to pursue a PhD in this specific area of research?

Saja: of research. Okay, so pursuing a PhD in [00:20:00] explainably I can be driven by combination of my passion of AI and bringing the gap between artificial intelligence and the humans. I mean, like explaining the decisions and academic curiosity. It was the passion for learning. For me, we never stop learning. So every day is a new chapter.

That's the way I think. And I can add, I always wanted to be someone who has the knowledge to help others as much as I can. And one more thing: as part of the PhD training, I have the opportunity to supervise students, and I really love their development, from the moment they start with me to the moment they graduate.

So I see that development as rewarding for me. And to be honest, I see it in myself as a student. Like, when I came here to Antwerp, I was a completely [00:21:00] different person. I had never heard about explainable AI or capsule networks. Wow. So now I'm seeing myself growing. Really growing.

Tusem: But explain to us with more words: how do you feel looking back at yourself, the first time you came to Antwerp?

Saja: I feel proud of myself. Yeah. I feel really proud of myself.

Tusem: I can count on that. Um, I also want to ask about the fact that you were supervising students, you said. How does that feel, really?

Because you were, well, you're still a student, but you're doing your PhD and you're not in the classroom anymore. How does it feel supervising other students? Do you remember the times when you were one?

Saja: For me, it feels really weird [00:22:00] and it feels really good as well, because I want to share my knowledge and I want to see how the students react. Like, I always want to help people by sharing my knowledge, and I really love that part of supervising students.

Tusem: I can imagine that there are also listeners who are students, and they're planning to go to another country to study, like with Erasmus or something.

Do you have, like, tips for them?

Saja: You have to be patient. You have to be passionate, first of all, and you have to be disciplined and consistent at the same time. And you have to be open to other opinions, whether negative or positive. So I think those would be, like, the three tips for them.

Tusem: While we are talking about students: I get asked a lot about whether you need mathematics in order to have a job in IT, because I'm also in the IT industry. Well, in my case, it was not [00:23:00] necessary for my specific branch. I tested software, but I also did programming, and I know that in programming it helps if you have a mathematical background, like, even just an understanding, but it's not always the case.

But I'm wondering how the situation is in AI, if a student wants to study it. What's your take on that?

Saja: Mathematics is indeed crucial in understanding AI. But there is two paths in the eye. You can go for engineering or you can go for fundamental research. So AI involves complex, complex algorithms, statistical models, optimization techniques that rely heavily on mathematics, and some of these concepts commonly used in AI, for example, are linear algebra, probability Statistics and optimization.

Tusem: So it actually depends on which direction the person goes.

Saja: Engineering or research. Indeed, engineering or [00:24:00] fundamental research. For example, um, that's exactly what happened to me. The research I did in Jordan only required a simple level of mathematics; let's say I was doing my research from an engineering point of view.

But when I came here to Antwerp, when I joined the lab, I had to dig deeper into math. And now I can say I have to write my own formulas; even if they are simple, they had never been introduced before in the field of AI. Yeah. So I needed to learn those skills. I can't say I'm a professional now, but I'm proud of myself, how, like, I came here without that knowledge and now I can do it.

Tusem: You can say that you can make a switch from engineering to. Fundamental

Saja: research. It's possible. Yes, of course. Nothing impossible

Tusem: in life. Oh, yeah. I shouldn't have asked that question in the first place to you. Okay. [00:25:00] Before we, we close this part about students, uh, let me ask you one last question.

What advice would you give to aspiring researchers or students that are interested in pursuing a PhD?

Saja: Okay. So my message would be: um, if you have an idea or a dream, just believe in it. And more importantly, believe in yourself to make it happen. In my opinion, to make it happen, we need to be passionate about what we are doing and accept the rejections that we will be facing on our journey.

And as PhD students, we face lots of rejections, like for our papers, and we don't want these rejections to break us.

Tusem: That's a good exercise, uh, for those who want to learn to take rejection. Just be a PhD student and you'll be fine.

Saja: I can also tell you that being disciplined is very important to move forward with a PhD.

That would be my advice. [00:26:00] Just chase your dreams, because for me, a PhD will never be a normal job.

Tusem: So you are doing your PhD research on capsule networks, and you just gave us, like, a brief idea about what that is. Can you give us the key goals, like the objectives, of your research on capsule networks right now?

Saja: Key goal number one is based on the fact that capsule networks are getting attention in sensitive domains, the health and military domains. And why are they getting attention? As researchers, or as AI companies, they are looking for highly accurate models, and capsule networks are beating deep neural networks, for example convolutional neural networks.

So that's why capsule networks are getting attention. And now our goal as explainable AI researchers: we want to tell why it's important to understand capsule networks, because their interpretability [00:27:00] capabilities have not been fully assessed. So if you look for research papers, you can't really see very many papers that explain the decisions that come from capsule networks. And you want to

Tusem: make a change on it, because there's not much research done on it.

So this is the first reason why you started your PhD research: because there is no research

Saja: At all, yeah. Also, there is not much research introduced in the field, so that's why we are doing this; we're taking the initiative, actually, kind of. And one more thing that is really important: the core of capsule networks is built on the hierarchical relationship between the parts and the whole.

So we want to analyze, we want to understand or verify, if this relationship exists in capsule networks. So let me add something, to make it more clear what we are trying to do. For example, we are trying to explain this kind of network by [00:28:00] providing principled experiments using visualizations, and to verify whether these visualizations are intelligible. And what do I mean by intelligible?

Intelligible? Yeah. That means these visualizations are understandable by a human being. If we visualize some images from a capsule network and show them to you, okay, I ask you: can you understand what this visualization is? If you say, yes, I can understand this, then we can say the capsule network provides explanations, visualizations, and these visualizations are intelligible.

The second point is that we pay special attention to the part-whole relationship. Why? Because it's the core of capsule networks. And in this case, we have two layers. Let's imagine we have two layers: the first one is the high-level [00:29:00] layer, and the second one is the whole layer, or the classification layer. In this case, we obtain relevant features from the intermediate layer,

and then at the same time we obtain features from the classification layer. Then we compute the response for these features together. What do I mean by the response? We create a kind of heat map, and these heat maps, as I mentioned earlier with the explanation part, should highlight the important regions behind the decision the capsule network made. And then, having these heat maps from the high-level layer and the classification layer, we overlap the two heat maps together, and we end up with results, let's say [00:30:00] numbers, or it depends what we are trying to do.

And then we report our results based on this overlapping, on what we achieved with the overlapping. I'm not sure if anyone will understand what I'm talking about. But of course, if anyone has questions about what I was talking about, you can reach out to me. You can share my LinkedIn, my email,
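The overlap step in this description can be sketched numerically: take two small heat maps, threshold them to their important regions, and measure how much those regions agree. This is only a toy sketch of the described procedure; in the real work the heat maps come from the network's layers, not hand-written grids:

```python
def important_region(heatmap, threshold=0.5):
    """Binarize a heat map: keep only the cells above the threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in heatmap]

def overlap_score(map_a, map_b):
    """Intersection-over-union of two binarized important regions."""
    inter = union = 0
    for row_a, row_b in zip(map_a, map_b):
        for a, b in zip(row_a, row_b):
            inter += a and b
            union += a or b
    return inter / union if union else 0.0

# Toy heat maps standing in for the intermediate and classification layers:
parts_map = important_region([[0.9, 0.2], [0.8, 0.1]])
whole_map = important_region([[0.7, 0.6], [0.9, 0.0]])
score = overlap_score(parts_map, whole_map)  # agreement between the two layers
```

A high score would mean the two layers highlight the same image regions, which is one way to check whether the part-whole relationship shows up in practice.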

Tusem: I will add your LinkedIn profile or your email address because

Saja: I really, I really believe that this kind of research is too technical, even for some computer science researchers.

Tusem: Yeah, but I also see you're, like, happy to explain to people what your PhD is in detail. I know that I asked Saja, like, one question maybe four times. You are truly patient in this PhD research. She still does her best to explain her research to me, and I'm [00:31:00] really thankful for that.

Saja: You are most welcome.

Tusem: You recently gave a speech here in Antwerp, uh, for Closety, uh, the women in tech community. You had a lot of engagement, actually. How was the feedback?

Saja: feedbacks? I really loved that presentation, actually. I took the step to overcome one of my biggest fear, which is talking in public or simply giving small presentation.

I'm always afraid of giving presentation, or I don't know. It's just terrifying for me. But I received a lot, lots of feedback, lots of good feedback. I'm not sure how, how many people were there, but I assume that more than 50. Yeah, yeah, there were a lot of people. This is the first time for me, I been speaking like in front of a huge public for me for a huge number of people for me and I saw how the audience was interested in explainable AI and I got [00:32:00] some amazing feedback and one of them, for example, that was amazing how you could explain such a complex method in such an easy way.

So that's one of them.

Tusem: I think that's also the best compliment a PhD student can get, right?

Saja: I think so, yes, because we need to explain our research of, like, four years in, I'm not sure, 15 minutes or half an hour.

Tusem: I wish you all the luck on that day. You've been working on your PhD research for almost, like, two years and eight months now.

And are there maybe breakthroughs in your research that you can share with us, ones that you are really proud of?

Saja: of. First of all, I'm proud to be here as a part of ID lab. And I'm especially proud to work with Jose Oramas, who's currently my supervisor. Like working with him has been such a pleasant experience for me.

And that's like whenever we finish a specific [00:33:00] research specific research idea, we feel that we are proud of our outcomes and conducting research and explain a billy. I especially for capsule network. So I believe this research is unique at this point. More precisely the way how we introduce our experiments.

So we have looked deeply in in capsule network architecture. We're trying to understand how this complex architecture works to make it more transparent and understandable for a human being. And I also want to mention like one research I'm so proud of. It was the outcomes from, from the outcomes of my master thesis.

So that research, it was related to natural language processing. Is that the one about the tweets?

Saja: Yes.

Tusem: Yeah, I really liked that one. Tell us, what is it about?

Saja: It's about detecting emotions from tweets, but [00:34:00] not only detecting whether the person is angry or happy or, um, somehow sad; we can also detect how angry the person is. For example, is he 90 percent angry, or 50 percent, or just a little angry?

So that was the key goal of that research at that point.

Tusem: That's really cool, because, uh, when we WhatsApp or something, we have to use emojis. Let's say you are in a serious conversation and you use, like, an emoji; that conversation is not serious anymore, it's casual or, like, funny.

So even for humans, it's hard to understand emotions through, I mean, through text messages. That's very interesting. You can even calculate how angry or how happy or how sad the person is.

Saja: Yeah, and I also want to mention, um, [00:35:00] you know, the outcomes from that research. Like, everyone was obsessed with trying deep learning and neural networks,

but in that case I decided to try XGBoost, and it worked very, very well in comparison with deep learning, for example LSTMs or convolutional neural networks. And at this point, I still ask people, when they are doing research, like: how big is your data set? Do you work with numbers, or, for example, forecasting?

And then I directly tell them: please try XGBoost and let me know. And after two weeks they come back to me: yeah, Saja, it works well for us, it's better than LSTMs. I'm really happy with that research paper. And one more thing: that research paper was part of a kind of competition that happens [00:36:00] yearly in the natural language processing field.

Tusem: Is that something in Jordan, then?

Saja: No, no. It's around the world. It's global. And my result, at this point, after two years, I'm still ranking number one in that specific, um, competition. Wow. Congrats. Thank you. So, yeah, I really like that research paper.

Tusem: It's a very cool one. That's also the one that I always bring forward when I talk about you to friends.

I'm also wondering, if someone is passive-aggressive, like, they don't show their aggression, they have these microaggressions, and that's also possible in text messages. Is it still accurate, would you say?

Saja: You say. Maybe you can analyze it from the context, but sometimes also AI models, they give wrong predictions.

Yeah, that's always. Yeah, especially with text data, it's really, really complicated. So

Tusem: There is a thing going on with AI. It's a big thing now and you can use it [00:37:00] everywhere. But there is also an alarm going off, especially after the introduction of ChatGPT. I also saw pictures on social media of Sam Altman, the CEO of OpenAI. Apparently he has a backpack with a stop button to shut down all of ChatGPT's servers if something goes wrong, if AI takes over. I don't know on which level that's true; I hope it's not. But I want to ask you that question, because you also asked it at the end of your speech at Closety: will we be replaced by AI, and do you think we should be afraid?

That's like the popular question.

Saja: Yeah, indeed. In my opinion, AI is just a tool that revolves around humans, and if we use it properly there is no need to fear being replaced. AI, like any other machine, remains under our control, since we are its creators.

Tusem: Well, that's also why people are afraid, because [00:38:00] it's like teaching a child: when the child grows up, it will become a good or a bad person. That's what everyone is actually concerned about: who is training it, and how will it be used?

Saja: Yeah, of course, it is a human who trains it, a human researcher. Therefore the responsibility lies with us. If there is anything to be afraid of, it would be the misuse or mishandling of AI by humans. In the end, I would say we have to accept the idea that AI will become our assistant in our daily life. I hope this will not scare many people.

Tusem: I can understand why you're saying that. I don't think people will be afraid of using AI in their daily lives, because they are already starting to do it. I think we should be afraid not of AI, but of humans; I think that's also what I can take from your answer. AI will be our assistant, but there will also be a [00:39:00] data transfer between the AI system and us, and how will that data be used? That's the concerning part. I also think AI creates another domain where it can actually provide work, new jobs for people.

Saja: Not everyone is familiar with AI. That's the problem.

Tusem: Yeah, that's true. So the message is: you should become familiar with AI.

Saja: I think so. I can see it: everyone, even people who are not specialists in computer science, from business and other domains, are trying to learn Python and get involved with how machine learning works. So I can see it happening.

Tusem: I know that before ChatGPT was a thing, before the rise of AI, I was following an entrepreneur on social media, and he always said: you should know coding, at least the basics, even if you are a doctor, a teacher, or a pharmacist, because in [00:40:00] some way it will be everywhere. And now I think that has shifted to: you should know the basics of AI and its implementations, in a practical way. And I think the topics we talked about today open a door for others, hopefully women, to go into AI in more detail and actually go for AI. So I think we did a good job here.

Saja: I think so. We should be proud of ourselves.

Tusem: I hope so as well. Thank you so much, Saja. It was really refreshing to speak with you here.

Saja: I can't tell you how much I enjoyed being your guest today. You are such an amazing host, and I really wish you all the best on your journey with the ThinkWork podcast. Thank you very much.

Tusem: Thank you so much.

This brings us to the end of this week's episode. If you got any value from this conversation I had with Saja, please consider following me on whatever platform you're listening on. Also, don't forget to turn on your notifications to get notified whenever I post a new episode. You can say salaam on the [00:41:00] Instagram page, and I will see you in the next one with more thinking work to do. Salaam!


