Season 1 – Episode 36 – Vibe Research and the Future of Science
Discusses vibe research, or vibe science, an emerging approach to scientific research using AI.
Podcast Chapters
To easily navigate through our podcast, simply click on the ☰ icon on the player. This will take you straight to the chapter timestamps, allowing you to jump to specific segments and enjoy the parts you’re most interested in.
- Introduction and Guest Introductions (00:00:03) Host introduces the podcast, topic, and panel of experts, who then introduce themselves and their backgrounds.
- Defining Vibe Research (00:02:22) Panelists define “vibe research,” distinguishing it from traditional AI-assisted research and citizen science.
- How Vibe Research Works: Examples and Applications (00:04:38) Discussion of real-world and hypothetical examples of vibe research, including full AI-generated scientific papers.
- Ethical and Methodological Challenges (00:07:27) Panelists discuss AI hallucinations, reproducibility, and ethical concerns, including bias and misuse of AI-generated research.
- Systemic Issues and the Need for Change (00:10:55) Exploration of how AI amplifies existing problems in research culture and the need to transform incentives and infrastructure.
- Opportunities and Responsible Use of Vibe Research (00:12:54) Panelists debate whether vibe research should be avoided or embraced, emphasizing the need for critical understanding and human oversight.
- Education, Training, and the Role of Young Researchers (00:13:57) Discussion on adapting education and research training to responsibly integrate AI, and the importance of including young people.
- Types of Research Suited for Vibe Research (00:19:03) Panelists identify research areas where vibe research is useful (e.g., reviews, information retrieval) and where it is risky (e.g., synthetic data, high-stakes fields).
- Human Oversight and Qualitative Research (00:24:45) Emphasis on the necessity of human engagement, especially in qualitative research and contexts requiring deep interpretation.
- Maintaining Public Trust in Science (00:26:35) Strategies for upholding public trust, including participatory research, transparency, and engaging non-traditional actors.
- Tools, Fact-Checking, and Institutional Responsibilities (00:30:10) Suggestions for improving methodological rigor, fact-checking, and the need for institutional incentives for integrity and openness.
- Communicating Science and Public Engagement (00:33:02) Importance of communicating research to the public through diverse media and fostering engagement beyond traditional academic outputs.
- Resources and Critical Thinking (00:34:26) Advice on staying informed, promoting critical thinking, and not relying solely on major AI companies for direction.
- Final Reflections and Cautions (00:36:57) Panelists share closing thoughts on the potential and risks of AI in research, emphasizing responsibility, humility, and the need for new methods.
- Conclusion and Podcast Outro (00:41:39) Host thanks the guests, promotes related resources, and closes the episode.
Episode Transcript
Daniel Smith: Welcome to On Tech Ethics with CITI Program. Today I’m joined by a team from MIT Critical Data, and we are going to discuss vibe research, which is an emerging approach to scientific research using AI. Before we get started, I want to quickly note that this podcast is for educational purposes only, it is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast.
In addition, the views expressed in this podcast are solely those of our guests. And on that note, I’m looking forward to today’s conversation. So just to get started, we’re joined here again by a panel of experts from MIT Critical Data, so can you all just go around and briefly introduce yourselves, starting with Leo?
Leo Anthony Celi: Hello, my name is Leo Anthony Celi. I am a medical doctor working in the intensive care unit at Beth Israel Deaconess Medical Center. My research is at the Massachusetts Institute of Technology, and it’s on building agency and capacity for the application of artificial intelligence in healthcare.
Hyunjung Gloria Kwak: Hi, everyone. My name is Hyunjung Gloria Kwak, I’m an assistant professor at Emory University’s School of Nursing, with a PhD in computer science. I research bias-aware modeling, social determinants of health, and simulation studies, integrating large-scale EHRs and multimodal data to improve decision-making, and I lead projects on predictive modeling, representation, and real-world AI evaluation in healthcare.
Sebastian Cajas: And hi, everyone, my name is Sebastian Cajas. I’m a senior data scientist at the National Data Center for Artificial Intelligence. I’m currently specializing in generative AI and also in responsible AI, especially in areas such as multimodality.
Meskerem Kebede: Hi, my name is Meskerem, my background is in medicine. I work as a health policy and economics researcher at the London School of Economics. My research focus is largely on global health policy and economics, and currently I’m exploring the applicability of AI in these specific settings. Yeah, happy to be chatting with everyone today.
Daniel Smith: Well wonderful, it’s a pleasure to have you all and I’m looking forward to hearing your diverse perspectives on the topic of vibe research. So on that note, to get started could you all share your definitions of vibe research and include some of the key distinctions that set it apart from more traditional forms of AI-assisted research or even citizen science?
Sebastian Cajas: Sure. So, in the way we have been analyzing it, we are defining vibe science, or rather vibe research, as AI-generated or AI-shaped scientific work that is usually optimized to sound good and plausible rather than for rigor and validity. It often reads very convincingly and sounds methodologically correct, but it lacks empirical grounding and careful reasoning. And it is probably closely related to the vibe coding trend that Andrej Karpathy defined recently, but applied to the research area.
Hyunjung Gloria Kwak: So I’d like to add a little bit to that. Some of us might think this sounds like the bottom-up, evidence-based research we already learned in data science, but there’s an important distinction. The kind of AI models behind vibe science, especially large language models, are trained on narrative-rich text such as books, conversations, and articles out there. That means they are naturally inclined to connect the dots into a story, even from a small fraction of a dataset, if it merely feels interesting or meaningful. That is very different from the traditional bottom-up approach we learn in data science, which generally relies on patterns that appear across most or all of the dataset, and then we do the validation. Vibe science doesn’t work that way: the storytelling comes first, and the validation may not follow until later, or at all. So those are two very different things that I wanted to point out.
Daniel Smith: So to bring it more into practical applications, could you walk us through a hypothetical or real-world example of a vibe research project? Are people conducting the entire scientific process using generative AI in this way or are there elements of it that they’re using generative AI for and then other more traditional forms of research? Just walk me through an example.
Leo Anthony Celi: So I’ll start us off. There have been a number of papers that have made it into the news and have been demonstrated or proven to be entirely AI-generated. There have been papers that were submitted to and accepted into publications or conference proceedings where the authors revealed that they intentionally used AI throughout the entire pipeline, from generating the data, to performing the analysis, to writing the manuscript. We suspect that this is increasingly happening. There have been a lot of discussions about the impact of AI on the scientific process, but generally, for some studies, especially those that involve machine learning, we can definitely outsource most, if not all, of the different parts of scientific inquiry and the scientific process to AI.
Sebastian Cajas: Yeah, I wanted to say that, in terms of a hypothetical example, I think this is becoming very trendy and popular today because it essentially creates the illusion that everyone can do research now. One example could be “create a paper on renewable energy”: AI will create a very beautiful report with all the details, and it will state everything very confidently. However, it will fail to cite many of the underlying and most important factual papers that carry the correct information, because there is a big problem with fact-checking and with aligning the real sources against what the LLM is saying, and this is very dangerous, I would say. LLMs, due to their hallucinations, tend to contradict themselves or assert things that just don’t exist. So this is definitely a double-edged sword, because there is a need for very deep fact-checking and, as Leo mentioned, a consensus on how the methodology could be improved rather than worsened using AI.
Daniel Smith: So just going off of that a bit, you mentioned AI hallucinations, which is one concern. But what are some of the other major ethical and methodological hurdles that vibe research introduces? For example on the note of AI hallucinations, how do you handle things like that and how do you ensure that the results can be independently verified and replicated?
Meskerem Kebede: Before I come to your question, Daniel, I just want to highlight the fact that there has been [inaudible 00:07:55] in productivity and research outputs in the past few years, that’s what I have seen, and it is driven by a number of things. But definitely, in a research career it’s very important that one publishes and gets as much output as possible, and I think that is a really important push factor for why this could become a phenomenon going forward. And as the capability of LLMs improves, the writing and the outputs they produce get so much better.
On the methodological side, as you’ve just mentioned, one of the issues is definitely hallucination, but it’s also the fact that in the research world we read papers and try to see what we can learn from them for a specific study or project we’re doing, whereas here it’s just recognizing patterns and putting pieces together. So sometimes the problem is you see a paper and think, oh, the output is incredible, let me see how I can reproduce that, and we’re seeing quite a lot of challenges with reproducibility of the different methodologies. And ethically as well, some of the key problems are around using other people’s work for your own research, or presenting results as if they came from your own study.
Leo Anthony Celi: I’m just going to add to what Meskerem pointed out: the problems of vibe research are not new. What vibe research or vibe science is doing is truly enhancing, enabling, and scaling the flaws that we’ve seen with research as a whole. And a lot of the issues that AI is now shining a spotlight on can really be traced back to the publish-or-perish culture that we have been lamenting for close to 100 years now, according to research. And in the presence of these very perverse structures, infrastructures, and incentives, what’s going to happen is that very quickly AI is going to accelerate the destruction of institutions that in the past have supported scholarly activity and research.
So to us, in order to avoid the destruction of research, we need to transform those incentives, those infrastructures, those ways of operationalizing scholarly activity very quickly. And we should be grateful to AI for giving us this opportunity, allowing us to rethink, to reflect, to re-examine what we have been doing in terms of knowledge creation, generation, validation, and dissemination. And perhaps we could dwell on this topic a bit more, and I would love to hear perspectives from Latin America, from Africa, from Asia, and from North America on this particular topic or problem that I’m highlighting now.
Sebastian Cajas: I just have a last piece to add to the question. I was thinking about the ethical concerns, and I think the biggest problem is that LLMs are trained on a bunch of internet data. So the biggest problem is actually in the data itself, the amount of bias existing in it. And that is going to lead to bias amplification: because of the distribution of the data, the models will tend to highlight highly cited papers that are not necessarily good science. So the models themselves still need to reshape the way they think. That’s why there is a new area where we are trying to lift LLMs, which today are largely memorizing, toward more intelligent models with other qualities. In this case, one way to solve that is embedding curiosity and humility, where we can allow the model, for example, to cite more diverse papers. That would actually improve fact-checking, since it amplifies the minority perspectives that can break through these vibe research or vibe science prompts.
Daniel Smith: So it sounds like there’s some opportunities with vibe science but there’s also a lot of concerns, and as Leo was mentioning, it highlights or exacerbates some existing issues that have been present in research for many decades. So looking ahead, what do you all think the ultimate trajectory is for vibe research? In its current state should it be avoided altogether or could it still serve a meaningful purpose?
Hyunjung Gloria Kwak: So I don’t think vibe science should be avoided, but it needs to be used with a clear understanding of what it is, like any other AI model or AI-based system. Just as I would read the manual before using any new piece of equipment or device, we should know how these models work, what they are good at, and what kinds of mistakes they tend to make. When used well, vibe research could still be a great supporting tool for human beings. The AI can surface signals or patterns earlier, or even ones we didn’t expect, and humans can bring domain expertise, critical thinking and judgment, and then testing and validation. Then it can be turned into solid findings, and it becomes a way of complementing our strengths rather than just trying to replace human beings with these AI tools.
Meskerem Kebede: I completely agree with what you said, and I think there are a lot of opportunities for researchers worldwide to utilize AI, and LLMs specifically, alongside other tools. And I think there is quite a lot of applicability around looking at the literature and trying to get different sorts of insights, for example interdisciplinary insights from disciplines that we’re not very comfortable with. So I see quite a lot of benefits, but I just want to digress a little and say this is where, for example, our education system, our research training, and all these different infrastructures need to really come together and think about how we hone our training, how we supervise, how we mentor early-career researchers, and how we prepare ourselves in general for this era of working with LLMs, and see where our effort can be multiplied rather than duplicating work that LLMs can already do very well, as we’ve just discussed.
So I think it’s a critical time for everybody in the research world, wherever in the pipeline a person is located. This is a very good opportunity to think about how we bring back some of the elements we talked about, ethically and methodologically, in terms of critical thinking. Generally, that’s the reason why we do research: bringing in nuances that are only possible from human input and human expertise.
Leo Anthony Celi: It’s very frequent that we hear this question of whether we should allow learners to use AI. And there has been and there will continue to be ongoing debate. We are in the camp who thinks that we should allow learners to use AI, but to use it thoughtfully and with purpose. And this is where the incentives come in. If the perverse incentives are allowed to continue then we think that it is likely that vibe research, vibe science is going to hurt us. But what we need to focus on is how do we build faculty, build agency, how do we educate learners on how best to leverage AI into improving the way they identify the questions to ask, translate that question into a study design, perform experiments and sensitivity analysis, ultimately interpret the findings, and translate that into action. But that would require a re-imagination, redesign of existing systems for education, for research.
We still see a light at the end of the tunnel, but it will require a huge political will to transform the way we think, the way we learn, the way we work with each other. It will require evolution of systems that are ossified, that are known to be resilient and resistant to transformations. What exactly do we need to do? We don’t have the answer. This is such a small group to be able to come up with the craziest ideas on how to redesign education and research. We could for certain say though that it will require a much larger inclusive group of people who don’t think alike to be able to pave the road, to be able to even imagine what that blueprint might look like.
And this is the legacy of AI, it’s making us truly question the status quo, and that “us” is becoming a bigger community. It’s no longer just a few philosophers or a few academics, now we are able to engage younger people, and we are also in the camp who thinks that young people will play a crucial role in this transformation, in this AI revolution, in this period of great reflection, because they have a very different view of the world compared to us who are already hardwired to think that the world can only operate in a certain way. So we have been truly trying to convince those who make decisions, those who are allocating funding, to allow the young people to Trojan Horse their way into the systems. And I think we owe this opportunity to AI.
Alexa McClellan: I hope you’re enjoying this episode of On Tech Ethics. If you’re interested in hearing conversations about the research industry join me, Alexa McClellan, for CITI’s other podcast called On Research with CITI Program. You can subscribe wherever you listen to podcasts. Now, back to the episode.
Daniel Smith: So a few follow-up questions. One is in its current state do you see vibe research as being more useful for certain types of research, and what would those types of research be? And on the other side of that question, are there areas of research where maybe this isn’t the best approach at the time, although there may be future promise?
Leo Anthony Celi: Okay, I’ll start. What parts of research can the current state of vibe science, vibe research be suitable for? I think that would be reviews of topics that are relatively not new, I’m talking about narrative reviews, scoping reviews. I think that a team working hand-in-hand with AI agents, AI tools, model context protocols, vision language models can perform that within 24 hours. And this also gives us an opportunity to do this repeatedly. So in the past, if you wanted to do a systematic review or scoping review of sepsis, it got done every five years. For one, no one is really interested in reading reviews of a field that is not moving so fast.
But the availability of AI tools would allow us to have what we refer to as living documents that capture the most up-to-date information on that particular topic. So reviews are generally information retrieval, and information retrieval is one of the tasks where AI is blowing us humans away as competition. And I think that if you are doing this type of scholarly activity without the help of AI, you will get trampled on, you will not survive in the field.
Where would it not work? I think the area where it would not work is when generation of synthetic data is part of the vibe research pipeline. There have been a lot of papers showing that generating data from existing data and using that to discover findings is not going to work. And the problem with that approach seems very obvious: the information that is lacking is typically different from the information that is available, and creating synthetic data based on the information that is available will not address the blind spots of the existing data. So trying to model synthetic data to generate new findings is just not going to work. And if you keep doing this it’s going to get worse and worse, because we know that whatever we publish now becomes fodder, becomes training data, for the next generation of generative AI. And over a few iterations, what we discover as knowledge is going to be way too far off from the ground truth that we are hoping to understand.
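As a purely illustrative aside, here is a minimal Python sketch of the feedback loop Leo describes: each “generation” of a toy Gaussian model is fit only to samples drawn from the previous generation, so no new real information ever enters the loop. The toy model, sample sizes, and seed are assumptions for illustration, not anything from the episode.

```python
# Toy illustration of a synthetic-data feedback loop: each generation is
# fit only to data generated by the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data that we only get to observe once.
real_data = rng.normal(loc=0.0, scale=1.0, size=500)
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 11):
    # Train the next generation only on synthetic samples from the last fit.
    synthetic = rng.normal(loc=mu, scale=sigma, size=500)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# The estimated mean and spread drift away from the original data over
# generations, even though no new real information ever entered the loop.
```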
Sebastian Cajas: Well, I think even as of today AI in general is very good at creating very quick reports, for example, and I think this could be amazing for exploratory or hypothesis-generating research. I still believe that AI and LLMs lack a lot of the creativity and curiosity needed to create cross-domain ideas, so in a lot of companies that need to do this sort of exploratory or hypothesis analysis, or understand their own data, I think this could be good because it’s kind of like a draft. I think this could definitely be encouraged for low-stakes use cases, where there is no risk from high-stakes decision-making like in medicine, or nuclear policy, or disaster response, things that are extremely sensitive.
And what Leo mentioned about synthetic data I think also points to another area: all the fields where small-data effects could create a lot of bias. I totally agree that synthetic data is something that should be handled carefully. One example could perhaps be climate modeling: predicting an earthquake, for example, is going to be extremely complicated because we don’t have enough data. So definitely, if we prompt AI to write a paper about that, I’m sure it will not be very precise.
And definitely we should avoid areas where biases can be amplified. So connecting to the previous question, I think things like law cases could be very much biased, especially when there are politicians involved, for example, because there’s a lot of biased data that the model was trained on, so it will probably fail. So fields that are prone to bias are definitely something to avoid with vibe science.
Meskerem Kebede: I just wanted to offer some nuance and say that I think the key element to think about could be where human oversight is needed. For example, I work quite a lot with qualitative data in my research, and we do experiment with different sorts of text-mining tools and how they can, as Leo mentioned earlier, be used to facilitate the process. And that’s extremely useful in the initial stages and can definitely make the process quicker. But places where you need context, where you need to think about depth of interpretation or offer some sort of reflexivity, that is where it becomes slightly challenging to rely on these tools and where human oversight is really needed.
So in the context of qualitative data, whether you’re analyzing people’s interviews or analyzing documents, whatever sort of qualitative data you’re working with, unless your goal is just to create a summary report, the engagement of human researchers at the different stages, thinking about the theory, the interpretation, the audience, and ourselves as individual researchers, is really important. So for me in general, it’s a long-winded way of saying that it’s not really this research type or that research type, it’s really about the stage where human engagement and insight are needed in the process. And in terms of facilitation I still hold my ground that these tools facilitate quite a number of things, but it’s really important to make sure that humans are critically engaged throughout the process, keeping in line with the etiquette and processes we need to follow, the sensitive information we need to safeguard, and all these things around data security and privacy.
Daniel Smith: And building off of that, you mentioned the importance of human oversight and keeping a human in the loop. It just leaves me wondering, based on this conversation it seems like there’s a lot of opportunities for vibe research to potentially weaken the public’s trust in science if it’s not done appropriately. So what I’m wondering is what should researchers be thinking about or what can they do? What tools do they currently have at their disposal to help uphold the public’s trust in science when they’re conducting research in this manner?
Leo Anthony Celi: I think that one way of regaining society’s trust in us researchers is to engage society in the way we actually do research. So the phrases participatory research, citizen scientists have been suggested as a way of doing that. And this is something that our group has been actively engaging in, this idea of bringing in non-traditional actors to be involved in producing research. And we actually don’t like the word research. We think that research has a tendency to intimidate, it has a tendency to be taken as misinformation. So the word that we prefer is learning because learning should be part of what we do on a day-to-day basis. Learning should be a part of what the ordinary people should also be doing in their everyday life.
The problem is there’s no space for engaging society at large in truly co-designing a lot of the research that we do. And perhaps that’s where we should invest in, how can we design new systems for research, new systems for learning, new systems for education that will involve people who don’t normally think about these issues as co-designers, as co-architects? Again, this will require a huge political will. This will disrupt existing power structures, hierarchies of knowledge, and as you can imagine, this will be resisted by the ones who are in control of the knowledge systems. But perhaps AI is the opportunity that we have been waiting for.
It has gained and it’ll continue to gain more steam as we move forward, the amount of investments is just exploding over time. I think at this time they’re saying that it’s in the vicinity of half a trillion dollars of investment, in which case people might be more willing to listen to different ways, novel ways of doing research, because we know that the traditional methodology takes too long of a time. And by the time that a paper is published it’s likely that whatever was discovered, whatever was validated, will no longer be true. Just to summarize what I said, vibe researchers should engage people who normally are not part of science, who are normally not contributing to research, to solicit ideas, to co-design the research itself. And something that-
Sebastian Cajas: We were actually thinking about writing up a potential solution for this, which is about how we can make science more methodological. And probably the best answer is something through which we can ensure replicability. So it’s not just about writing the text and letting the AI do all the work; maybe best practice for papers should be to create some sort of Oracle hub, as we would call it, where we would ensure replication via different data collection methods. So research would not just be creating text but also ensuring the code works, that the code will be available, that the data will be available, and that this will always be part of the methodology.
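As an illustrative aside, here is a minimal sketch of the kind of automated check such an “Oracle hub” might run over a submission’s reproducibility manifest before any narrative claims are considered. The manifest format, field names, and file paths are assumptions made up for illustration, not an existing standard or tool.

```python
# Hypothetical sketch of a reproducibility-manifest check; the manifest
# schema below is an illustrative assumption, not an existing standard.
import hashlib
import json
from pathlib import Path

REQUIRED_FIELDS = ["code_repository", "data_location", "environment_file", "data_sha256"]

def check_manifest(manifest_path: str) -> list[str]:
    """Return a list of problems found in a submission's reproducibility manifest."""
    problems = []
    manifest = json.loads(Path(manifest_path).read_text())

    # 1. Every required field must be declared.
    for field in REQUIRED_FIELDS:
        if not manifest.get(field):
            problems.append(f"missing field: {field}")

    # 2. If the dataset ships with the submission, its checksum must match,
    #    so reviewers can confirm they are looking at the same data.
    data_path = Path(manifest.get("data_location", ""))
    if data_path.is_file():
        digest = hashlib.sha256(data_path.read_bytes()).hexdigest()
        if digest != manifest.get("data_sha256"):
            problems.append("data checksum does not match manifest")

    return problems

if __name__ == "__main__":
    issues = check_manifest("manifest.json")  # hypothetical example file
    print("OK" if not issues else "\n".join(issues))
```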
There are many tools. What Leo mentioned is, I think, one of the key pieces: having diverse teams where we can bring in multiple domain experts to reduce the amount of hallucinations and flaws. But I also think one of the key parts is that we need more tools for fact-checking; all the big companies working on LLMs today need to work hard on the fact-checking side. And we should also consider that models today, especially LLMs, lack a lot of curiosity, and they are not yet able to cross domains and replicate the mindset of Nobel Prize-level thinkers such as Einstein or any of the other big names. The key for them was being able to combine things like maths and law, for example, for economics. That’s how brilliant minds have been able to depict huge [inaudible 00:32:01] that have actually changed the history of humans.
So I think that’s one of the key qualities that LLMs lack, and we shouldn’t yet trust a system that relies only on text. We need more robust models that have all these qualities, and hopefully we will get there at some point in the future.
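One narrow, concrete form the fact-checking tooling Sebastian calls for might take is simply verifying that the DOIs a manuscript cites exist at all. The sketch below queries Crossref’s public works endpoint; the choice of Crossref and the example DOIs are assumptions for illustration rather than anything discussed in the episode, and existence alone does not show that a citation actually supports a claim, only that the reference has not been fabricated outright.

```python
# Illustrative sketch of one basic fact-checking step: checking that cited
# DOIs exist in Crossref's registry. Assumes network access; not a full
# verification that the cited work supports the claim being made.
import urllib.error
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows the DOI (HTTP 200), False otherwise."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False  # e.g., 404 for a DOI that does not exist
    except urllib.error.URLError:
        return False  # network problem; treat as unverified

if __name__ == "__main__":
    # First DOI is a well-known real one (the NumPy paper); the second is fake.
    for doi in ["10.1038/s41586-020-2649-2", "10.0000/not-a-real-doi"]:
        print(doi, "->", "found" if doi_exists(doi) else "not found")
```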
Meskerem Kebede: I would like to add a final point while agreeing with what both Leo and Sebastian said: unfortunately, quite a lot of the burden does lie with the researcher, and the researcher is incentivized by how they’re rewarded for their work. So I think institutions definitely need to take that opportunity to incentivize integrity, to promote openness and transparency, for people to share how they came up with their results, and all these things. So definitely a lot of the burden lies with researchers. And in tandem with what Leo said, there are a lot of opportunities for us as researchers to now communicate with the public, whoever they are, whether that’s the general public, or clinicians for instance, or policy people. So there are different media that could be leveraged to communicate our research to the people who are likely to use it.
And I think that’s a really key part of how we might be able to promote trust in the science that we do, whatever discipline that is: have those opportunities to engage, instead of just focusing on publishing in a prestigious journal, looking at impact factors, looking at citations. So opportunities to engage with the public in whatever shape or form are a really key element of research, and I think there needs to be more emphasis on that, around training and promoting such practices, from all of us as researchers, as well as from the institutions that hire researchers, fund researchers, and promote science in any way. So communication, and finding the media through which the public engages with science, is really important, because that’s where the public is. We need to meet the public in those spaces, not just the traditional outputs that we’re used to in terms of journals, scientific conferences, and whatnot.
Daniel Smith: So we’ve covered a lot of ground today so just a couple of final closing questions. One is do you have any recommendations for additional resources where listeners can learn more about vibe research and the issues that we’ve discussed today?
Leo Anthony Celi: My advice is to continue reading the news, but read the news with other people who have different opinions and different expertise than you do. The question is what can we do as a community to encourage this type of behavior? But there will be no set of resources out there that will allow us to really stay two steps ahead of what’s happening with AI. And what’s happening with AI is being ushered in by a few companies who have the biggest stakes, the most skin in the game. They are the ones who have invested the most money, they are the ones who have the most to lose, and we need to take the steering wheel from them because they cannot be the ones driving this revolution. We cannot allow that to happen, and our best immunity, our best bulletproof vest, is to engage in more critical thinking. The way we promote critical thinking is to bring people who don’t think alike together. That is the main ingredient, the main recipe, the magic sauce that will deliver us from this purgatory or even the start of hell.
But we are truly appealing to everyone who is listening to change the way you have lived your life, it cannot be the same way that you did a year ago. You need to reimagine what your work might be like a few months from now as your companies and hospitals start laying you off because they are increasingly convinced that an AI would do a much better job than you do. And for that reason, as I said, we need to stay two steps ahead. We cannot follow a capitalist narrative in terms of paving the way for AI to be part of research and scholarly work. So once again, we appeal to the listeners to continue being on top of the game, continue putting your finger on the pulse to know what’s happening in the world right now.
Daniel Smith: And on that note, does anybody have any final thoughts that we’ve not already touched on?
Hyunjung Gloria Kwak: So I also use AI every day in my research and see all the potential there, but I’d like to point out one more time that AI itself is not the truth. It generates possibilities, not facts, so we still need to test all the time. And our choices in how we use AI models will shape the world for future generations, not only their physical or informational environment, but also how they live, how they think, and how they decide.
I almost feel like it’s like the past industrial decisions that left all that visible environmental damage. Today’s technological choices are leaving an invisible mark on our planet, on our shared mental and informational environment, which is almost the same thing, just in a slightly different way. So if we really act with care, then we can still use AI’s strengths without eroding trust in knowledge or trust from the public. The standards that we set now are going to determine not just tomorrow’s facts, but also our own mindset and the decision-making that we are going to do after this. So I really want to point out that those are the really important things that we need to think about.
Sebastian Cajas: I think that across the history of human beings we’ve seen science correcting itself, and that’s the purpose of science. And it is happening right now with hallucinations, with biases; these are the errors that are becoming most palpable. So at the end of the day I think we need to continue the loop of science, which means we need to have a system that can be tested, that can be challenged, and that will be refined. And I’m sure the goal is to ensure that we are able to ask more questions and check the evidence, but also to move faster, because I think that’s the ultimate goal of science, while keeping the truth at the center.
So I think AI is an amazing opportunity and vibe science is not something to be afraid of, because we can use it as an amazing tool for discovery. We should just be cautious that right now we need to preserve our qualities, as Leo mentioned as well: the critical thinking, the complex reasoning, the humility to always ask questions and remember that we cannot fully trust these sources. Because I think we are still in a phase where we need to be responsible with the science that we create and we must develop new methods, and that’s how we will keep up. And there will be layoffs, but I think there will also be more scrutiny, and we need to evolve and redeploy things more smartly, reusing the tools we have, and I think we will just make more amazing science; we just need to reshape how we are working today.
Meskerem Kebede: I concur with a lot of what Sebastian has just said, and just to add that for everybody engaged in research in any shape or form, there are quite a number of resources out there. Beyond just using these different tools in our everyday research, as Gloria mentioned earlier, whether or not we’re engineers or data scientists, there is a lot of information out there to help us understand the tools that we’re using, how they come up with the responses they produce, and what sort of inputs they use to come up with those responses. That is very useful for everybody who is engaged in science, or is a scientist, or aspires to become a scientist. And it helps us think about how I, as a person engaged with this tool, get more use out of it, while also making sure that I have all the necessary safeguards in place to protect my research.
So I think we’ve definitely discussed the important things around institutional guardrails, but as individuals there are quite a lot of resources out there for us to really understand the tools that we’re using. And I think it’s incredibly important to ask: this tool that I’m using, whether it’s an LLM such as ChatGPT or another form, how is it coming up with the responses it’s coming up with, and how do I get the most use out of it? It’s really an important exercise for all of us in the research world.
Daniel Smith: And that’s a wonderful place to leave our conversation for today, so thank you again, Leo, Meskerem, Sebastian, and Gloria.
Sebastian Cajas: Thank you so much for having us.
Leo Anthony Celi: Thank you so much, Daniel.
Meskerem Kebede: Thank you, Daniel.
Daniel Smith: If you enjoyed today’s conversation, I encourage you to check out CITI Program’s other podcasts, courses, and webinars. As technology evolves, so does the need for professionals who understand the ethical responsibilities of its development and use. CITI Program offers ethics-focused, self-paced courses on AI and other emerging technologies, cybersecurity, data management, and more. These courses will help you enhance your skills, deepen your expertise, and lead with integrity. If you’re not currently affiliated with a subscribing organization, you can sign up as an independent learner. Check out the link in this episode’s description to learn more.
And I just want to give a last special thanks to our line producer, Evelyn Fornell, and production and distribution support provided by Raymond Longaray and Megan Stuart. And with that, I look forward to bringing you all more conversations on all things tech ethics.
How to Listen and Subscribe to the Podcast
You can find On Tech Ethics with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2120643.rss” into your podcast app.
Recent Episodes
- Season 1 – Episode 35: Managing Healthcare Cybersecurity Risks and Incidents
- Season 1 – Episode 34: The Essential Role of Bioethics in HBCU Medical Schools
- Season 1 – Episode 33: Integrating AI into Healthcare Delivery
- Season 1 – Episode 32: Modernizing Clinical Trials with ICH E6(R3)
Meet the Guests
Sebastian Cajas, MSc, BEng – MIT Critical Data
Senior AI Scientist at CeADAR with expertise in generative AI, quantum machine learning, and responsible AI. Former Fellow in Computer Science at Harvard, leading projects in multimodal AI, AI safety, and federated learning. Passionate about applying AI to healthcare, education, and high-impact societal challenges.
Leo Anthony Celi, MD, MS, MPH – Massachusetts Institute of Technology; Harvard Medical School
Dr. Celi is the principal investigator behind the Medical Information Mart for Intensive Care and its offspring, MIMIC-CXR, MIMIC-ED, MIMIC-ECHO, and MIMIC-ECG. With close to 100k users worldwide, an open codebase, and close to 10k publications in Google Scholar, the datasets have shaped the course of machine learning in healthcare.
Meskerem Kebede, MD, Msc, MPH – London School of Economics and Political Science
Dr. Kebede is a clinical and health systems researcher at the London School of Economics. Her work spans global surgery, digital health, and policy innovation. She brings expertise in evidence synthesis, AI-driven research, and international collaborations to inform equitable and sustainable health systems globally.
Hyunjung Gloria Kwak, PhD – Emory University
Assistant Professor at Emory University’s School of Nursing, with a PhD in Computer Science. She researches bias-aware modeling, social determinants of health, and simulation-based studies, integrating large-scale EHRs and multimodal data to improve decision-making, and leads interdisciplinary projects on predictive analytics, representation, and real-world AI evaluation in healthcare.
Meet the Host
Daniel Smith, Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program
As Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.