Trending in Education Podcast – Research, Tech Ethics and AI Innovation with Bharat Krishna
Bharat Krishna, Managing Director of CITI Program, joins the Trending in Education podcast and explains how CITI Program helps educate people on effective research methods and provides training on established best practices. Under his management, CITI Program has addressed the impact of COVID and the emergence of generative AI, which has raised ethical concerns in the business world. As technology continues to evolve rapidly, the challenge is to establish safeguards. We can gain insights from the wisdom of animals and speculative fiction about AI’s future. Tune in to discover more.
Mike Palmer: Welcome to Trending in Education. This is Mike Palmer. Really happy today to have someone I’ve been trying to get on the podcast for a little bit of time now. I am joined today by Bharat Krishna, who is the Managing Director of the CITI Program, which we’ll learn a little bit more about later on. Bharat, welcome to Trending in Education.
Bharat Krishna: Well, I’m glad to be here, Michael. I’m looking forward to this conversation. You and I go back a few years and I think this is going to be quite an interesting chat. I’m a very avid listener of your podcast, so happy to be on the show.
Mike Palmer: Awesome. I guess the pressure is on and it’s also off. If we don’t have rapport by now, it’s probably not going to magically appear. We always start by asking guests for their origin story. How did you get to this point in your professional life? Can you catch us up on how you got to this point in your career?
Bharat Krishna: Sure. The good news is it wasn’t very programmed, as with many other stories; it was a bit of an accident, a bit of good luck, and certainly a little bit of effort for sure. But my professional story is I joined the workforce during the dot-com era, was a media/semi-technology sort of person, worked at IBM for a few years before I went back for my business school years. And when I left business school, I went into consulting with McKinsey for a few years and really practiced my strategic thinking skills, advisory skills at the C-suite level, which was kind of interesting and fascinating, you know, what a lot of people call finishing school for MBAs. It was really great, and my clientele in those days were predominantly in the media, publishing, tech, et cetera, sector and education.
And when I left, I joined Kaplan, where you and I overlapped for several years. I spent almost a decade at Kaplan running different online education businesses, including some international test prep units, and really learned a lot about online education and got to meet some amazing people. Great culture at Kaplan. And about five years ago, I had this opportunity to join the CITI Program, which I’m happy to explain a little bit about. It’s a very unique program. It was founded at a university and grew very quickly, and the founder, who was a professor when he founded the program, was in the process of retiring, and they were looking for the next generation of leadership to take the program to the next level. And it was such a unique place that I felt it was something I should give a shot. It’s been a great ride so far over the last five years. It’s a great team, it’s a great program having a lot of impact.
Mike Palmer: The CITI Program sits in an interesting place. I am someone who likes to read acronyms, so it’s the Collaborative Institutional Training Initiative. It’s referred to also as the CITI Program dedicated to serving the needs of colleges, universities, healthcare institutions, technology and research organizations and governmental agencies as they foster integrity and professional advancement of their learners. That’s an interesting cross-section of folks who you support. Can you describe a little more what CITI does?
Bharat Krishna: Sure. So the origin actually goes back about 20 plus years, and the history of why the program is relevant goes much further back. If you trace back the foundation of CITI Program, it’s a very utilitarian program, and that was its genesis. It really starts with Second World War medical ethics and the abuse of human subjects during the war, when there weren’t really any international conventions on how to treat patients, et cetera, during experimentation. And the US also has a long history of not necessarily conducting ethical research. There are certainly the Tuskegee experiments and other examples in the past.
Mike Palmer: Milgram.
Bharat Krishna: Exactly.
Mike Palmer: Martin Seligman kind of turned his act around; it’s now about flourishing, but his dogs weren’t exactly flourishing when he learned about learned helplessness.
Bharat Krishna: So I mean, as a species we’ve come a long way in understanding the rest of the world around us. We are all curious by nature, and research is a very important human endeavor, and there weren’t necessarily guardrails on what is good research and what is ethical research. A lot of that started to really get spearheaded with a report called the Belmont Report a few decades ago, post-World War, and all that translated to, really, US leadership, because the US as an entity is one of the largest funders of primary research around the world. And they started discussing and coming up with a pretty good and robust training and regulation framework. It’s not very rigid, but there’s a lot of training and institutional reviews, et cetera, that are mandated by the federal government when it comes to research, particularly when it uses federal government dollars. But also in biomedical research, even if it doesn’t include federal government dollars and it’s private research, there are still lots of guardrails and rules and regulations on how and when you can test with human subjects.
And so CITI Program really started, as I jokingly state, by training researchers and staff on how to protect human beings from other human beings conducting research. The program really expanded over the next couple of decades into protecting animals from human research. And so we have a lot of content on ethical animal research training and on what are called IACUCs, which are like the institutional review boards but for research conducted with animals, and training members of that ethics board at institutions. And of late, we really went further afield and have developed a lot of content on safety, biosafety, bioterrorism, dual use research, where the research might be fundamentally targeting a certain type of pathogen, so to speak, but could also be used for bioterrorism. So there are rules and regulations on how those are handled.
And of late we have evolved to train people on how to protect humans from technology. And I think that’s really where the program is going. It’s very exciting both to see the possibilities as well as the ethical quandaries and issues that we as a society face as we look forward to so many new emerging technologies, which can do both a great deal of good and a great deal of harm. That’s where we sit. We’re an online training provider, but we are niche and we serve a very unique space at the intersection of research, ethics, compliance, regulations, and safety. A lot of institutions use CITI Program to train their researchers, staff, undergraduate students, doctors, nurses, et cetera. They are in hospitals, they’re in academic medical centers, they’re in the higher education industry around the world, mainly in the US but certainly around the world, and also in biotech, pharma, and increasingly in technology companies, especially as we enter into some of these tech ethics type issues.
Mike Palmer: Yeah, it’s a fascinating space. It is both a learning company, you’re focused on education, but then at the same time, you’re also forced by the nature of your work to stay current on emerging trends. To that end, medical research was really profoundly transformed through the pandemic years and now technical research and understanding technical ethics is being transformed by the recent wave of generative AI technology that’s become very front and center in terms of the topics we talk about on this show. I’d love to get your thoughts maybe on both of those things, beginning perhaps with the pandemic.
I am officially of a mind that the pandemic is over now. I was a slow adopter. I was living in my bunker here in Brooklyn and being very conservative about my time out amongst other humans, but we were both at a conference in San Diego, the ASU+GSV Conference, and I think I was cured by just immersion into a sea of humanity, still feeling okay. So I think we can safely say that the previous pandemic frame is kind of in our rear view mirror. We know these things can come back and I know it’s something that folks need to stay on top of. How do we do this research? Where’s the innovation happening? How do we do it ethically? Any thoughts on the role CITI has played and some of the things you’ve seen really around healthcare and medical research? And then from there, I’m certainly going to want to dive into the AI side of things.
Bharat Krishna: Oh, sure. And I hope that you are absolutely right that it’s in our rear view mirror. And I think as a society, we’ve learned so many things, good and bad, coming out of these past few years during the pandemic, and I think we’ll probably be living with it for several years to come in terms of the repercussions of how we saw or adapted to certain things. For me, joining the CITI Program just a year and a half or so before the pandemic, it almost felt like a Forrest Gump moment, because we were here at the intersection of a pandemic where there was a pathogen spreading around. There was private-public partnership to try and solve it in a very quick fashion. There were a whole bunch of ethical issues around the speed of development of vaccines or other cures. There was a lot of research being put out there with possibly not as much peer review or accuracy. There were lots of medical claims, there was a lot of fraud, there was a lot of social media frenzy around cures, et cetera, et cetera.
We stuck to our approach. We’re not conducting research ourselves; we are here in a sort of meta framework. We help research institutions, be it with sponsored research from private entities or with publicly funded research. We’re here to share best practices. We’re here to agnostically help institutions stay out of the headlines when it comes to medical malpractice or bad research. And there are many of those stories that come out, and sometimes they come attached to multimillion dollar fines and loss of reputation and destroyed careers. We’re here to take the long range view, and that’s exactly what we did during the pandemic. Our team was excellent. They buckled down, and first of all, we were thankfully already semi remote, so we could go fully remote relatively quickly.
We partnered with the Association of American Medical Colleges, the AAMC, and we generated relatively quickly a COVID training program for colleges and universities on how to reopen campuses. And with "collaborative" and "institutional" being part of our acronym here, CITI Program was founded by experts from multiple institutions collaborating on creating content. So we don’t sit in a bubble creating content. Most of our member institutions have excellent experts, and we provide a platform and an editorial process, and we run an academic journal, so we have a peer review process as well. So we have hundreds of folks and experts from around the country who are very willing to help CITI Program, because it’s a way in which they can give back to the rest of their community of peer institutions.
So we had several universities, hospitals, big name entities with contributing authors who came, and we actually developed a series of content on how to participate in vaccine research, how to reopen campuses, how to redesign labs such that you could have social distancing, et cetera. We did pivot very quickly and provided totally free content for our subscriber base related to the pandemic and helped in whatever way we could. And we had hundreds of thousands of learners who actually trained with CITI Program on some of the pandemic related content, and we were really serving that purpose of sharing content from one institution to another in the peer group that we have in our subscriber base.
So that was great, but stepping back from the pandemic, I think it really made us realize how important our role is in making sure research is ethical, that taxpayer dollars aren’t wasted on federally funded research, and that sponsored research can stand behind the results it publishes. And I think CITI Program and our volunteers, our author networks, our peer review networks, and our subscribing institutions all play a big part in making sure we as a community train our institutional review board members, train the peers, train our faculty, staff, and students who are engaged in research, such that we all follow some basic guidelines as well as good guardrails on what constitutes good research. And I think the pandemic really was a wake up call for the research industry and for CITI Program about the role that we play in all this.
Mike Palmer: The power of online learning certainly was put front and center in the early days of the pandemic. CITI Program, citiprogram.org, is the website if folks do want to check out what’s going on there; you can see a lot of what Bharat is talking about. In social media and in the media writ large, there were a lot of problems around misinformation, and a lot of fact checking organizations and groups have emerged to validate claims and steer clear of some of the dangers of all that. You and your team are fulfilling a similar mission when it comes to the research infrastructure. That’s really getting through the pandemic phase; lots of lessons learned there. The next phase that we’re in now, it’s like out of the frying pan into the fire: out of the pandemic, into the AI renaissance that really kicked off last November with the launch of ChatGPT. This is a place where technical ethics, in addition to medical ethics, is something that is becoming much more front of mind for a lot of us.
So much so that there was an important letter that was issued recently requesting a moratorium on the development of generative AI tools. It’s a bit of an arms race now amongst all the different big technology firms to have their own large language model that is out there in the world around us. As someone who’s trying to make sure there’s ethical research being done about this, that there’s some thoughtful design around protecting humans and understanding how to advance science and research while also protecting humanity, it’s a pretty heady set of mission statements there for us to navigate. But I’d be curious, sitting where you do at CITI, how are you thinking about the large language models? I know you and I have even chatted about it. It is something that is certainly captivating all of our imaginations, but how do we roll this stuff out safely? How do we build the right guardrails around this because the stakes are increasingly high?
Bharat Krishna: Yeah, I mean, talk about a problem to have. The weight on society’s shoulders has expanded quite a bit, and there are so many implications; we won’t touch on all of them for sure. And I think CITI Program certainly can’t solve and be front and center for all of that. We want to stay focused on where we are probably best suited, both as an institution with a subscriber base and a learner following where we can have impact, and where our experts have the expertise, where we can enable that best practice sharing or guardrail formation. I mean, we are suddenly, as a society, very much in the guardrail formation phase, and in many ways we don’t even know where the guardrails need to be or where the boundaries might be exploited. But it’s really fascinating. It’s a great question, Mike. We’ll probably discuss this over several conversations as it evolves. This is going to evolve over the next several years.
I think AI has brought so many new possibilities and challenges all at once that we are going to be grappling with it for a while. On the higher ed side, and generally the online education side, suddenly there’s a whole thing around the integrity of what students submit, the originality of their work, because we train researchers on plagiarism and responsible research. When you publish a paper, are you being original? Is your data reproducible? Are your results accurate? Can a peer reviewer test them? Those are some basics of research that we train our researchers on.
With this whole generative AI landscape, we are now [inaudible 00:16:26] into academic integrity issues, which we typically didn’t play much of a role in in the past, but we are starting to get into that area, talking to experts and trying to understand. Some institutions are banning it on their campuses; other institutions are embracing it and trying to figure out ways around it. So which model is better for us? And I don’t know if ignoring the word processor or the calculator when it first came out would’ve been the right way to go. But at the same time, we as a society figured out where to allow open books and where to allow calculators and where not to, so that depending on the learning outcome being desired, we could have different guidelines.
Mike Palmer: It’s an interesting contrast to the pandemic, where the public health issues trumped everything. Even though there’s been give and take around policies, it was pretty clear there was a path to ensuring the health and safety of people. Whereas now, your point about guardrails is a good one: what is the right set of practices? It’s difficult to really coalesce around best practices when it’s such a moving target.
Bharat Krishna: And I did have the luck of listening in on Sam Altman’s brief speech at ASU+GSV when we were there together. And he made the point that suddenly there are two camps. There’s a camp that says, let’s not put generative AI in front of people; let’s make sure everything is perfect first, and you’ve sorted out all the different criteria of where it could go wrong, so you have all the guardrails and all your insurance policies in place before you turn on the switch. And then there’s his view, which is, we may not even know what the policies should be unless we turn it on. And I think we need to turn it on in a relatively safe manner where there’s appropriate disclosure, and we need to let people push the boundaries so that we actually know where the boundaries are that might cause us problems.
So I do think that the whole of humanity might be in beta test mode for generative AI. It’s showing up in so many different spaces. It’s showing up in the originality of published work for publishers of online journals. Just to give you an example of something that’s under debate: if you co-author a scientific paper and you submit it for publication, can you list a generative AI as a co-author? These are the kinds of questions, and different journals are taking different approaches to them. And a lot of these issues are months and weeks old, not years old. So it’s going to be interesting.
Mike Palmer: And then at the same time, the technology will continue to evolve. So even if we establish the right best practices around GPT-3, by the time those are really solid, we’ll be on GPT-6 and there’ll be new ethical ramifications about the use of all this stuff. And it is a bit of a double-edged sword, which I imagine you understand very well in your role, where on the one hand we want to continue to move science forward, but at the same time we want to do so in a way that is measured and informed and adopts the practices that you’re talking about. I also know that you produce a number of podcasts, and this is a podcast, so I wanted to make sure we got a chance for you to plug your pods.
Bharat Krishna: Podcasts, we’re relatively nascent in podcasting. We have to shout out to you, Mike, for helping coach us as we developed our podcast series here. We have three really interesting podcasts. The first is On Campus, hosted by Darren Gaddis. He speaks to several experts on higher education and campus related issues. Let’s say there’s a change in the Title IX regulation coming up, or there’s a big debate on Title IX regulation changes. We’d speak to an expert on one of the campuses and share ideas and thoughts on where things are going and trends in that particular topic.
We certainly talk about diversity, equity, inclusion, and some of the other campus related issues that are coming up. We’re obviously not a very political organization; we’re here to help people with ethics and regulations no matter what the politics of the space might be. But we do cater to our institutions, and we have institutions from around the country that use the CITI Program. So we try to keep our topics where they’re really sharing from different perspectives and campuses, which exist in different contexts, geographically and socially, and try to be a platform where we can share those ideas on key topics related to campus issues.
The second podcast series is on research, and we try to focus on emerging topic areas that are important in research, the meta research area. So certainly AI and human subjects research, for example, is a topic on a recent episode. How should institutional review boards or principal investigators conducting research with an AI component think about what has evolved over decades as human subjects research rules, regulations, and guidelines? Obviously those rules, regulations, and guidelines weren’t written or conceived with AI in mind, and so the entire ecosystem is now trying to adapt to that. So we try to talk to experts who are on the front lines. They might actually be on institutional review boards reviewing these protocols and have faced these issues before others have. And so we try to bring their conversations and thoughts to the general community through that podcast series.
And then a third one we have is on tech and ethics. And there we are really focused on topics like facial recognition, robotic medicine, the use of CRISPR, generative AI, really interesting topics. Again, topics where CITI Program has training material and webinars, et cetera. And our podcast series is just another way to reach our audience, stay ahead of the game in terms of what might be going on, and allow our institutions to really share their thoughts amongst the experts.
Mike Palmer: And it’s nice to have conversations curated and vetted through your network to be able to pull y’all forward and have your conversation engage with the experts who you work with on a day-to-day basis. It is certainly an interesting time. I’d like to get a little bit more of your perspective on maybe lessons learned so far in your role. What insights and perspectives do you have today? You mentioned some of your real trial by fire through the crucible of the pandemic. That certainly is an area where you’ve learned some new things, but what are some perspectives or some insights perhaps you could share for folks who are not as deep into the space that you’re talking about? One thing that I’m really struck by is the increasing prominence of ethics as an area of focus. As we think about skills disruption and what will need to continue to be human, that is a space that will both continue to be primarily human led and will continue to increase in its relevance.
I’d be curious if you have thoughts on that or if there are other things that from your vantage point you’ve really become more deeply aware of that you think our listeners would benefit from hearing.
Bharat Krishna: I’ll try. I’m not sure I’m going to say anything earth-shattering here. I think this might be a little bit common wisdom, but I do think that one thing all institutions should really recognize is increasingly people want to work where there’s purpose. And purpose is quite an important part of any organization. And I think we learned that during COVID with our institution and our team, and certainly looking at our subscribing organizations, et cetera, people are really driven by purpose. They want to know that we are doing good work and that any organization isn’t sort of net negative to society, but net positive to society in whatever way they serve it, whether it’s financial results or economic results or development results or health results. And I do think that institutions, whether you are required to or not, should be training and considering issues and sharing knowledge of issues amongst their different groups of skilled individuals who may be very highly skilled in certain areas.
But I do think anyone who’s, for instance, going into data science should have data science ethics training. And if they’re doing anything with regard to healthcare, they should be doing some healthcare data privacy training so that they understand some of the case studies, not just what the rules and regulations are, but why they exist. Why do we have certain rules and regulations? What happened in the past? We’ve got to learn from our history on non-consensual use of data or DNA. What were the repercussions in some of the case studies when data was leaked or when X, Y, Z event happened?
As a society we will rely more on AI, and let’s hope that this is a technology where the rising tide lifts all careers. If we look at it positively that way, and AI is going to lift all careers, everybody is going to be working on higher value, higher order things. And I think understanding where things could go wrong, where things have gone wrong in the past, and why ethics matter in different professions is going to be important. AI itself is a field rife with ethics issues that we have to get right, around access and equity, making sure the data sets that are training the AI are fair and not regurgitating past biases. So I do think as a society we are at a cusp where we have lots of decisions to make. We have become so powerful as a species with the technologies in our hands that what we do with them is very important. And I think it means that everybody should have a little bit of history and understanding of ethics in whatever field they’re working in.
Mike Palmer: Yeah, I like the connection between the need for purpose, which may be common wisdom, but then connecting that to developing skills around ethics and some of the higher level thinking. If you go back to even classical philosophy, really across cultures, there has been a very foundational component to what it means to be human and what it means to be in education that involves some foundational ethical components. It’s almost as if we’d lost sight of that until this existential awakening. Eric Schmidt from Google was mentioning that the advent of these generative tools has been so profound. He was saying it’s the biggest awakening since the Enlightenment, where many more of us in casual conversation are talking about what it means to be human. How do we ensure that humanity continues to grow and thrive? And then how do we steer clear of some of the more dystopian scenarios that might be on the horizon?
It does remind me of a quick book recommendation, a book that I’m in the middle of. It’s called AI 2041. It’s ten different case studies by a couple of ex-Google employees. One is now a science fiction writer; the other is more of an AI expert. They create ten different scenarios, ten different stories, telling each from the perspective of a short story, and then they conclude with some explication of the topics and themes and implications. And interestingly, a lot of that does come back to how we think about the ethics of this, how we think about designing in a thoughtful way, where if you forecast far enough ahead and you understand some of the risk and opportunity, it can better inform some of the things that you can do today.
Bharat Krishna: That sounds fascinating, but I do want to go back and connect to one point that you made a little earlier, which is finding a balance. Being in responsible research training and good clinical practice training, et cetera, we certainly don’t want to ignore the fact that speed matters in innovation. We want to make sure that science does progress. As much as we could all sit back and think AI is going to cause a giant episode of Black Mirror, it’s got incredible power to solve diseases, and things that we probably didn’t contemplate being able to solve are hopefully within reach. We don’t want to lose track of that. So I do think that as a society we’ve got to find the balance, because the speed of innovation and federally funded research in the US has really helped this country become one of the top research producers in the world.
And I think there’s a lot of anxiety as well about falling behind, and about making sure research is well funded and that research produces output. And that’s an important balancing act between the need for speed, innovation, putting products out in the market, and getting research out there, while being careful not to put something out too early, too fast, or too dangerous. And I think that’s a balance we’ve got to learn to strike in increasingly complex ways going forward.
Mike Palmer: It reminds me both of the success of Operation Warp Speed in response to the pandemic, and then building on the AI innovation component as it relates to healthcare, the breakthroughs in protein folding that really have been driven by the research apparatus. Nobody wants to be betting don’t pass at the craps table. We’re all rooting for the little bit of upside, hopefully, although a little bit of measured concern here where appropriate. It’s been an amazing conversation so far, Bharat, we’re getting closer to our conclusion. You’re someone who ever since I’ve known you, you’ve always been somewhat thoughtful about what’s on the horizon. I remember you talking to me about robotics education when your kids were younger. So as a parent, are there any thoughts you have more from a personal level where the world is heading, any trends you’re noticing, any things you think our listeners would benefit from hearing?
Bharat Krishna: I think the parenting podcast is its own episode, but I’m as much a student as I am a teacher. And certainly that’s an area where, shout out to all the parents out there going through the pandemic and seeing their kids evolve from there and build resilience, et cetera. I think, again, that’s also been quite a learning curve for a lot of parents, and we’ve certainly been there on that part of the journey. I think with all the AI and generative AI, we as a society tend to think, hey, look, we’re at this juncture where this is fire, this is bigger than fire and bigger than the wheel, et cetera. But I’m of the mindset that there are actually bigger things down the road.
I’m reading this book called An Immense World by Ed Yong. It’s a book about how animal senses reveal the hidden realms around them. We humans go based on vision and audio; when we talk about online education, when humans talk about communication, we’re very biased toward the audio and visual spectrum. We don’t have to look for alien life; there’s life on this planet that has such different capabilities in terms of sensing danger and sensing opportunity through chemicals, touch, heat, expanded vision. I mean, there’s …
Mike Palmer: Smell. Don’t get me started on the olfactories. Forget about it.
Bharat Krishna: Oh, I mean this is actually a good example. There's been some research where dogs could sniff out certain pathogens, et cetera. And I believe that's an interesting example of that as well. So I think when we talk about generative AI and we talk about AI chatbots, we're really talking about an anthropomorphic evolution of intelligence here. We're imposing a human frame around language and audio and visuals. And I'd be very curious when we can start really, as a species, recognizing that there are other species that are far more tuned to their environment than we are now. Why do animals run away from a tsunami? I mean, if we could generate artificial intelligence that goes beyond mimicking, or attempting to mimic, human intelligence, I think that's probably where humanity's greater prize will lie. So I think there are bigger fires and wheels for humanity to invent, and I'm very hopeful that we'll get there.
Mike Palmer: I love it. Yeah, you're talking natural intelligences, other non-human natural intelligences. And to bring it full circle back to Black Mirror, the robot dogs episode is probably the most shudder-inducing episode there, where in that case it's more biomimicry. But if you do look at technology, where can technology take its lessons from the natural world and build those out, obviously with the appropriate guardrails and measures to keep us safe? We're approaching our conclusion here, Bharat. It's amazing to have you on. You get a refrigerator magnet with your third appearance, so hopefully this broke the seal for you in a good way and you're feeling comfortable. We'd love to have you back on again down the road. As we conclude, I always like to give guests an opportunity for some closing remarks, some summation. Any thoughts for our listeners as they head back to the rest of their lives?
Bharat Krishna: First of all, a magnet sounds great. I've got to put some effort into this and have interesting subsequent conversations. Well, I think, Mike, it's always a pleasure to talk to you, and I think it's great that we connected right after the biggest EdTech conference in the world, which just concluded in San Diego. It was great to see such an oversubscribed crowd that showed up there after years of not being able to meet face-to-face. There's a yearning for human connection, and I think that was apparent at that conference. There are so many interesting trends going on in education and EdTech and in research. There's also so much shuffling going on in geopolitics, and the importance of research is paramount here. We live in interesting times. I think there's a lot of opportunity, and there are lots of problems that we can solve.
As a person who runs a niche online education platform, my 2 cents here would be: not everything needs to be a solved problem for everybody. One of the things coming out of ASU+GSV is seeing the number of flowers blooming in this [inaudible 00:34:23] field. We happened to be in San Diego at the same time that California had all these wild blooms. It's really a "may a thousand flowers bloom" kind of thing. And I think education is one of those industries, if I may call it an industry, where we've always had thousands of universities, thousands of schools, thousands of institutions trying to solve unique parts of problems within probably the most important endeavor of what humans do, which is teach and learn. And I think we don't need a couple of unicorns, so to speak, in the business parlance, to take over EdTech and solve all the problems.
So I'm really interested in finding out more and learning more about the problems that lots of institutions are out there to solve, niche players, smaller players, education entrepreneurs with ideas to solve specific problems. It's very interesting to see how they tackle those problems. And I think we really live in interesting times with great opportunities, and I look forward to coming back and talking about other opportunities, problems, et cetera.
Mike Palmer: Great stuff here with Bharat Krishna, the Managing Director of CITI Program. Bharat, thanks again for joining me on today’s show.
Bharat Krishna: Wonderful. Thank you so much for having me, Mike.
Mike Palmer: And for our listeners, hopefully you enjoyed what you heard. If you did, please subscribe, tell your friends, write a review, do all the good things. We’ll be back again soon. This is Trending in Education.