Automating Scientific Research with AI | Bullaki Science Podcast Clips with Timothy Grayson
BULLAKI

Published on Jan 28, 2021

How do you think AI or advanced AI, super AI, will help scientific research in the future? Do you think we’re going to see more symbiosis between AI and scientists, maybe for things like repetitive tasks?

This is the last part of our conversation with Timothy Grayson.

As the director of the Strategic Technology Office at the Defense Advanced Research Projects Agency (DARPA), Timothy leads the office in the development of breakthrough technologies that enable warfighters to field, operate, and adapt distributed, joint, multi-domain combat capabilities at continuous speed. He is also founder and president of Fortitude Mission Research LLC and spent several years as a senior intelligence officer with the CIA. Here he illustrates the concept of Mosaic Warfare, in which individual warfighting platforms, just like ceramic tiles in a mosaic, are placed together to make a larger picture. This philosophy can be applied to tackle a variety of human challenges, including natural disasters, disruption of supply chains, climate change, and pandemics. He also discusses why super AI won't represent an existential threat in the foreseeable future, but rather an opportunity for an effective division of labour between humans and machines (or human-machine symbiosis).

CONNECT:
- Subscribe to this YouTube channel
- Support on Patreon:   / bullaki  
- Spotify: https://open.spotify.com/show/1U2Tnvo...
- Apple Podcast: https://podcasts.apple.com/gb/podcast...
- LinkedIn:   / samuele-lilliu  
- Website: www.bullaki.com
- Minds: https://www.minds.com/bullaki/

#bullaki #science #podcast
***
TG. I would certainly think so. Just off the top of my head, two big opportunities pop out. One is exactly what you described, you know, the tedious tasks. I'll go back to my experience of doing quantum optics as a grad student. I don't know how many hours I spent in a pitch-black room twiddling with aligning mirrors and such, with a beam I could barely see. I used to think to myself, "Wouldn't it be great if I could have robotic mirror mounts that arranged and aligned themselves on the optics table automatically?" I was ready to actually go patent this at one point in time. I thought, if you had these little robotic mirror mounts, they could drive themselves around an optics table and, with a big feedback loop, align the interferometer on their own. Boy, would that be great. Well, I say that half-jokingly, but you could imagine all kinds of different types of self-configuring scientific apparatus. That's the equivalent of the fighter doing tactical maneuvers. That would free up the human researcher to think the bigger thoughts, if they weren't having to spend days upon days twiddling mirrors. So I think that's one big area.
The other big area is to help develop or transfer intuition. Again, I think we're a long way away from machines actually having intuition, but I think they can help humans with their own intuition. They can do a certain amount of transference of experience, and do it in such a way that allows researchers to explore and challenge hypotheses. The open-world problem is the challenge of coming up with a hypothesis in the first place. And again, I think that's something that's going to be the domain of humans for a long, long time to come. But once you've got a hypothesis, you know, humans are notorious for getting locked into tunnel vision: "I've got my hypothesis, now I'm going to build my experiments and do my data analysis." In some cases, not all, but in some cases, that can actually be self-confirming. I've seen AI-based tools that allow you to explore other competing hypotheses that maybe are going to be wrong. In fact, they very well might be wrong, because, again, machines aren't great at intuitive leaps. But then again, those leaps may open up a new line of thinking for the human to get out of that tunnel vision.
There's interesting research going on right now in what's called self-aware AI. It's not that the AI fully understands why it came up with something, but it can at least say, "I got to this particular outcome based upon some particular model that was provided to me as input, or some particular data set that was provided as input." Tools like that will allow researchers to say, "Oh, I'm stuck in tunnel vision. I'm actually sitting on a local maximum someplace, and the AI just provided me a pointer to another alternative hypothesis, based upon my initial input." So I think exploring the hypothesis space is another big opportunity. And maybe part of it is just from a literature-search perspective; maybe there's a set of esoteric journal articles someplace.
