
Why Do We Feel Thirst? An Interview with Yuki Oka

News Writer: Jessica Stoller-Conrad
Credit: Lance Hayashida/Caltech Marketing and Communications

To fight dehydration on a hot summer day, you instinctively crave the relief provided by a tall glass of water. But how does your brain sense the need for water, generate the sensation of thirst, and then ultimately turn that signal into a behavioral trigger that leads you to drink water? That's what Yuki Oka, a new assistant professor of biology at Caltech, wants to find out.

Oka's research focuses on how the brain and body work together to maintain a healthy ratio of salt to water, part of a delicate form of biological balance called homeostasis.

Recently, Oka came to Caltech from Columbia University. We spoke with him about his work, his interests outside of the lab, and why he's excited to be joining the faculty at Caltech.

 

Can you tell us a bit more about your research?

The goal of my research is to understand the mechanisms by which the brain and body cooperate to maintain our internal environment's stability, which is called homeostasis. I'm especially focusing on fluid homeostasis, the fundamental mechanism that regulates the balance of water and salt. When water or salt is depleted in the body, the brain generates a signal that causes either thirst or a salt craving. And that craving then drives animals to either drink water or eat something salty.

I'd like to know how our brain generates such a specific motivation simply by sensing internal state, and then how that motivation—which is really just neural activity in the brain—goes on to control the behavior.

 

Why did you choose to study thirst?

After finishing my Ph.D. in Japan, I came to Columbia University, where I worked on salt-sensing mechanisms in the mammalian taste system. We found that the peripheral taste system plays a key role in the body's salt homeostasis by regulating our salt intake behavior. But of course, the peripheral sensor does not work by itself. It requires a controller, the brain, which uses information from the sensor. So I decided to move on to explore the function of the brain, the real driver of our behaviors.

I was fascinated by thirst because the behavior it generates is very robust and stereotyped across various species. If an animal feels thirst, the behavioral output is simply to drink water. On the other hand, if the brain triggers salt appetite, then the animal specifically looks for salt—nothing else. These direct causal relations make it an ideal system to study the link between the neural circuit and the behavior.

 

You recently published a paper on this work in the journal Nature. Could you tell us about those findings?

In the paper, we linked specific neural populations in the brain to water drinking behavior. Previous work from other labs suggested that thirst may stem from a part of the brain called the hypothalamus, so we wanted to identify which groups of neurons in the hypothalamus control thirst. Using a technique called optogenetics that can manipulate neural activities with light, we found two distinct populations of neurons that control thirst in two opposite directions. When we activated one of those two populations, it evoked an intense drinking behavior even in fully water-satiated animals. In contrast, activation of a second population drastically suppressed drinking, even in highly water-deprived thirsty animals.  In other words, we could artificially create or erase the desire for drinking water.

Our findings suggest that there is an innate brain circuit that can turn an animal's water-drinking behavior on and off, and that this circuit likely functions as a center for thirst control in the mammalian brain. This work was performed with support from the Howard Hughes Medical Institute and the National Institutes of Health [to Charles S. Zuker at Columbia University, Oka's former advisor].

 

You use a mouse model to study thirst, but does this work have applications for humans?

There are many fluid homeostasis-associated conditions; one example is dehydration. Because our studies are focused on basic research, we cannot yet point to a specific, direct application for humans. But if the same mechanisms and circuits exist in mice and humans, our studies will provide important insights into human physiology and related conditions.

 

Where did you grow up—and what started your initial interest in science?

I grew up in Japan, close to Tokyo, but not really in the center of the city. It was a nice combination between the big city and nature. There was a big park close to my house and when I was a child, I went there every day and observed plants and animals. That's pretty much how I spent my childhood. My parents are not scientists—neither of them, actually. It was just my innate interest in nature that made me want to be a scientist.

 

What drew you to Caltech?

I'm really excited about the environment here and the great climate. That's actually not trivial; I think the climate really does affect the people. For example, if you compare Southern California to New York, it's just a totally different character. I came here for a visit last January, and although it was my first time at Caltech I kind of felt a bond. I hadn't even received an offer yet, but I just intuitively thought, "This is probably the place for me."

I'm also looking forward to talking to my colleagues here who use fMRI for human behavioral research. One great advantage of using human subjects in behavioral studies is that they can report back to you about how they feel. There are certainly advantages to using an animal model, like mice. But they cannot report back. We just observe their behavior and say, "They are drinking water, so they must be thirsty." But that is totally different from someone telling you, "I feel thirsty." I believe that combining the advantages of animal and human studies should allow us to address important questions about brain functions.

 

Do you have any hobbies?

I play basketball in my spare time, but my major hobby is collecting fossils. I have some trilobites and, actually, I have a complete set of bones from a type of herbivorous dinosaur. It is being shipped from New York right now and I may put it in my new office.


How To Study High-Speed Flows: An Interview With Joanna Austin

News Writer: Kimm Fesenmaier
Credit: EAS Office of Communications/Caltech

Joanna Austin (MS '98, PhD '03) does not just go with the flow. She picks it apart and analyzes it. One of the newest faculty members in Caltech's Division of Engineering and Applied Science, Austin studies the mechanics involved in compressible flows, those in which gases reach such high speeds that the density of a fluid element changes drastically. These flows come into play in problems ranging from the logistics of a spacecraft's entry into a planet's atmosphere to the hows and whys of volcanic eruptions.

To simulate such high-speed, high-temperature conditions, Austin and her colleagues use pistons and explosives in large test tunnels to compress gases. Austin had her first experience with this kind of facility while an undergraduate at the University of Queensland, in her native Australia, where the Centre for Hypersonics houses the T4 shock tunnel. Later, as a graduate student at Caltech, Austin worked in the Explosion Dynamics Laboratory. While an assistant professor at the University of Illinois at Urbana-Champaign, she built another instrument for producing very-high-speed flows, known as the hypervelocity expansion tube (HET). In August 2014, she joined Caltech's faculty as a professor of aerospace. There, she will work with the T5 reflected shock tunnel, the next generation of the T4. She is having the HET transported to campus as well, which will create a full suite of complementary facilities.

We recently spoke with Austin about these tunnels, her research, and the challenges involved in studying high-speed flows.

 

What made you decide to come back to Caltech?

What it basically comes down to is that it is such an exciting and stimulating place to be, with the faculty, the students, and the facilities that are available. And I've always loved it here, so I was really thrilled to come back.

 

Your specialty is high-speed flows. What is your focus within that area?

My group investigates flows under conditions where the molecular processes in the gas couple with the fluid mechanics, the dynamics of the flow. That covers a range of topics, including hypervelocity flows. These are flows that are associated with objects—whether manmade or naturally occurring—entering a planet's atmosphere. We have a couple of different facilities for recreating and studying these kinds of flows.

That was another thing that was really exciting about coming back to Caltech—GALCIT already had some existing, fantastic facilities like T5. With the instrument that I built and am bringing from Illinois, the HET, we will have a really unique suite of experimental facilities.

 

Can you talk more about the T5 and the HET? What do they do?

Essentially what they do is produce a test gas that realistically simulates the flow over an object as it's entering an atmosphere. So if you're interested, for example, in a Martian mission, we can make a model of a particular spacecraft configuration, place it in one of these two facilities, and then accelerate the gas to replicate the conditions that the model would actually experience during atmospheric entry.

Then we can make measurements and use various models to understand what happened under those conditions with regard to quantities such as heat flux, which is obviously critical to the survival of the vehicle. But we have a window of one millisecond or less in which to make all of the measurements we need.

 

What other kinds of studies do you conduct?

Another type of experiment we do involves probing the molecules themselves in a flow. So we can nonintrusively determine which molecular species are in the flow and what temperature they are at, and in that way we can inform models of the way those molecules interact.

 

Earlier, you provided the example of a Martian mission. What work are you doing in that field?

For some of the larger vehicles that are being discussed for future Mars missions, we need to have much better models for predicting the heat flux that they will experience. For a smaller-scale vehicle, you might be able to get away with using a more protective, and therefore heavier, heat shield than you actually need. But with a larger vehicle, the mass penalty that you would pay for such a safety factor would be prohibitive. So we need to have better predictive models.

We've started working on that. I think our next step will be applying spectroscopic techniques to actually probe the molecules.

 

And back on Earth, what kinds of phenomena are you investigating?

We have some projects looking at bubble dynamics and the processes involved when bubbles or arrays of bubbles collapse. These come into play in various medical procedures where pulses generated by lasers selectively remove tissue but can also damage the surrounding tissue and cells.

We've also been looking at explosive volcanic eruptions. Most recently we've been interested in what happens if you send an explosive jet over different topographies, such as the side of Mount St. Helens. With 3-D printing (it's really fun), we can make physical models of the different topographies we want to test and then run the experiment over the actual geometries.

 

Is there a topic within the field that most excites you?

I guess it's the umbrella topic of gas dynamics, and particularly looking at gas dynamics in reacting flows. That's the thing I really love. It's a very challenging, coupled problem. As the fluid is going through the model that you're studying, you also have to account for the fact that the state of the fluid is changing—the gas is chemically reacting, so it's changing from reactants to products, or it's redistributing its energy states, or both. Understanding how best to model these processes, that's what excites me.

The Planet Finder: A Conversation with Dimitri Mawet

News Writer: Douglas Smith
Associate Professor of Astronomy Dimitri Mawet
Credit: Lance Hayashida/Caltech

Associate Professor of Astronomy Dimitri Mawet has joined Caltech from the Paranal Observatory in Chile, where he was a staff astronomer for the Very Large Telescope. After earning his PhD at the University of Liège, Belgium, in 2006, he was at JPL from 2007 to 2011—first as a NASA postdoctoral scholar and then as a research scientist.

 

Q: What do you do?

A: I study exoplanets, which are planets orbiting other stars. In particular, I'm developing technologies to view exoplanets directly and analyze their atmospheres. We're hunting for small, Earth-like planets where life might exist—in other words, planets that get just the right amount of heat to maintain water in its liquid state—but we're not there yet. For an exoplanet to be imaged right now, it has to be really big and really bright, which means it's very hot.

In order to be seen in the glare of its star, the planet has to be beyond a minimum angular separation called the inner working angle. Separations can also be expressed in astronomical units, or AUs, where one AU is the mean distance between the sun and Earth. Right now we can get down to about two AU—but only for giant planets. For example, we recently imaged Beta Pictoris and HR 8799. We didn't find anything at two AU in either star system, but we found that Beta Pictoris harbors a planet about eight times more massive than Jupiter orbiting at 9 AU. And we see a family of four planets in the five- to seven-Jupiter-mass range that orbit from 14 to 68 AU around HR 8799. For comparison, Saturn is 9.5 AU from the sun, and Neptune is 30 AU.
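
To make these separations concrete: by the small-angle approximation, a planet 1 AU from its star, viewed from 1 parsec away, subtends an angle of 1 arcsecond on the sky. Here is a minimal sketch of the conversion; the round-number 19.4 pc distance to Beta Pictoris is an assumption for illustration, not a figure from the interview:

```python
# On-sky angle via the small-angle approximation:
# theta [arcsec] = separation [AU] / distance [pc].

def angular_separation_arcsec(sep_au: float, dist_pc: float) -> float:
    """Angle subtended by a planet sep_au from a star dist_pc away."""
    return sep_au / dist_pc

# Beta Pictoris b at ~9 AU, assuming the star lies ~19.4 pc away:
print(angular_separation_arcsec(9.0, 19.4))   # ~0.46 arcsec
# The ~2 AU detection limit quoted above, at that same distance:
print(angular_separation_arcsec(2.0, 19.4))   # ~0.10 arcsec
```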

 

Q: How can we narrow the working angle?

A: You either build an interferometer, which blends the light from two or more telescopes and "nulls out" the star, or you build a coronagraph, which blots out the star's light. Most coronagraphs block the star's image by putting a physical mask in the optical path. The laws of physics say their inner working angles can't be less than the so-called diffraction limit, and most coronagraphs work at three to five times that. However, when I was a grad student, I invented a coronagraph that works at the diffraction limit.
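
To see what working "at the diffraction limit" buys you, note that the limit scales as the observing wavelength divided by the telescope aperture. A short sketch under assumed values (a 10 m aperture observing at 2.2 microns; neither figure is from the interview):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206,265 arcsec per radian

def diffraction_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Smallest resolvable angle, ~lambda/D, converted to arcseconds."""
    return wavelength_m / aperture_m * RAD_TO_ARCSEC

lam_over_d = diffraction_limit_arcsec(2.2e-6, 10.0)
print(f"1 lambda/D:   {lam_over_d:.3f} arcsec")  # ~0.045 (vortex coronagraph)
print(f"3-5 lambda/D: {3 * lam_over_d:.3f} to {5 * lam_over_d:.3f} arcsec")  # conventional masks
```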

The key is that we don't use a physical mask. Instead, we create an "optical vortex" that expels the star's light from the instrument. Some of our vortex masks are made from liquid-crystal polymers, similar to your smartphone's display, except that the molecules are "frozen" into orientations that force light waves passing through the center of the mask to emerge in different phase states simultaneously. This is not something nature allows, so the light's energy is nulled out, creating a "dark hole."

If we point the telescope so the star's image lands exactly on the vortex, its light will be filtered out, but any light that's not perfectly centered on the vortex—such as light from the planets, or from a dust disk around the star—will be slightly off-axis and will go on through to the detector.

We're also pushing to overcome the enormous contrast ratio between the very bright star and the much dimmer planet. Getting down to the Earth-like regime requires a contrast ratio of 10 billion to 1, which is really huge. The best contrast ratios achieved on ground-based telescopes today are more like 1,000,000 to 1. So we need to pump it up by another factor of 10,000.

Even so, we can do a lot of comparative exoplanetology, studying any and all kinds of planets in as many star systems as we can. The variety of objects around other stars—and within our own solar system—is mind-boggling. We are discovering totally unexpected things.

 

Q: Such as?

A: Twenty years ago, people were surprised to discover hot Jupiters, which are huge, gaseous planets that orbit extremely close to their stars—as close as 0.04 AU, or one-tenth the distance between the sun and Mercury. We have nothing like them in our solar system. They were discovered indirectly, by the wobble they imparted to their star or the dimming of their star's light as the planet passed across the line of sight. But now, with high-contrast imaging, we can actually see—directly—systems of equally massive planets that orbit tens or even hundreds of AU away from their stars, which is baffling.

Planets form within circumstellar disks of dust and gas, but these disks get very tenuous as you go farther from the star. So how did these planets form? One hypothesis is that they formed where we see them, and thus represent failed attempts to become multiple star systems. Another hypothesis is that they formed close to the star, where the disk is more massive, and were eventually scattered outward by gravitational interactions with one another.

We're trying to answer that question by starting at the outskirts of these planetary systems, looking for massive, hot planets in the early stages of formation, and then grinding our way into the inner reaches of older planetary systems as we learn to reduce the working angle and deal with ever more daunting contrast ratios. Eventually, we will be able to trace the complete history of planetary formation.

 

Q: How can you figure out the history?

A: Once we see the planet, once we have its signal in our hands, so to speak, we can do all kinds of very cool measurements. We can measure its position, which is called astrometry; we can measure its brightness, which is photometry; and, if we have enough signal, we can sort the light into its wavelengths and do spectroscopy.

As you repeat the astrometry measurements over time, you resolve the planet's orbit by following its motion around its star. You can work out masses, calculate the system's stability. If you add the time axis to spectrophotometry, you can begin to track atmospheric features and measure the planet's rotation, which is even more amazing.

Soon we'll be able to do what we call Doppler imaging, which will allow us to actually map the surface of the planet. We'll be able to resolve planetary weather phenomena. That's already been done for brown dwarfs, which are easier to observe than exoplanets. The next generation of adaptive optics on really big telescopes like the Thirty Meter Telescope should get us down to planetary-mass objects.

That's why I'm so excited about high-contrast imaging, even though it's so very, very hard to do. Most of what we know about exoplanets has been inferred. Direct imaging will tell us so much more about exoplanets—what they are made out of and how they form, evolve, and interact with their surroundings.

 

Q: Growing up, did you always want to be an astronomer?

A: No. I wanted to get into space sciences—rockets, satellite testing, things like that. I grew up in Belgium and studied engineering at the University of Liège, which runs the European Space Agency's biggest testing facility, the Space Center of Liège. I had planned to do my master's thesis there, but there were no openings the year I got my diploma.

I was not considering a thesis in astronomy, but I nevertheless went back to campus, to the astrophysics department. I knew some of the professors because I had taken courses with them. One of them, Jean Surdej, suggested that I work on a concept called the Four-Quadrant Phase-Mask (FQPM) coronagraph, which had been invented by French astronomer Daniel Rouan. I had been a bit hopeless, thinking I would not find a project I would like, but Surdej changed my life that day.

The FQPM was one of the first coronagraphs designed for very-small-working-angle imaging of extrasolar planets. These devices performed well in the lab, but had not yet been adapted for use on telescopes. Jean, and later on Daniel, asked me to help build two FQPMs—one for the "planet finder" on the European Southern Observatory's Very Large Telescope, or VLT, in Chile; and one for the Mid-Infrared Instrument that will fly on the James Webb Space Telescope, which is being built as the successor to the Hubble Space Telescope.

I spent many hours in Liège's Hololab, their holographic laboratory, playing with photoresists and lasers. It really forged my sense of what the technology could do. And along the way, I came up with the idea for the optical vortex.

Then I went to JPL as a NASA postdoc with Eugene Serabyn. I still spent my time in the lab, but now I was testing things in the High Contrast Imaging Testbed, which is the ultimate facility anywhere in the world for testing coronagraphs. It has a vacuum tank, six feet in diameter and eight feet long, and inside the tank is an optical table with a state-of-the-art deformable mirror. I got a few bruises crawling around in the tank setting up the vortex masks and installing and aligning the optics.

The first vortex coronagraph actually used on the night sky was the one we installed on the 200-inch Hale Telescope down at Palomar Observatory. The Hale's adaptive optics enabled us to image the planets around HR 8799, as well as brown dwarfs, circumstellar disks, and binary star systems. That was a fantastic and fun learning experience.

So I developed my physics and manufacturing intuition in Liège, my experimental and observational skills at JPL, and then I went to Paranal where I actually applied my research. I spent about 400 nights observing at the VLT; I installed two new vortex coronagraphs with my Liège collaborators; and I became the instrument scientist for SPHERE, to which I had contributed 10 years before when it was called the planet finder. And I learned how a major observatory operates—the ins and outs of scheduling, and all the vital jobs that are performed by huge teams of engineers. They far outnumber the astronomers, and nothing would function without them.

And now I am super excited to be here. Caltech and JPL have so many divisions and departments and satellites—like Caltech's Division of Physics, Mathematics and Astronomy and JPL's Science Division, both my new professional homes, but also Caltech's Division of Geology and Planetary Sciences, the NASA Exoplanet Science Institute, the Infrared Processing and Analysis Center, etc. We are well-connected to the University of California. There are so many bridges to build between all these places, and synergies to benefit from. This is really a central place for innovation. I think, for me, that this is definitely the center of the world.

Getting the Lead Out

News Writer: Douglas Smith
In his sleuthing for trace amounts of lead, Caltech geochemist Clair Patterson redistills a reagent in this 1957 photograph. He didn’t trust the purity of commercial chemicals.
Credit: Caltech E&S Magazine, Volume 60, Number 1, 1997

Caltech geochemist Clair Patterson (1922–1995) helped galvanize the environmental movement 50 years ago when he announced that highly toxic lead could be found essentially everywhere on Earth, including in our own bodies—and that very little of it was due to natural causes.

In a paper published in the September 1965 issue of Archives of Environmental Health, Patterson challenged the prevailing belief that industrial and natural sources contributed roughly equal amounts of ingestible lead, and that the aggregate level we absorbed was safe. Instead, he wrote, "A new approach to this matter suggests that the average resident of the United States is being subjected to severe chronic lead insult." He estimated that our "lead burden" was as much as 100 times that of our preindustrial ancestors—often to just below the threshold of acute toxicity.

Lead poisoning was known to the ancients. Vitruvius, designer of aqueducts for Julius Caesar, wrote in Book VIII of De Architectura that "water is much more wholesome from earthenware pipes than from lead pipes . . . [water] seems to be made injurious by lead." Lead accumulates in the body, where it can have profound effects on the central nervous system. Children exposed to high lead levels often acquire permanent learning disabilities and behavioral disorders.

When Patterson arrived at Caltech as a research fellow in geochemistry in 1952, he was looking not to save the world but to figure out how old it was. Doing so required him to measure the precise amounts of various isotopes of uranium and lead. (Isotopes are atoms of the same element that contain different numbers of neutrons in their nuclei.) Uranium-238 decays very, very slowly into lead-206, while uranium-235 decays less slowly into lead-207. Both rates are well known, so measuring the ratios of lead atoms to uranium ones shows how much uranium has disappeared and allows the sample's age to be calculated.
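
For readers who want the arithmetic behind that clock: if a parent isotope decays at a known constant rate, the age follows from the measured ratio of daughter atoms to surviving parent atoms as t = ln(1 + D/P) divided by the decay constant. A minimal sketch using the well-established U-238 decay constant; the lead-to-uranium ratio below is a hypothetical measurement chosen to land near Patterson's answer:

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of U-238, per year

def age_years(pb206_per_u238: float) -> float:
    """Age implied by the ratio of radiogenic Pb-206 to remaining U-238."""
    return math.log(1.0 + pb206_per_u238) / LAMBDA_U238

# A hypothetical measured ratio of ~1.03 lead atoms per uranium atom:
print(f"{age_years(1.026) / 1e9:.2f} billion years")  # ~4.55
```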

Patterson presumed that the inner solar system's rocky planets and meteorites had all coalesced at the same time, and that the meteorites had survived essentially unchanged ever since. Using an instrument called a mass spectrometer and working in a clean room he had designed and built himself, Patterson counted the individual lead atoms in a meteorite sample recovered from Canyon Diablo near Meteor Crater, Arizona. In a landmark paper published in 1956, he established Earth's age as 4.55 billion years.

However, there are four common isotopes of lead, and Patterson had to take them all into account in his calculations. He had announced his findings at a conference in 1955, and he had continued to refine his results as the paper worked its way through the review process. But there he hit a snag—his analytical skills had become so finely honed that he was finding lead everywhere. He needed to know the source of this contamination in order to eliminate it, and he took it on himself to find out.

Patterson's 1965 Environmental Health paper summarized that work. With M. Tatsumoto of the U.S. Geological Survey, he found that the ocean off Southern California was lead-laden at the surface but that the contamination disappeared rapidly with depth. They concluded that the likely culprit was tetraethyl lead, a widespread gasoline additive that emerged from the tailpipe of automobiles as very fine lead particles. Patterson and research fellow T. J. Chow crisscrossed the Pacific aboard research vessels run by the Scripps Institution of Oceanography at UC San Diego and found the same profile of lead levels versus depth. Then, in the winter of 1962–63, Patterson and Tatsumoto collected snow at an altitude of 7,000 feet on Mount Lassen in northern California. The lead contamination there was 10 to 100 times worse than at sea. Patterson concluded that it had fallen from the skies. Its isotopic fingerprint was a perfect match for air samples from Los Angeles—located 500 miles to the south. It also matched gasoline samples obtained by Chow in San Diego. Furthermore, the isotope fingerprint was different from that of lead found in prehistoric sediments off the California coast.

"The atmosphere of the northern hemisphere contains about 1,000 times more than natural amounts of lead," Patterson wrote, and he called for the "elimination of some of the most serious sources of lead pollution such as lead alkyls [i.e., tetraethyl lead], insecticides, food can solder, water service pipes, kitchenware glazes, and paints; and a reevaluation by persons in positions of responsibility in the field of public health of their role in the matter."

Patterson's paper was his first shot in the war against lead pollution, bureaucratic inertia, and big business that he would wage for the rest of his life. He won: the Clean Air Act of 1970 authorized the development of national air-quality standards, including emission controls on cars. In 1976, the Environmental Protection Agency reported that more than 100,000 tons of lead went into gasoline every year; by 1980 that figure would be less than 50,000 tons, and the concentration of lead in the average American's blood would drop by nearly 50 percent as well. The Consumer Product Safety Commission would ban lead-based indoor house paints in 1977 (flakes containing brightly colored lead pigments often found their way into children's mouths). And in 1986, the EPA prohibited tetraethyl lead in gasoline.

Understanding Olfaction: An Interview with Elizabeth Hong

News Writer: Jessica Stoller-Conrad
Elizabeth Hong, Assistant Professor of Neuroscience, works on the two-photon laser scanning microscope in her lab.
Credit: Lance Hayashida/Caltech

You walk by a bakery, smell the scent of fresh cookies, and are immediately reminded of baking with your grandmother as a child. The seemingly simple act of learning to associate a smell with a good or bad outcome is actually quite a complicated behavior—one that can begin as a single synapse, or junction, where a signal is passed between two neurons in the brain.

Assistant Professor of Neuroscience Betty Hong is interested in how animals sense cues in their environment, process that information in the brain, and then use that information to guide behaviors. To study the processing of information from synapse to behavior, her work focuses on olfaction—or chemical sensing via smell—in fruit flies.

Hong, who received her bachelor's degree from Caltech in 2002 and her doctorate from Harvard in 2009, came from a postdoctoral position at Harvard Medical School to join the Caltech faculty in June. We spoke with her recently about her work, her life outside the laboratory, and why she is looking forward to being back at Caltech.

 

How did you initially become interested in your field?

It's rather circuitous. I was initially drawn to neuroscience because I was interested in disease. I had family who passed away from Alzheimer's disease, and it's clear that with the current demographic of our country, diseases associated with aging—like Alzheimer's—are going to have a large impact on society in the next 20 to 30 years. Working at the Children's Hospital Boston in graduate school, I also became increasingly interested in understanding the rise of neurodevelopmental disorders like autism.

I really wanted to understand the mechanistic basis for neurological disease. And then it became clear to me that part of the problem of trying to understand neurological disorders was that we really had no idea how the brain is supposed to work. If you were a mechanic who didn't know how cars work, how could you fix a broken car? That led me to study progressively more basic mechanisms of how the brain functions.

 

Why did you decide to focus your research on olfaction?

Although we humans have evolved to move away from olfaction—humans and primates are very visual—the rest of the animal kingdom relies heavily on olfaction for its daily survival and functioning. Even the lowliest microbe relies on chemical sensing to navigate its way through the environment. We study olfaction in an invertebrate model—the fruit fly Drosophila. We do that for a couple of reasons. One is that it has a very small brain, and so its circuits are very compact, and that small size and numerical simplicity lets us get a global overview of what's happening—a view that you could never get if you're looking at a big circuit, like a mouse brain or a human brain.

The other reason is that there are versatile genetic tools and new technologies that have allowed us to make high-resolution electrical and optical recordings of neural activity in the brains of fruit flies. That very significant technical hurdle had to be crossed in order to make it a worthwhile experimental model. With electrophysiological access to the brain, and genetic tools that allow you to manipulate the circuits, you can watch brain activity as it's happening and ask what happens to neural activity when you tweak the properties of the system in specific ways. And the fly also has a robust and flexible set of behaviors that you can relate to all of this. 

 

What are some of the behaviors that you are interested in studying?

We're very interested in understanding how flies can associate an odor with a pleasant or unpleasant outcome. So, in the same way that you might associate wonderful baking smells with something from your childhood, flies can learn to arbitrarily associate odors with different outcomes. And to know "when I smell this odor, I should run away," or "based on what happened to me the last time I smelled this odor, this might be an indicator of food"—that's actually a fairly sophisticated behavior that is a basic building block for more complex higher-order cognitive tasks that emerge in vertebrates.

There are many animals that are inflexibly wired. In other words, they smell something, and through evolution, their circuits have evolved to tell them to move toward it or go away from it. Even if they are in an unusual environment, they can't flexibly alter that behavior. The ability to flexibly adapt our behavior to new and unfamiliar environments was a key transition in the evolution of the nervous system.

 

You are also a Caltech alum. What drew you back as a faculty member?

Yes, it seems like such a long time ago, but I was an undergraduate here—a biology major in Page House—from 1998 to 2002. I was also a SURF student with [Professor of Biology] Bruce Hay and later with David Baltimore [president emeritus and Robert Andrews Millikan Professor of Biology]. It's kind of wild to have as your colleagues people who were your mentors a decade ago, but I think the main reason I chose Caltech was the community of scholars here—on the level of faculty, undergraduate students, graduate students, and postdocs—that I will be able to interact with. In the end, you mainly just want to be with smart, motivated people who want to use science to make a difference in the world. And I think that encapsulates what Caltech does.

 

Do you have any interests or hobbies that are outside of the lab?

I used to play horn in the wind ensemble and orchestra, including the time when I was here as an undergraduate. But these days, any time that I'm not in the office, I'm with my two young kids. Right now, we're really excited about exploring all the fun and exciting things to do outdoors in Southern California. We've done a lot of hiking and exploring the natural beauty here. The kids have gotten into fishing lately, so our latest thing has been scoping out the best places to fish. I would love to hear from members of the community what their favorite spots are!

When Harry Met Arnold

A Milestone in Chemistry
News Writer: Douglas Smith
Harry B. Gray, Caltech's Arnold O. Beckman Professor of Chemistry and founding director of the Beckman Institute.
Credit: Lance Hayashida/Caltech

On November 12 and 13, the Beckman Institute at Caltech hosted a symposium on "The Shared Legacy of Arnold Beckman and Harry Gray." The two began a close working relationship in the late 1960s, when Gray arrived at Caltech. In this interview, Gray provides some background.

How did you come to Caltech?

I grew up in southern Kentucky. I got my BS in chemistry in 1957, and my professors told me to go to grad school at Northwestern University in Evanston, Illinois, to continue my studies in synthetic organic chemistry. They didn't give me a choice. Western Kentucky College had physical chemistry, analytical chemistry, organic chemistry, and that was it.

When I got to Northwestern I met Fred Basolo, who became my mentor. He did inorganic chemistry, which I was very surprised to discover even existed as a research field. I was so excited by his work, which was studying the mechanisms of inorganic reactions, that I decided to switch fields and do what he did. I got my PhD in 1960 from work on the syntheses and reaction mechanisms of platinum, rhodium, palladium, and nickel complexes. A complex has a metal atom sitting in the middle of as many as six ions or molecules called ligands. The metal has empty orbitals that it wants to fill with paired-up electrons, and the ligands have electron pairs they aren't using, so the metal and its ligands form stable bonds.

I had gotten into chemistry in the first place because I'd always been interested in colors. Even when I was a little kid, colors fascinated me. I really wanted to understand them, and many complexes have brilliant, beautiful colors. At Northwestern I heard about crystal-field theory, which was the first attempt to explain how metal complexes got their colors. All the big shots of crystal-field theory were in Copenhagen, so I decided to go there as a postdoc. Which I did.

I soon found out that crystal-field theory didn't go far enough. It only explained the colors of a limited set of metal ions in solution, and it couldn't explain charge transfers and a lot of other things. All the atoms were treated as point charges, with no provision for the bonds between the metal and the ligands. There weren't any bonds. So I helped develop a new theory, called ligand-field theory, which put the bonds back in the complexes. Carl Ballhausen, a professor at the University of Copenhagen, and I wrote a paper on a "metal-oxo" complex in which an oxygen atom was triple-bonded to a vanadium ion. The triple bond in our theory was required to account for the blue color of the vanadium-oxo complex. We also could explain charge transfers in other oxo complexes. Bonds were back in metal complexes!

Metal-oxo bonds are very important in biology. They are crucial in a lot of reactions, such as the oxygen-producing side of photosynthesis; the metabolism of drugs by cytochrome P-450, which often leads to toxic interactions with other drugs; and respiration. When we breathe in O2, our respiratory system splits the O=O bond, forming a metal-oxo complex as a reactive intermediate on the way to the product, which is water.

My work on bonding in metal oxo complexes got me a job as an assistant professor at Columbia University in 1961. By '65 I was a full professor and getting offers from many places, including Caltech. I loved Columbia, and I would have stayed there, but the chemistry department was very small. I knew it would be hard to build inorganic chemistry in a small department that concentrated on organic and physical chemistry.

There weren't any inorganic chemists at Caltech, either, but division chair Jack Roberts encouraged me to build the field up to five or six faculty members. I came to Caltech in 1966, and we now have a very strong inorganic chemistry group.

When I got here, I started work in two new areas at the interface of inorganic chemistry and biology. I'm best known for my work showing how electrons flow through proteins in respiration and photosynthesis. I won the Wolf Prize and the Welch Prize and the National Medal of Science for this work.

I also got into inorganic photochemistry—solar-energy research. That work started well before the first energy crisis in 1973, and continued until oil became cheap again in the early 1980s and solar-energy research was no longer supported. In the late '90s, I restarted the work. Now I'm leading an NSF Center for Chemical Innovation in Solar Fuels, which has an outreach activity I proudly call the Solar Army.

And how's that going?

The Solar Army keeps growing. We now have at least 60 brigades at high schools across the U.S., and 10 more abroad. I'd say that about 1,000 students have been through the program since 2008. We're getting young scientists involved in research that could have a profound effect on the world they're going to inherit. They're helping us look for light absorbers and catalysts to turn water into hydrogen fuel, using nothing but sunlight. The solar materials need to be sturdy metal oxides that are abundant and dirt cheap. But there are many metals in the periodic table. When you start combining them in twos and threes in varying amounts, there are literally millions of possibilities to be tested. We already have found several very good water oxidation and reduction catalysts, and since the National Science Foundation has just renewed our CCI Solar Fuels grant, we expect to make great progress in the coming years in understanding how they work.

Let's shift gears and talk about the Beckman Institute. How did you first meet Arnold Beckman [PhD '28, inventor of the pH meter, founder of Beckman Instruments, and a Life Trustee of Caltech]?

I gave a talk back in 1967, probably on Alumni Day. Arnold was the chair of Caltech's Board of Trustees at the time, and he and his wife, Mabel, were seated in the second row. When the talk was over, they came down and introduced themselves. Mabel said—and I remember this very well—she said, "Arnold, I didn't understand much of what this young man said, but I really liked the way he said it." Arnold gave me the thumbs up, and that started our relationship.

When I became chairman of the Division of Chemistry and Chemical Engineering in 1978, I asked him to be on my advisory committee. I didn't ask him for money, but I asked him for advice, and we became quite close. He said he wanted to do something for us. That led to his gift for the Arnold and Mabel Beckman Laboratory of Chemical Synthesis, as well as a gift for instrumentation.

He liked it that we raised money to match his instrument gift. He told me that he wanted to do something bigger, so we started thinking about building the Beckman Institute. [Caltech President] Murph Goldberger and I would go down to Orange County about every week with a new plan. He rejected the first four or five until we came up with the idea of developing technology to support chemistry and biology—methods and instruments for fundamental research—and creating resource centers to house them.

Once we agreed on what the building should house, we started planning the building itself. But when we showed Arnold our design, which was four stories plus a basement, he said, "That's not big enough. You need another floor for growth." So we added a subbasement that was quickly occupied by a resource center for magnetic-resonance imaging and optical imaging that has been heavily used by biologists, chemists, and other investigators.

The Beckman Institute has done a lot over the last 25 years. But it develops technology for general research use, so it doesn't often make the headlines itself. Are you OK with that?

Many advances in science and technology have been made in the Beckman Institute over the last 25 years. The methods and instruments that have been developed in BI resource centers have made enormous impacts at the frontiers of chemistry and biology. Solar-fuels science and human biology are just two examples of areas where work in the Beckman Institute has made a big difference. And there are many more. Am I proud? You bet I am!

The Interface of Earth and Atmosphere: An Interview with Christian Frankenberg

$
0
0
News Writer: 
Lori Dajose
Christian Frankenberg, Associate Professor of Environmental Science and Engineering (Caltech); Jet Propulsion Laboratory Research Scientist (JPL)
Credit: Dan Goods/JPL-Caltech

Plants are an important mediator between the earth and the atmosphere. In order to grow, they breathe in carbon dioxide—one of the major greenhouse gases. Caltech associate professor Christian Frankenberg is interested in this relationship and how the biosphere reacts to climate change.

A native of Germany, Frankenberg earned a Diploma degree at the University of Bayreuth and a PhD at Ruprecht-Karls-University in Heidelberg. He spent the past five years as a research scientist at JPL and joined the Caltech faculty this fall. We recently spoke with Frankenberg about remote sensing, the biosphere, and life in Pasadena.

What do you do?

I use remote sensing tools—based on spectrometers in space and the air—to gain a deeper understanding of the carbon cycle. This means making measurements of atmospheric greenhouse gases like carbon dioxide and methane as well as measuring plant activity by sensing solar-induced chlorophyll fluorescence from space. Chlorophyll fluorescence happens when plants take in sunlight. A tiny fraction of that sunlight gets emitted at a slightly longer wavelength. We can see this glow from space, and it is a good proxy of the photosynthetic uptake of CO2 by plants.

One of my goals is to combine the atmospheric measurements and the fluorescence measurements to gain a deeper understanding of when, where, and why the carbon cycle changes. I work with the Orbiting Carbon Observatory 2 (OCO-2) at JPL, and also with a Japanese project called the Greenhouse Gases Observing Satellite (GOSAT).

Why is it important to understand the carbon cycle?

Many people are familiar with the famous Keeling Curve—a ground-based measurement of atmospheric carbon dioxide that has been ongoing since 1958. This curve shows a continual increase in CO2 abundances from year to year, but it also shows a strong seasonal cycle—abundances go up in winter and down in summer. This is because in the Northern Hemisphere summer, plants are growing and removing CO2 from the atmosphere; in winter, plants are releasing CO2.

If we count all the barrels of oil and everything else that we burn to release CO2, only about one-half of it remains in the atmosphere. One-fourth goes into the oceans, and the rest is taken up by vegetation. The biosphere is doing us a big favor in taking up a lot of what we're emitting, but we don't know exactly where on Earth that vegetation is absorbing the most or whether it will persist in the future.

What can we do to improve our relationship with the biosphere?

There's always talk about reducing CO2 emissions, which is great, but often actions are penny-wise and pound-foolish. I think energy efficiency is a big factor in improving our relationship with the biosphere. This means probably not having single-pane windows, and it definitely means not running the air conditioning and the heater at the same time, which I've seen (too often)! I do see a great opportunity for clean solar power in California—there's so much sun!

How did you get interested in biogeosciences?

At school I liked natural sciences, like math and chemistry, but I didn't want to focus on just one of them. During my undergraduate education, I studied geoecology, which gives a broad background of all the natural sciences. But I found out pretty quickly that I liked the more quantitative stuff, so I focused on the physics, math, and chemistry aspects, and did my PhD in environmental physics. That's where I started working on remote sensing. I really liked it; the combination of working with the biosphere but also doing more technical work suited me. Now it seems I'm making a full turn again with my plant-based research. It's like going back to my geoecology roots.

What brought you to Caltech?

I've always been interested in Caltech, but after a postdoc in the Netherlands, I got a job offer from JPL five and a half years ago. I knew that in the long term, I wanted to be in academia, doing more basic research and having academic freedom.

How does your job as a professor differ from your previous appointment as a research scientist?

I still retain the title of research scientist at JPL, and I spend one day a week there. For me, it's an ideal situation to be at Caltech but still have the relationship with JPL, where so many things are happening in my field.

But now that I am on the Caltech faculty, I'll be expanding from pure remote sensing to ground-based and laboratory measurements of fluorescence and carbon exchange. We are studying the parts of plants that are most relevant to the global carbon cycle, connecting the leaf scale to the global scale. Additionally, I will start teaching courses in the next academic year, which will probably be the biggest change.

What do you like about being in Southern California?

I like the mountains a lot. Pasadena is a nice combination of having a small-town feeling next to the foothills but also having a big city nearby if you want it. It's a sweet spot. What I miss most from Europe is the ability to just bike everywhere you need to go. There is no way to get around without a car here in the L.A. area.

What do you do outside of work?

I try to let the weekend be a weekend and not let it be too busy. I like getting outdoors, hiking and running, playing some soccer or squash. And, of course, spending time with my family and son is also a full-time sort of job.

Microscopic Materials: An Interview with Marco Bernardi

News Writer: Lori Dajose
Credit: Lance Hayashida/Caltech

All materials, including the screen on which you are currently reading this text, are composed of a tiny universe of particles. These particles are not only the physical ones, like electrons and atomic nuclei, but also excited states (or so-called quasiparticles) that constantly collide and bounce, gaining and losing energy. Marco Bernardi, a newly appointed assistant professor of applied physics and materials science in the Division of Engineering and Applied Science, is fascinated by these interactions and how they give rise to the world around us. We spoke with Bernardi about his research on energy in materials and also about his new life in the California sun.

You study materials on a tiny, atomic scale. What does that look like?

At the microscopic scale, all materials are made up of numerous interacting particles, trading energy with one another through various collisions that we call scattering processes. For example, if you excite a material with light, electrons inside will undergo scattering processes to release this excess energy. They can emit light as a photon, emit vibrations as a phonon, or trade energy with other electrons. Surprisingly, these processes occur even in the dark, as all materials maintain an equilibrium with the environment by exchanging energy. Materials hide an intangible universe.

What are you trying to discover about these excited states?

We study the collision processes between these excited states, both to understand the fundamental science and because they are essential for applications. These processes take place on a femtosecond timescale—a femtosecond is a millionth of a billionth of a second—so they are very challenging to study experimentally. One thing we examine is how long it takes for a material to go back to its normal state of equilibrium after it has been excited. For example: if you excite a piece of gallium arsenide by shining light on it, then it will reemit light as it returns back to its equilibrium state. This emission fades out in time. We want to characterize the timescale for this emission decay, which is called the photoluminescence lifetime. Other examples are the scattering of an electron by a crystalline defect in a material, or the time for the spin of an electron to reorient. If we can understand the timescale for the interactions among electrons, phonons, light, defects, spin, and other excited states, we can predict how materials transport electricity and heat, emit and absorb light, and convert energy into different forms. Applications in electronics, optoelectronics, ultrafast science, and renewable energy abound.

In some cases, the questions we ask have already been answered experimentally: it has been determined that an excited electron in silicon loses energy on a femtosecond timescale, and the conductivity of a simple material like gold has been measured. But my group aims to look at materials theoretically. We use the atomic structure of materials—the way their atoms are positioned—as the only input, and solve the equations of quantum mechanics in a computer, without any information from experiment. With this approach, we can understand microscopic details that are out of reach for experiments and investigate materials that have not yet been fabricated, in addition to reproducing known experimental results. Some problems, like calculating the conductivity of gold, may sound trivial—but nobody has ever computed the correct conductivity of gold without any information from experiment, and in particular without knowledge of the timescale for electron scattering. There are also a lot of new experiments studying materials at extremely short timescales, some of them requiring multimillion-dollar lasers, but few theories that can explain them. We are working on computational tools that can understand and microscopically interpret both traditional and less traditional experimental scenarios.
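
As a rough illustration of why the scattering timescale controls a measurable quantity like conductivity, consider the textbook Drude picture, in which conductivity = n e^2 tau / m. This is a deliberate simplification, not the first-principles approach Bernardi describes, and the ~30-femtosecond scattering time below is an assumed illustrative value:

```python
# Drude estimate of gold's conductivity from an assumed scattering time.
E_CHARGE = 1.602e-19    # electron charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg
n_gold = 5.9e28         # conduction-electron density of gold, m^-3 (one per atom)
tau = 30e-15            # electron scattering time, s (assumed)

sigma = n_gold * E_CHARGE**2 * tau / M_ELECTRON
print(f"sigma ~ {sigma:.1e} S/m")  # ~5e7 S/m, near gold's measured ~4.5e7 S/m
```

The point of the sketch is that everything except tau is a tabulated constant; the hard physics, and the goal of the first-principles calculations described above, is predicting that scattering time itself.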

We have a fair understanding of most of the different types of scattering processes, and we have ambitious plans to combine all of these computational approaches for different microscopic phenomena into one big code that can calculate everything that's going on in an excited material. Employing massively parallel algorithms and running them on our computer cluster at Caltech or at national supercomputing facilities, this code would open new avenues for our research.

What are some applications of this work?

In any application where current or light is involved, scattering processes between electrons or other excited states control the behavior of the device. This includes solar panels, circuits, transistors, displays, and other electrically conductive materials. Understanding the timescale and the microscopic details of these scattering processes could allow one to create more efficient solar energy conversion devices, light emitters, and circuits, among other applications; or even "just" understand the behavior of matter at the shortest possible timescale.

What excites you most about being at Caltech?

I'm most excited about the emphasis on fundamental science here. People can be really tempted by "flashy" science or experiments on hot topics. But to compute what I'm trying to look at, we have to first build our understanding on simple experiments and materials—boring things—before we are able to tackle materials at the frontier of condensed matter research. Nobody really wants to do basic measurements on pieces of silicon or gold. But fundamentally we don't know how to compute in detail basic excitations in materials. Caltech supports this kind of science. And no matter what you're working on, you can talk to somebody who will give you some unique perspective or insight.

What is your background? How did you get interested in this field?

I grew up in Italy. In high school, I really enjoyed math and physics. I read an article about carbon nanotubes and nanotechnology, and during my undergraduate education in Italy I became very interested in the physics of materials. Carbon nanotubes can be either metallic or semiconducting—materials in which you can control how much current flows—so you should be able to create all kinds of device components just out of these tubes. During my PhD work at MIT, my advisor and I predicted that it would be possible to create a solar cell entirely out of carbon nanotubes. We worked together with a colleague who synthesized the device from our design, and now we have the world record for making a solar cell entirely out of carbon. In the two years that I spent at Berkeley, I discovered that I wanted to understand the dynamics of how particles and excited states exchange energy in materials. Now I'm deeply settled and focused on this problem.

What's your favorite thing about being in Southern California?

It's always sunny. I love swimming, and the outdoor swimming pools are great—at MIT, it was generally too cold to go swim outside. So I'm taking full advantage of how warm it is here.

What do you like to do outside of work?

I love traveling and planning trips. I like to pick a country, go online and find all kinds of options for routes and itineraries, and go explore for two weeks. If I weren't doing science I would probably be traveling and learning languages and talking to people. I've been to Japan and Chile recently, and I would really like to see more of Asia but it's getting more complicated—my wife and I are expecting a baby! My maps tell me that I currently have seen 20 percent of the whole world—on land, that is. My ambition is to get to 80 percent one day.


Hunting for Ephemeral Cosmic Flashes: A Conversation with Mansi Kasliwal

News Writer: Kimm Fesenmaier
Mansi Kasliwal, assistant professor of astronomy, holding one of the 16 large-format detectors that will be part of the Zwicky Transient Facility camera mosaic.
Credit: Lance Hayashida/Caltech

Mansi Kasliwal (PhD '11), a new assistant professor of astronomy, searches the night sky for astrophysical transients—flashes of light that appear when stars become a million to a billion times as bright as our sun and then quickly fade away. She and her colleagues have developed robotic surveys to help detect these transient events, and she has built a global network of collaborators and telescopes aimed at capturing details of the flashes at all wavelengths.

Kasliwal grew up in Indore, India, and came to the United States to study at the age of 15. She earned her BS at Cornell University and then came to Caltech to complete her doctoral work in astronomy. She completed a postdoctoral fellowship at the Carnegie Observatories in Pasadena before returning to Caltech as a faculty member in September.

We sat down with Kasliwal to discuss her passion for discovering and studying these cosmic transients as well as her recent efforts to follow up on LIGO's detections of gravitational waves.

What do you actually see when you discover a cosmic transient?

You see a flash of light on the screen and then you see it disappear—often it's gone in a few days or even a few hours. In that short time, you try to get at the chemistry of the event. You try to take the light from the flash and disperse it; once you get a spectrum, you can tell what sorts of elements it's actually made of.

So, I search for these cosmic transients, try to understand what they are all about, and look for new types of them.

What types of events produce these flashes?

The most common varieties of these flashes are novae and supernovae. Novae are produced by nuclear explosions on stellar remnants called white dwarfs, while core-collapse supernovae are related to the deaths of massive stars. Novae are about a million times the brightness of the sun, and supernovae are a billion times as bright. For a long time we didn't know of anything in between, but today we know of many classes of transients with luminosities between novae and supernovae that involve mergers between compact objects. That's where a lot of the most interesting stellar physics happens—when something like a white dwarf smashes into a neutron star or a neutron star smashes into a black hole.

These are extreme events, and it turns out that a lot of the chemical elements that we see around us are synthesized in these explosions. For example, when I was doing my PhD thesis here, I found a rare class of events that generates about half of the calcium in the universe. For decades, people had wondered where all the calcium was made because there was much more of it around than supernovae alone could synthesize. We found this group of very rare explosions. We call them calcium-rich gap transients because they appear to be the mines in the universe where calcium is made.

Are they exploding stars?

We actually don't know. Our best guess is that they are some sort of white dwarf–neutron star merger. We now have a sample of about eight of these events and have been able to quantify the calcium made in each, showing that even though these events are rare, each one produces so much calcium that, as a class, they can account for the missing calcium.

On what does your research currently focus?

Right now I'm looking for the cosmic mines of the heavy elements. If you look at the periodic table, about half of the elements heavier than iron—things like gold, platinum, and uranium—are produced by something called r-process nucleosynthesis. We know these elements are produced by this process, but astronomers still don't know where this process takes place. We've never seen it in action. None of the explosions that we've found so far has been extreme enough to actually synthesize enough heavy elements.

What types of events might produce these heavy elements?

Theoretically, we expect that the most extreme events involve a neutron star merging with a black hole or with another neutron star—because neutron stars and black holes are much denser than white dwarfs, for example. But these explosions are extremely rare. They happen maybe once per 10,000 years per galaxy. By comparison, novae are easy to find because there are about 20 of those per year per galaxy. Supernovae are harder, but they still happen about once per century per galaxy.
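To put those rates in perspective, here is a quick back-of-envelope script using only the rates quoted above (the numbers are rough, order-of-magnitude figures):

```python
# Rough event rates per galaxy per year, as quoted above.
rates = {
    "novae": 20.0,
    "supernovae": 1.0 / 100.0,
    "neutron-star mergers": 1.0 / 10_000.0,
}

for event, rate in rates.items():
    # To expect one event per year, a survey must monitor 1/rate galaxies.
    print(f"{event}: about {1.0 / rate:g} galaxy-years of monitoring per event")
```

A survey hoping to catch even one merger per year therefore has to monitor on the order of ten thousand galaxies at once, which is exactly what wide-field robotic surveys are built to do.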

To look for these rarer and more exotic events, you need the next generation of surveys and telescopes. Your response needs to be quick. The flash of light happens very rarely, it lasts for a very short time, and it's dim. So you have the worst of all worlds when you're trying to find them.  

To overcome this challenge, we are working in close collaboration with Advanced LIGO (the Laser Interferometer Gravitational-Wave Observatory with enhanced detectors to try to detect ripples in the fabric of space-time caused by such extreme events). The idea is that Advanced LIGO will "hear" the gravitational sound waves, and our surveys at Palomar Observatory, currently the intermediate Palomar Transient Factory (PTF), and eventually the Zwicky Transient Facility (ZTF), will see the light from the binary neutron-star merger.

Were you involved in efforts to follow up on Advanced LIGO's recent detection of gravitational waves? What did you see?

I am leading the Caltech effort to look for electromagnetic counterparts to gravitational waves. PTF responded automatically and promptly to the gravitational wave alerts from LIGO and imaged hundreds of square degrees of the localization that was accessible from Palomar Observatory. Within minutes, we reduced our data, and within hours we orchestrated a global follow-up campaign for our most promising candidates—the brightest flashes that could have possibly correlated with the LIGO detection. We obtained spectroscopic follow-up of our candidates from the Keck and Gemini observatories, radio follow-up from the Very Large Array, and X-ray follow-up from the Swift satellite. None of our candidates was related to the gravitational wave trigger, which is what you would expect for a merger of two black holes.

Finding the electromagnetic counterpart to a merger between two neutron stars or a neutron star and a black hole could identify the cosmic mines of heavy elements such as gold and platinum. It is very exciting that this much-awaited gold rush has actually begun!

Can you talk more about PTF and ZTF and also address how you first became interested in this field?

When I came to grad school and took my first course, the known explosions were of two types: novae and supernovae. I thought, "Nature's more creative than that."

For my PhD thesis, we came up with a plan-A, a plan-B, and a plan-C for how to search for events in between, and the Palomar Transient Factory (PTF) was plan-D.

For PTF, we roboticized a couple of telescopes at Palomar Observatory and imaged huge swaths of sky over and over again, looking for things that changed. Because we were imaging such a large area at such a rapid rate, we actually began finding these very rare flashes of light.

Now we're working hard on ZTF, which should come online in 2017. It is an order of magnitude more sensitive than PTF and hence poised to uncover rarer events. Instead of a seven-square-degree camera, we have a 47-square-degree camera; instead of taking 40 seconds to read out the camera, it will take less than 15 seconds.
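Those two upgrades roughly multiply into the overall gain in survey speed. A minimal sketch of the arithmetic, assuming a 40-second exposure per pointing (an illustrative figure, not a quoted spec):

```python
def areal_rate(fov_sq_deg, exposure_s, readout_s):
    """Sky area covered per hour: field of view divided by time per pointing."""
    return fov_sq_deg * 3600.0 / (exposure_s + readout_s)

exposure = 40.0                          # assumed exposure length, seconds
ptf = areal_rate(7.0, exposure, 40.0)    # 7 sq deg camera, 40 s readout
ztf = areal_rate(47.0, exposure, 15.0)   # 47 sq deg camera, 15 s readout
print(f"PTF: {ptf:.0f} sq deg/hour; ZTF: {ztf:.0f} sq deg/hour "
      f"(about {ztf / ptf:.0f}x faster)")
```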

Are you working on other projects?

I am leading a couple of projects. The first is a project called SPIRITS, which stands for the SPitzer InfraRed Intensive Transients Survey. We are looking for infrared transients. Although there are many surveys at optical wavelengths, the infrared is completely pristine. It's like going off fishing in new waters.

We use the Spitzer Space Telescope and a bunch of ground-based observatories to take images in the infrared of 242 nearby galaxies on different timescales. Most of what I'm finding so far seems to be mergers of individual massive stars and the births of binaries, which you can only see in the infrared because they form in a red cloud of gas and dust.

This is a Spitzer Exploration Science Program, which means we've been granted more than 1,300 hours of time on the space telescope to do this in a very big way over three years.

And the other project?

I'm also the principal investigator on the GROWTH (Global Relay of Observatories Watching Transients Happen) project that was recently funded by the National Science Foundation's Partnerships in International Research and Education program. Our network includes six U.S. universities and partners in six other countries, spanning the globe so that we can keep observing transients before they fade away, beating sunrise.

In building this network, we went both for telescopes that would make sense for the network and for people who would enjoy this type of science. It is not for everyone. It's nerve-wracking—you're doing work at 2 a.m., 3 a.m., and if you drop the ball, it's a pretty big deal. We tried to pick coinvestigators who are sufficiently excited about the science so that when they wake up, they aren't in a grumpy mood.

Outside of your research, are you passionate about any other activities?

There is a wonderful organization called Asha, which means hope, which runs schools for underprivileged children in India. To me education is the solution to many of the problems in India. I've been helping Asha with fundraising and setting up these schools. When I go back home, I try to visit the Asha schools. When you meet the children and see that they are actually getting an education and have dreams … it feels good. It's small, but it matters. 

New Director of the Linde Institute: A Conversation with Jaksa Cvitanic

News Writer: 
Kimm Fesenmaier
Jaksa Cvitanic
Credit: V. Urfalian

Jaksa Cvitanic, the Richard N. Merkin Professor of Mathematical Finance at Caltech, has been appointed director of The Ronald and Maxine Linde Institute of Economic and Management Sciences. An expert in financial economics, Cvitanic has interests spanning the Linde Institute's three core research areas: finance, entrepreneurship, and the interaction between computer science and economics. His goal in the role is to help the Institute continue to create an environment for interdisciplinary and creative research and education in business and economics.

Cvitanic grew up in Croatia and earned his BS and MS, both in mathematics, from the University of Zagreb. He came to the United States in 1988 and completed his MPhil and PhD in statistics at Columbia University. Later, he held faculty positions at Columbia University and at the University of Southern California (USC) before coming to Caltech in 2005. He was named the Merkin Professor in 2013.

We recently sat down with Cvitanic to talk about his work and his new position with the Linde Institute.

What would you say that you bring to the Linde Institute as its new director?

My primary field of research is finance and financial economics, so I bring experience in that regard. If you consider the Linde Institute's main topics of interest, I have done research in two of them—finance and the intersection between computer science and economics. And while I don't actually conduct research in entrepreneurship, I am involved in a start-up company that gives advice on how to allocate your money across different asset classes, so I am certainly interested in entrepreneurship from that point of view.

Also regarding the educational goals of the Linde Institute—to give students the opportunity to learn the skills they would need for careers in business and economics—I have been the adviser for the business, economics, and management (BEM) major at Caltech for 10 years. Many BEM students go on to work in finance, so I have a good perspective on the undergraduates' interest in these fields.

Can you tell us about the work you have been doing at the intersection between computer science and economics?

I have been looking at the question of how to design surveys and survey questionnaires to incentivize respondents to provide careful and truthful responses. In economics, people think of this as mechanism design, but it's also related to what computer science researchers call proper scoring rules, which involve assigning scores to survey respondents computed from their responses and rewarding them based on those scores.

Can you give us an example of how this works?

We are now working with a company that conducts market research surveys. They find that when respondents are asked to rate a retailer on a scale from 1 to 10 (with 10 being the best), most people naturally choose 10—just kind of being nice. Well, that doesn't provide much information.

We suggest asking respondents a second question that is related to the opinion of others—basically, "What do you think other people would choose?" There are algorithms that allow us to combine these two responses—what is your rating and what do you think other people think—in such a way that a respondent's score will be higher if he or she responds truthfully. If you reward them based on this score, they have an incentive to be more truthful in their responses.
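A minimal sketch of how such a two-question survey can be scored, loosely in the spirit of peer-prediction mechanisms. The quadratic scoring rule and the "surprisingly common" bonus below are illustrative modeling choices, not the specific algorithm described here:

```python
import numpy as np

def peer_prediction_scores(ratings, predictions, n_options=10):
    """Score each respondent from (a) their own rating and (b) their predicted
    distribution of everyone else's ratings. Careful, truthful reports tend to
    earn higher scores."""
    n = len(ratings)
    scores = np.zeros(n)
    for i in range(n):
        others = np.delete(ratings, i)
        # Empirical distribution of the other respondents' ratings.
        freq = np.bincount(others, minlength=n_options) / len(others)
        # Quadratic (Brier-style) proper score for the prediction question:
        # in expectation it is maximized by reporting your true belief.
        pred_score = 2 * predictions[i] @ freq - predictions[i] @ predictions[i]
        # Bonus for a rating that is "surprisingly common" relative to what
        # the average respondent predicted (Bayesian-truth-serum flavor).
        avg_pred = np.delete(predictions, i, axis=0).mean(axis=0)
        info_score = (np.log(freq[ratings[i]] + 1e-9)
                      - np.log(avg_pred[ratings[i]] + 1e-9))
        scores[i] = pred_score + info_score
    return scores

# Three respondents rating on a 1-10 scale (stored as 0-9):
ratings = np.array([9, 9, 7])
predictions = np.full((3, 10), 0.1)   # everyone predicts a uniform spread
print(peer_prediction_scores(ratings, predictions).round(2))
```

Because the prediction score is a proper scoring rule, a respondent maximizes it in expectation by reporting honest beliefs, so paying by score rewards care and truthfulness.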

How do you reward someone based on truthfulness?

Traditionally in these surveys, you pay all of the participants the same amount. But we tell participants from the outset that they will be paid according to their score and that their score is likely to be higher if they respond carefully and truthfully. This gives them an incentive to essentially compete against one another by being more careful in their responses.

What projects in finance are you currently working on?

One project has to do with optimal compensation of managers, whether they are portfolio managers or high-level executives. The existing theory assumes that a manager affects only the mean return of a project he is in charge of—not the risk. And in many cases, that's simply not true. For example, a CEO might decide to finance her company by issuing a certain amount of debt versus stocks or to hedge risks by using financial instruments. A portfolio manager or high-level executive whose actions result in lower risk and higher returns should be compensated differently than one whose actions result in higher risk and lower returns. We developed a new method for determining how best to compensate these managers.

In related but separate work, I am working with Lawrence Jin, a new faculty member in social sciences. We are looking at these problems in the case in which the manager or shareholders might have some behavioral preferences.

What types of preferences are those?

There is a whole field of behavioral finance and economics that studies how people make decisions. The classical, pre-behavioral theory assumed that everyone was just super rational. Behavioral economics and finance have shown, mostly through experiments, that people don't behave that way. There are many ways in which people depart from rational actions. So we're saying that you have to model the preferences of the manager to know the best way to provide incentives and compensate him—whether it's better to give him stocks in the company or cash, or a bonus or options. It really depends on how he values those things, and his preferences may be behavioral.

We are curious how you transitioned from originally studying mathematics and statistics to becoming an expert in finance?

When I came to the U.S. from the former Yugoslavia, which was then a communist country, I didn't know what a stock or a bond was, let alone financial derivatives. But my PhD adviser was one of the fathers of financial mathematics, and he asked if I would mind working on a problem in finance. Today, finance is popular in applied mathematics, but at that time it wasn't considered real mathematics. I was fascinated by the subject and was happy to work on a mathematical problem that had finance applications.

At that time, not many people knew the techniques and the models that were being used for this type of finance, so I started receiving articles from top finance journals to referee. I kind of learned finance by refereeing papers for top journals, which is not a bad way to learn things.

Then, when I came to USC, I was in the math and economics departments, but I found out that a friend of mine from Columbia was teaching in the business school at USC. We started working together, and I learned a lot working with him. So it was partly my PhD thesis, partly refereeing people's papers, and partly doing joint research with a coauthor from a finance department.

What is your vision for the Linde Institute going forward?

One thing that I think has been very successful, and that I would like to see us expand, is the career panels. We bring Caltech alumni back to talk to students about their experiences working in a particular field. We've done this for finance, and we're thinking about doing the same for the consulting industry and entrepreneurship. It works very well. The students are interested in seeing what types of careers former Caltech students have, and the alumni enjoy interacting with the students. We plan to host more of these events.

The Linde Institute has also been sponsoring graduate students and postdocs, as well as workshops in the fields of interest, and will continue doing that. Moreover, we have started the Summer Undergraduate Startup Internship (SUSI) program for Caltech undergraduate students under the supervision of Associate Professor of Finance and Entrepreneurship Michael Ewens. The program places Caltech students in high-quality startup companies for 10-week paid internships during which they gain exposure to the unique start-up environment and have the opportunity to develop their skills in a real-world business setting.

In the future, we would also like to help fund particularly original and creative faculty projects for which it would otherwise be hard to find funds. These would be projects in the social sciences crossed with engineering and computer science. Interdisciplinarity would definitely be an important component.

To summarize, it's about creating an environment in which interdisciplinary, original research involving the social sciences and quantitative fields can thrive. It boils down to two things, which are education and research. There are not many places like Caltech where students can learn from experts from so many different fields in a natural way. They don't have to put in a lot of effort to be interdisciplinary. It's just already there. The Linde Institute is emphasizing that even more and providing infrastructure for it.

Living—and Giving—the Caltech Dream

News Writer: 
Alyce Nicolo
Credit: Lance Hayashida/Caltech

Growing up in Tehran, Iran, Mory Gharib (PhD '83) attended large, crowded schools. He was the kid who always raised his hand in class and asked tough questions. He craved one-on-one time with his teachers, which seldom came to pass.

So when the young Gharib read a newspaper article about a school in California with a three-to-one student-faculty ratio, it seemed almost unimaginable. Over the years, though, that school—Caltech—remained in his thoughts.

Years later, Gharib finally made it to Caltech as a graduate student. Since that time, he has built a distinguished career as a researcher, mentor, inventor, entrepreneur, leader, and benefactor. And he has continued to search for the answers to tough questions.

"I couldn't have done this anywhere else," he says, referring to his career. "Caltech took care of me, and I have to take care of it."

In appreciation for the opportunities Caltech afforded him, Gharib—who currently serves as the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering, director of Caltech's Graduate Aerospace Laboratories, and vice provost—has created an endowed fellowship fund to support new generations of Caltech graduate students.

Read the full story on the Caltech Giving website.

Why We Do What We Do: A Conversation with Omer Tamuz

News Writer: 
Lori Dajose
Omer Tamuz
Credit: Caltech

Omer Tamuz, a newly arrived assistant professor of economics and mathematics, studies how people make decisions based on what they know and what they don't know, and how they exchange this information with one another. We sat down with him to talk about mathematical models of behavior, and life as a new member of the Caltech faculty.

Tell us about your research.

I study how people exchange information and learn from each other, in a theoretical sense. We make assumptions about how people behave, and we try to model this behavior with math. For example, we generally assume that people will behave rationally—which means they will always make the optimal choice given the information they have. While this assumption is not always true in reality, it is the framework I work in. There is this huge, rich theory you can build, with unending depth and interesting turns and twists and beautiful math, and very non-trivial things going on that you can learn… and maybe sometimes this assumption is not so outrageous and can give us insights about the real world.

I study these things in a very abstract way. There's a thing that you either know or don't know—let's call that 0 or 1—and a thing that you can do or not do. Let's also call that 0 or 1. You want to match the thing you do to the thing you know.

What I want to know is: what information are people getting, and how does this match up with the actions that they take? A general theory of this behavior should be applicable to many different scenarios.

Can you give us an example of this in our daily lives?

Here's one example I like. It is due to Robert Aumann, a professor at the Hebrew University of Jerusalem who won the 2005 Nobel Memorial Prize in Economic Sciences. Imagine that you and I are discussing who's going to be the Democratic nominee for president. We both have watched some TV, read some surveys, and seen our friends' opinions on our Facebook feeds. Many people I know are supporting Bernie Sanders, and many people you know are supporting Hillary Clinton. So then I ask you: What do you think is the probability that Hillary gets the nomination? And you say, 0.9, or 90 percent. And I think—wow, but my feed is entirely Bernie, and if you had asked me first, I would have said 30 percent. But now that you've said 90, well… maybe you have some information that I don't know. So I'll revise my probability guess to be 80 percent. And then you follow that same thought process and you revise your opinion. And so forth.

The economic theory says that, in the end, we must agree. If we are two rational people who are making the correct calculations and inferences, there is no scenario where we agree to disagree. It's a very short proof. And it's one of the reasons why Aumann won the Nobel Prize.
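That convergence can be simulated directly. Below is a small sketch of the classic dialogue process due to Geanakoplos and Polemarchakis, on a toy ten-state world with a uniform common prior; the partitions and the event are illustrative assumptions:

```python
from fractions import Fraction

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(partition, event, w):
    c = cell(partition, w)
    return Fraction(len(c & event), len(c))

def refine(partition, announce):
    """Split each information cell by the announced value: hearing the
    announcement rules out states that would have produced a different one."""
    new = []
    for c in partition:
        groups = {}
        for w in c:
            groups.setdefault(announce[w], set()).add(w)
        new.extend(groups.values())
    return new

def dialogue(event, P1, P2, omega, rounds=10):
    event = set(event)
    Q1, Q2 = [set(c) for c in P1], [set(c) for c in P2]
    states = set().union(*Q1)
    for t in range(rounds):
        a1 = {w: posterior(Q1, event, w) for w in states}   # agent 1 speaks
        Q2 = refine(Q2, a1)                                 # agent 2 updates
        a2 = {w: posterior(Q2, event, w) for w in states}   # agent 2 speaks
        Q1 = refine(Q1, a2)                                 # agent 1 updates
        print(f"round {t + 1}: agent 1 says {a1[omega]}, agent 2 says {a2[omega]}")
        if a1[omega] == a2[omega]:
            break

# Ten equally likely states; the event of interest is {0, 1, 2, 3}.
# Agent 1 knows which half the state is in; agent 2 knows its parity.
dialogue(event={0, 1, 2, 3},
         P1=[{0, 1, 2, 3, 4}, {5, 6, 7, 8, 9}],
         P2=[{0, 2, 4, 6, 8}, {1, 3, 5, 7, 9}],
         omega=0)
```

Running it, agent 1 opens with 4/5, agent 2 replies with 2/3, and by the second round both announce 2/3: they cannot agree to disagree.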

You also have a joint appointment in mathematics. Tell us about your research in that field.

My math research is pretty unrelated to my economics work. I study groups—the sets of symmetries of an object. If I have a cube, I can rotate it in many different ways and still get an object that looks exactly the same. All of these rotations—or operations—that I can do to the cube together form a group. In general, if you have a symmetric object and you can do something to it that leaves it the same, then those symmetry operations form a group.
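A concrete miniature, assuming nothing beyond standard Python: the four rotations of a square, written as permutations of its corners, compose back into the same set, which is the closure property that makes them a group.

```python
from itertools import product

# The four rotations of a square, as permutations of its corners 0-3:
# rotating by 90 degrees sends corner i to corner (i + 1) % 4.
rotations = [tuple((i + k) % 4 for i in range(4)) for k in range(4)]

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

# Closure: composing any two rotations gives another rotation in the set.
# (The identity and inverses are in there too, so this is a group.)
assert all(compose(p, q) in rotations for p, q in product(rotations, repeat=2))
```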

It turns out that there are interesting connections between groups and dynamical systems. Dynamical systems are things that evolve and change over time—a machine and its moving gears, or an ecological system with a changing number of animals, or maybe a physical system of billiard balls that are bouncing around. You can ask a lot of abstract questions about how dynamical systems behave, and there are many connections to groups.

Tell us a little about yourself.

I was born in Israel, and raised part of the time in Israel, Germany, and Austria. I went to an international high school in Vienna, which was a great experience. I did my undergraduate education in Israel in physics and computer science. As an undergrad, my research was in astronomy, looking for exoplanets—planets outside of our solar system. After undergrad, I felt like I needed a break, so I went to work as a software developer for a few years. Then I went to grad school and got a master's and a PhD in math at the Weizmann Institute of Science—also in Israel. After that, I was a postdoc in the MIT math department and at the Microsoft Research lab in Cambridge, MA. And then I came here to Caltech.

What led you to make the transition from astronomy to economics?

My work in astronomy involved a lot of analyzing data, trying to find very faint signals in a lot of background noise. We were trying to come up with new statistical methods to get rid of that noise. I found that I really liked to do that, but I didn't care so much about studying the stars themselves. Later, as I was studying probability in graduate school, the things I was looking at overlapped a lot with economics. In mathematical terms, many economics problems are really questions in probability.

What do you like to do in your free time?

I have two kids so that's where a lot of my free time goes. I like running, and on the weekends we drive up to the mountains and go hiking.

What is it like to be in Southern California?

What I like about Caltech is that it has such a great spirit and culture. Everything is geared to help you do your research. It's exactly the kind of place you want to be in. But I'm still getting used to saying I live in LA… I'm still trying to develop the LA accent. 

The Power of Entanglement: A Conversation with Fernando Brandão

News Writer: 
Lori Dajose
Fernando Brandão
Credit: Courtesy of F. Brandão

Computers are a ubiquitous part of modern technology, utilized in smartphones, cars, kitchen appliances, and more. But there are limits to their power. New faculty member Fernando Brandão, the Bren Professor of Theoretical Physics, studies how quantum computers may someday revolutionize computing and change the world's cryptographic systems.

What do you do?

My research is in quantum information science, a field that seeks to merge two of the biggest discoveries of the last century: quantum mechanics and computer science. In particular, I am interested in studying quantum entanglement. Entanglement is a special kind of correlation found only in quantum mechanics. We are all familiar with the concept of correlations. For example, the weather in Southern California is pretty well-correlated from one day to the next—if it is sunny today, it will likely be sunny tomorrow. Quantum systems can be correlated in an even stronger way. Entanglement was first seen as a weird feature of quantum mechanics—Einstein famously referred to it as "spooky action at a distance." But with the advancement of quantum information science, entanglement is now seen as a physical resource that can be used in information processing, such as in quantum cryptography and quantum computing. One part of my research is to develop methods to characterize and quantify entanglement. Another is to find new applications of entanglement, both in quantum information science and in other areas of physics.
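A tiny numerical illustration of that stronger-than-classical correlation, using the textbook two-qubit Bell state (a generic example, not specific to this research):

```python
import numpy as np

# The Bell state (|00> + |11>)/sqrt(2), in the basis {00, 01, 10, 11}.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(bell) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(2))))
# Outcomes 00 and 11 each occur half the time; 01 and 10 never do, so the
# two qubits always agree. (The genuinely nonclassical character shows up
# when the two qubits are measured along different axes.)
```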

What is a quantum computer?

At the most basic level, computers are made up of millions of simple switches called transistors. Transistors have two states—on or off—which can be represented as the zeroes or ones that make up binary code. With a quantum computer, its basic building blocks (called qubits) can be either a one or a zero, or they can simultaneously exist as a one and a zero. This property is called the superposition principle and, together with entanglement and quantum interference, it is what allows quantum computers to, theoretically, solve certain problems much faster than normal, or "classical," computers could. It will take a long time until we actually have quantum computers, but we are already trying to figure out what they can do.
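As a minimal sketch of the superposition principle in code (standard textbook linear algebra, independent of any particular hardware):

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)          # the qubit state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # the Hadamard gate

state = H @ zero            # an equal superposition of |0> and |1>
print(np.abs(state) ** 2)   # measurement probabilities: [0.5 0.5]
```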

What is an example of a problem that only a quantum computer could solve efficiently?

It is a mathematical fact that any integer can be factored into a product of prime numbers. For example, 21 can be written as 3 x 7, which are both prime numbers. Factoring a number is pretty straightforward when it is a small number, but factoring a number with a thousand digits would actually take a classical computer billions and billions of years—more time than the age of the universe! However, in 1994 Peter Shor showed that quantum computers would be so powerful that they would be able to factor numbers very quickly. This is important because many current cryptographic systems—the algorithms that protect your credit card information when you make a purchase online, for example—are based on the difficulty of factoring large numbers, with the assumption that the codes cannot be cracked for billions of years. Quantum computing would change the way we do cryptography.
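A toy script makes the classical difficulty concrete. Trial division factors 21 instantly, but its cost grows roughly like the square root of the number, which is exponential in the number of digits; that scaling, not any single computation, is what puts thousand-digit numbers out of reach:

```python
def factor(n):
    """Naive trial division. Fine for small n, hopeless for huge n: the loop
    runs up to sqrt(n) times, which for a 1,000-digit number is ~10**500."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factor(21))   # [3, 7]
```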

What got you interested in quantum information?

During my undergraduate education, I was looking online for interesting things to read, and found some lecture notes about quantum computation which turned out to be by Caltech's John Preskill [Richard P. Feynman Professor of Theoretical Physics]. They are a beautiful set of lecture notes and they were really my first contact with quantum information and, in fact, with quantum mechanics. I have been working in quantum information science ever since. And now that I'm on the Caltech faculty, I have an office right down the hall from Preskill!

What is your background?

I am originally from Brazil. I did my bachelor's and master's degrees there in physics, and my PhD at Imperial College London. After that, I moved among London, Brazil, and Switzerland for various postdocs. Then I became a faculty member at University College London. Last year I was working with a research group at Microsoft, and now I am here at Caltech. The types of problems I have worked on have varied with time, but they are all within quantum information theory. It is stimulating to see how the field has progressed in the past 10 years since I started working on it.

What are you particularly excited about now that you are at Caltech?

I can't think of a better place than Caltech to do quantum information. There are many people working on it from different angles, for example, in the intersection of quantum information and condensed-matter physics, or high-energy physics. I am very excited that I get to collaborate with them.

What do you like to do in your free time?

I used to go traveling a lot, but six months ago my wife and I had a baby, so he is keeping us busy. Along with work and exercise, that basically takes up all my time.

Developing Realistic Models of Financial Markets: A Conversation with Lawrence Jin

News Writer: 
Kimm Fesenmaier
Lawrence Jin
Credit: Caltech

Lawrence Jin (MS '06) is a new assistant professor of finance at Caltech. Born in Beijing, China, Jin studied physics and mathematics at Tsinghua University, earning bachelor's degrees in both fields in three years. In 2005, he came to the United States and earned a master's degree in electrical engineering at Caltech. Then he tried something completely different, spending a few years working as a research and trading analyst on Wall Street. Ultimately, he opted to pursue academic finance and earned his doctorate in financial economics at Yale University.

Jin's arrival at Caltech marks an important step in building a finance faculty to support Caltech's Business Management option and to expand the research activities of the Ronald and Maxine Linde Institute of Economic and Management Sciences. In the spring term, Jin taught courses in Behavioral Finance (BEM 114) and Asset Pricing Theory (SS 215).

We recently sat down with Jin to discuss the types of problems he is interested in, how psychology and neuroscience can help inform financial models, and what brought him back to Caltech.

What is the focus of your research?

My main research area is behavioral finance, a very active field within the broader field of economics and finance. We try to develop psychologically plausible and realistic models to better understand financial markets.

When you say "behavioral" in this context, what do you mean?

Actual human behavior. For instance, people are not necessarily making fully rational decisions using all the available information they can possibly obtain; they cannot pay attention to every single thing they encounter. And they have some psychological biases when forming opinions of the financial market.

The key questions are: In what way are people systematically irrational? And how can we model irrationality and its interaction with other economic forces to better understand financial markets? When addressing these questions, we try to discipline ourselves by understanding people's behavior through the lens of psychology, behavioral sciences, biology, and neuroscience.

Can you give some examples of the types of problems you study?

One example is to understand stock market fluctuations and other asset pricing phenomena through the lens of psychological biases.

One type of psychological bias is something called sample-size neglect, the notion that many investors mistakenly think small samples can be just as representative as large samples. In other words, investors tend to draw a conclusion too quickly. For instance, if you see a sequence of good stock market returns, it may just be random. But if you think that you are actually seeing a trend, you might positively revise your expectations of future returns. And it turns out that you revised your expectations too quickly. This is an example that links sample-size neglect to a finance application.
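A quick simulation shows why trusting short streaks is a mistake. Suppose, as a deliberately stripped-down assumption, that each year's market direction is an independent coin flip; impressive-looking runs still appear all the time:

```python
import random

random.seed(0)

def has_streak(flips, k=4):
    """True if the sequence contains a run of k consecutive 'up' years."""
    run = 0
    for up in flips:
        run = run + 1 if up else 0
        if run >= k:
            return True
    return False

trials = 100_000
hits = sum(has_streak([random.random() < 0.5 for _ in range(10)])
           for _ in range(trials))
print(f"P(4 straight up-years within a decade) ~ {hits / trials:.2f}")  # ~0.25
```

Roughly a quarter of purely random decades contain four straight up-years, so a "trend" of that length carries far less information than it seems to.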

It is important to note that the behavioral approach to studying financial markets is not an isolated approach. Sometimes it interacts with other things, like financial frictions.

What are financial frictions in this context?

In financial markets, there are many frictions. For instance, ordinary households typically do not invest directly in opaque markets such as the market for mortgage-backed securities. Instead, they invest through mutual fund managers. With this specific structure, some conflicts of interest may arise. For example, mutual fund managers might care more about making money for themselves than helping their clients. Such frictions ultimately could interact with behavioral biases, and they can amplify each other, especially during bad times like financial crises.

Another example is transaction costs that you need to pay a broker when you buy and sell stocks and bonds. One finding in finance is that when individual investors decide to actively manage their own stock portfolios, on average, they underperform. In other words, if you instead give your money to an index fund, you are likely to do better.

Financial economists try to understand why individual investors trade so much on their own even though they underperform index funds.

One behavioral explanation is that investors may be overconfident in their ability to invest. There is a very nice paper by Mark Grinblatt [UCLA Anderson School of Management] and Matti Keloharju [Aalto University School of Business in Finland] that uses data from Finland to show that people with a higher level of overconfidence trade more. In Finland, all 18-year-old male citizens are required to go into the military. When they do so, they take aptitude tests and behavioral tests. Overconfidence in this paper is measured as their self-reported confidence based on the behavioral tests minus how confident they should be based on their performance on the aptitude tests. The interesting thing is that this measurement of overconfidence predicts how frequently people trade stocks several years later when they open their brokerage accounts. And those who are more overconfident trade more and have poorer trading performance.

Understanding behavioral biases such as overconfidence is helpful not only for understanding financial markets, but also for helping people to make better decisions—things like saving more money for retirement and keeping their jobs.

Can you share some results from some of your recent work?

I am very interested in understanding the origin of financial bubbles and crashes. We study why bubbles—for instance, housing bubbles—arise in the first place and also, along with the formation of bubbles, why people begin to trade more. How long is a bubble going to last? When and why do bubbles eventually crash? And what are the consequences?

Our model tries to answer these questions through something called extrapolative expectations. Consistent with the sample-size neglect we discussed earlier, extrapolative expectation is the notion that, after seeing a sequence of good stock returns, many real-world investors tend to believe that the stock market is going to keep rising in value.

In this model, a fundamental shock is needed for a bubble to start—something like good news about the market. Those signals create a positive price impact on the market, so market prices go up. Then our extrapolators, people who have these extrapolative expectations, start to get more and more excited. They take the initial increase in market prices too seriously, and their self-reinforcing beliefs end up helping prices keep going up. However, as the initial good news that got people excited in the first place recedes into the distant past, extrapolators' irrational exuberance diminishes, and the whole bubble unravels—you get a crash.
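The flavor of these dynamics can be captured in a few lines. This is a deliberately simplified sketch; the parameters and functional forms are illustrative assumptions, not the published model:

```python
import numpy as np

T = 80
fundamental = np.zeros(T)
fundamental[5:] = 1.0            # a one-time piece of good news at t = 5

w_growth, w_value, memory = 1.2, 0.15, 0.75
price = np.zeros(T)
sentiment = 0.0                  # extrapolators' decaying average of past changes
for t in range(1, T):
    growth_signal = w_growth * sentiment                      # "the trend continues"
    value_signal = w_value * (fundamental[t] - price[t - 1])  # pull toward fundamentals
    change = growth_signal + value_signal
    price[t] = price[t - 1] + change
    sentiment = memory * sentiment + (1 - memory) * change

peak = int(price.argmax())
print(f"good news at t=5; price peaks at {price[peak]:.2f} around t={peak}, "
      f"well above the fundamental value of 1.0, then unwinds")
```

The price overshoots the fundamental because extrapolators keep buying into the trend; once the extrapolated enthusiasm decays, the value signal drags the price back down: a bubble and a crash.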

How does the idea of frenzied trading come into play?

There is lots of empirical evidence that suggests when bubbles occur, you see a lot of trading in financial markets: investors buy and sell lots of stocks. Economists have had a hard time explaining this.

We have this idea, supported by some neuroscience studies, that when investors are looking at a stock market as a bubble is being created, their trading decisions are influenced by two conflicting signals. On the one hand, investors see a positive trend, and their extrapolative expectations tell them that the price is going to keep going up. This is what we call a growth signal. On the other hand, investors are also aware of the fact that the stock market may be overvalued, and therefore it may crash in the near future. We call this a value signal. Given that the value signal and the growth signal typically tell investors to trade in opposite directions, investors may slightly change the weights they put on these conflicting signals over time—we call these changes in weight "wavering."

As the bubble develops, these two signals endogenously become very large, or extreme. And as a result, a small degree of wavering could generate a lot of trading volume.

Intuitively, you can think of the value signal and growth signal as two voices in your head telling you different things. During normal periods, the voices are speaking in pretty low tones. In this case, you may be wavering, but that does not change your actions much. But during bubble periods, it is like you have two crazy voices screaming at you, telling you radically different things. Then even the same small degree of wavering is going to cause lots of trading.
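The arithmetic behind that intuition is simple, as this toy calculation shows (the numbers are purely illustrative):

```python
def demand(w, growth_signal, value_signal):
    # Weight w on the growth voice; weight (1 - w) on the value voice,
    # which pushes the trade in the opposite direction.
    return w * growth_signal - (1 - w) * value_signal

for label, size in [("normal times", 1.0), ("bubble", 30.0)]:
    swing = demand(0.52, size, size) - demand(0.48, size, size)
    print(f"{label}: the same small waver in weights moves demand by {swing:.2f}")
```

The same four-percentage-point waver that barely registers in calm markets moves demand thirty times as much when both signals are extreme, which is why wavering generates so much volume during bubbles.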

The idea of wavering we came up with turns out to be very helpful in generating lots of trading volume during bubbles.

Do you plan to collaborate with anyone in particular at Caltech?

On the one hand, my research is very structural and mathematical. On the other hand, it requires very good intuition about financial markets. To get that intuition right, sometimes you need to work with psychologists, neuroscientists, biologists, and behavioral economists. Caltech has a lot of strength in these other areas that can definitely help me to build my research.

Were there any other factors that led you back to Caltech? 

I really like the idea of having an impact on talented students, and Caltech clearly has very high-quality undergraduate and graduate students. I think it is going to be fun to teach the students and do research with them.

The Arc of Abolition

News Writer: 
Lori Dajose
Assistant Professor of History Sarah Gronningsater
Credit: Caltech

Assistant Professor of History Sarah Gronningsater, who grew up in New York City, played on the same playground nearly every day of her childhood. She did not know it as a child, but from the 1820s to the 1850s, a group of free African American households lived on the very site of that playground. Gronningsater sees this as a striking coincidence, as she is now a historian studying 18th and 19th century American history, with a particular focus on African Americans in New York State. We sat down with her to discuss her passion for the intertwined political, social, and legal history of African Americans in the United States.

Your work is centered on the 18th and 19th centuries. What in particular do you focus on within that time period?

My focus is on slavery and emancipation in the American North. I examine the states where slavery was abolished before the Civil War through political and legal processes that took decades. I'm very interested in how these processes unfolded, especially in New York State. In particular, I focus on the children of slaves. Between 1780 and 1804, five northern states, including New York, passed laws freeing children who were born to slaves after a certain date. Although these children were technically born free, they had to work as servants for their mother's masters until adulthood. Because they were born into this "in-between" status, they were particularly attuned to the workings of law and politics. They, and their parents, wanted to be sure that masters did not cheat the system. They also tried to find ways to gain their freedom earlier than the state's laws originally promised. As these children grew up, not only did they stay legally and politically focused, but they used their wisdom and experience to start protesting against slavery more broadly in the United States. There was this entire generation of children who entered adulthood very politically and legally active.

I ultimately argue that these children helped topple slavery throughout the United States. They became adults in the decades before the Civil War and actively pushed northern politicians to become more anti-slavery. They did everything possible to help southern slaves achieve freedom, through the Underground Railroad and also by instigating court cases and convincing lawmakers to pass stronger anti-slavery laws. They knew from their experiences as children how to use law and politics to get what they wanted. I'm currently working on a book describing these findings titled The Arc of Abolition: The Children of Gradual Emancipation and the Origins of National Freedom.

How did you become interested in this topic?

I've always loved African American literature and history—I've read a lot about extraordinary people doing extraordinary things. I majored in history and literature at Harvard and wrote my undergraduate thesis about the Massachusetts 54th regiment, one of the first northern regiments of black soldiers. I wondered, "Why were these free black men in Massachusetts? Why did they want to do this?" That's where my interest in northern black politics started, so the passion has been longstanding. After college, I got a master's in American history at Oxford and my doctorate at the University of Chicago, where I studied American history with Tom Holt and Amy Stanley, two outstanding scholars of slavery, emancipation, law, and politics.

What does a historian's day-to-day schedule look like?

Even though we now have so much digitized material, the work of a historian still requires the ability to access archives in person. You can find important material on law and politics in newspapers, but many of the newspapers in, say, tiny towns in rural New York, have not been digitized. Those local sources hold some of the best evidence supporting my arguments, such as details about small legal cases involving slaves or locally important political controversies. My work requires me to drive around and visit county clerks' offices and very small historical societies, which may only be open one day a week. You have to be willing to pound the pavement a little.

Will you be teaching any courses this year?

I will be teaching three courses. One is on early American rebellions and revolutions, beginning in the 1600s with conflicts like Bacon's Rebellion and continuing through the American Revolution. Another is on the general history of the 19th-century United States. And finally a course on the history of U.S. baseball from the 1840s to the present. The history of baseball is fun and fascinating, and it's a way to uncover all sorts of important developments in American history, whether it be urbanization, or issues of immigration, gender, or desegregation in sports.

Why does your work focus on New York?

When I started graduate school, I did not plan on working on New York history specifically. But I soon learned that New York State was the site of the largest emancipation of American slaves before the Civil War, and there were archives full of fascinating information about the children of gradual emancipation. I do particularly appreciate working on the history of a state where my family has deep roots. My family has lived throughout the state for many generations. It has been fun, as I'm writing my book, to think about the events that my great-great-great-grandfather might have attended when he lived in the "Burned Over District" in western New York, for example. And, in an amazing connection that I only discovered after the fact, the playground that I used to play in as a child was actually a site where a group of free African American households had settled before Central Park was established.

What excites you about being in Pasadena?

I love hiking, and this is one of the only places I've ever lived where you can be outside every day of the year. You can be in the mountains in a hop, skip, and a jump.

Is there anything you are particularly excited for at Caltech?

One of the reasons I was incredibly excited to take this position is that Caltech is a place that values and supports research, so it's a wonderful place to get intellectual support. But at the same time, it's also home to a small undergraduate population of bright, hardworking students. I love teaching and being in the classroom, and I've actually particularly enjoyed teaching students who are interested in math and science. I like working with students who are very intellectually capable, but might not love history. Yet.


Practical Mathematics: An Interview with Andrew Stuart

News Writer: 
Robert Perkins
Andrew Stuart
Credit: Caltech

New Caltech faculty member Andrew Stuart is interested in how the current era of data acquisition interacts with centuries of human intellectual development of mathematical models that describe the world around us. As an applied mathematician in the Division of Engineering and Applied Science (EAS), he generates the mathematical and algorithmic frameworks that allow researchers to interface data with mathematical models. His work is informed by—and has applications for—diverse arenas such as weather prediction, carbon sequestration, personalized medicine, and crowd forecasting. Originally from London, Stuart earned his bachelor's degree at Bristol University and then a combined master's/PhD at Oxford University. He worked as a postdoc at MIT in the late '80s, as a lecturer at the University of Bath in England from 1989 to 1992, and then as professor at Stanford University and the University of Warwick in England. He relocated to Southern California this summer. Recently, Stuart answered a few questions about his research and his new life at Caltech.

What brought you to Caltech?

The high-quality research in engineering and applied science, as well as the high quality of the undergraduate and graduate students. I'm excited by the opportunity to develop my mathematical research in new directions, both in terms of applications and in terms of underpinning mathematical methodologies. There's an undeniable beauty to pure mathematics, but what has always driven my interest in mathematics is the potential for diverse applications, and the role of mathematics in unifying these different fields. Caltech provides enormous potential for collaboration in areas of interest to me, in the EAS and Geological and Planetary Sciences divisions for example, and also at JPL.

For example?

Weather forecasting. Newton's laws, describing conservation of mass, momentum, and energy, in principle have enormous predictive power. But lack of precise knowledge of the initial state of the atmosphere, together with physical effects on scales too small to resolve efficiently on the computer, means that the "butterfly effect" (in which small changes in complex systems ultimately yield major effects) can lead to poor forecasts. Data provides a potential resolution to this problem, or at least an amelioration of it. Right now we have satellites, aircraft, and weather balloons all collecting vast amounts of data; figuring out how best to use these data can substantially improve the accuracy of our forecasting. A lot of good applied mathematics is about formulating the right problems, as well as finding algorithms for solving them.
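The core idea of blending an imperfect model with noisy data, each weighted by its uncertainty, can be sketched with a scalar Kalman filter on a toy system. All the numbers below are illustrative assumptions, a world away from a real atmospheric model:

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.9, 0.04, 0.25    # toy dynamics; model-error and observation-error variances
truth, est, var = 1.0, 0.0, 1.0
for step in range(50):
    truth = a * truth + rng.normal(0, q ** 0.5)    # the "real atmosphere"
    obs = truth + rng.normal(0, r ** 0.5)          # a noisy satellite/balloon reading
    est, var = a * est, a * a * var + q            # forecast: push the model forward
    gain = var / (var + r)                         # how much to trust the data
    est = est + gain * (obs - est)                 # analysis: blend in the observation
    var = (1 - gain) * var
print(f"final truth {truth:+.3f}, assimilated estimate {est:+.3f}")
```

Each cycle the filter forecasts with the model, then corrects toward the data in proportion to the gain; weather centers run the same forecast-analysis loop with millions of variables.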

How did you get into your field?

I grew up in an academic household; I saw that it was a challenging, stimulating, and intellectually rewarding career. My dad, who worked at Imperial College in fluid mechanics, loved his job and I was very aware of this. I then developed an excitement for mathematics that grew once I started majoring in the field as an undergraduate student.

What are you looking forward to about being in Southern California?

The great combination of urban culture and outdoors life. I enjoy cinema, art, reading novels, and hiking. Recently I have been to Kings Canyon and Sequoia, and I have also visited MOCA (Museum of Contemporary Art) Grand Avenue.


Human Fear and the Social Brain: A Conversation with Dean Mobbs

News Writer: 
Lori Dajose
Dean Mobbs
Credit: Caltech

Dean Mobbs, a new assistant professor of cognitive neuroscience, studies what happens in our brains when we interact with others and when we are under threat. Mobbs, a native of Kettering, England, received his PhD from University College London and was an assistant professor at Columbia University before arriving at Caltech this fall. Having once worked as a research assistant at Stanford University, Mobbs is no stranger to the West Coast. We sat down with him to discuss the difference between fear and anxiety, the idea of safety in numbers, and his return to California after 12 years.

What is your research focus within neuroscience?

I focus on two areas. The first is using brain imaging to study neural responses to ecologically defined threats. We use fMRI [functional magnetic resonance imaging] and virtual games to put people in various situations—for example, one where they have to escape from a virtual predator, or where a predator is absent, but could appear at any time. These studies show that a potential threat—something that may happen in the near or distant future—evokes neural circuits associated with anxiety. This is in contrast to when a subject is presented with a threat that is present, which evokes different neural circuits that are associated with fear.

We also study the neural basis of social interaction—what happens when you place people into a social environment and how that alters their emotions. Animals live in groups, which is the most common way to protect yourself as an animal. In ecology, this is called risk dilution—or, simply put, "safety in numbers." So we study situations when people are under threat alone versus when they are with two and three other people. We've looked at groups as large as 15 people, and we find that the larger the group, the less fear people feel when they are in threatening situations.

What has your academic path been like?

For many years, I worked as a house painter in the United Kingdom. Coming from a working-class background, my younger brother—a psychiatrist in Oregon—and I are the only ones in my family who have gone to university. My path has therefore been defined by overcoming negative expectations and navigating a system that was closed to people of my geography and class.

I returned to school in my mid-twenties, obtaining a bachelor's degree in psychology from the University of Birmingham. This was followed by a research assistant position at Stanford University, studying neurogenetic disorders. In particular, I was looking at people with Williams Syndrome, which is characterized by an extreme propensity to be social despite other developmental deficits like low IQ. I then did my PhD at University College London where I studied the neural basis of emotion. I followed my PhD with a postdoctoral fellowship at the Medical Research Council in Cambridge, and was also a research fellow at Clare Hall in the University of Cambridge.

After my PhD, I continued to refine my research question concerning the neural basis of ecologically defined threats. We looked at the neural effects of distant threats versus close ones (tarantulas, for example); how people "choke," or make mistakes under pressure; how envy increases our enjoyment at others' misfortunes; and the neural basis of vicarious reward, or why we find it rewarding to see others win money.

My path through neuroscience was motivated because I fell in love with clever experiments in social psychology and affective science. That was around the time when psychology was becoming more biological because of brain imaging. Since I've been a PI, I have been merging these fields.

What excites you about being at Caltech?

What excites me about Caltech is the intellectual environment. It's a joy to work here. I am also excited by the approaches that the economists take. In my opinion, the best social neuroscience research takes an economic approach, because it uses well-established economic models and game theory, and applies mathematical models to decision-making processes. Coming from a psychology background, I have the opportunity to interact with people who have different ways of thinking about these questions and take a broad approach to decision making—researchers in political science, psychology, neuroscience—and to bounce ideas off of a rich, diverse pool of people.

What do you like to do in your free time?

I have a 17-month-old daughter at home so mostly I am enjoying being a father. I also love taking trips to explore California; it is truly an amazing part of the world, and I don't think I've stopped smiling since I've arrived. 

Engineering Nanodevices to Store Information the Quantum Way

News Writer: 
Jessica Stoller-Conrad
Stevan Nadj-Perge, assistant professor of applied physics and materials science
Credit: Photo courtesy of S. Nadj-Perge

Creating quantum computers—which some people believe will be the next generation of computers, with the ability to outperform machines based on conventional technology—depends upon harnessing the principles of quantum mechanics, or the physics that governs the behavior of particles at the subatomic scale. Entanglement—a concept that Albert Einstein once called "spooky action at a distance"—is integral to quantum computing, as it allows two physically separated particles to store and exchange information.

Stevan Nadj-Perge, assistant professor of applied physics and materials science, is interested in creating a device that could harness the power of entangled particles within a usable technology. However, one barrier to the development of quantum computing is decoherence, or the tendency of outside noise to destroy the quantum properties of a quantum computing device and ruin its ability to store information.

Nadj-Perge, who is originally from Serbia, received his undergraduate degree from Belgrade University and his PhD from Delft University of Technology in the Netherlands. He received a Marie Curie Fellowship in 2011, and joined the Caltech Division of Engineering and Applied Science in January after completing postdoctoral appointments at Princeton and Delft.

He recently talked with us about how his experimental work aims to resolve the problem of decoherence.

What is the overall goal of your research?

A large part of my research is focused on finding ways to store and process quantum information. Typically, if you have a quantum system, it loses its coherent properties—and therefore, its ability to store quantum information—very quickly. Quantum information is very fragile and even the smallest amount of external noise messes up quantum states. This is true for all quantum systems. There are various schemes that tackle this problem and postpone decoherence, but the one that I'm most interested in involves Majorana fermions. These particles were proposed to exist in nature almost eighty years ago but interestingly were never found.

Relatively recently theorists figured out how to engineer these particles in the lab. It turns out that, under certain conditions, when you combine certain materials and apply high magnetic fields at very cold temperatures, electrons will form a state that looks exactly as you would expect from Majorana fermions. Furthermore, such engineered states allow you to store quantum information in a way that postpones decoherence. 

How exactly is quantum information stored using these Majorana fermions?

The fascinating property of these particles is that they always come in pairs. If you can store information in a pair of Majorana fermions it will be protected against all of the usual environmental noise that affects quantum states of individual objects. The information is protected because it is not stored in a single particle but in the pair itself. My lab is developing ways to engineer nanodevices which host Majorana fermions. Hopefully one day our devices will find applications in quantum computing.
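A sketch of the textbook construction (standard notation, added for reference): each Majorana operator is its own antiparticle, γ = γ†, and two of them combine into one ordinary fermion mode,

    c = ( γ1 + i γ2 ) / 2,

whose occupation number n = c†c (0 or 1) is the stored bit. If γ1 and γ2 sit at opposite ends of a device, no local noise source acting on only one end can measure or flip n, which is precisely the sense in which the pair, rather than either particle alone, protects the information.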

Why did you want to come to Caltech to do this work?

The concept of engineered Majorana fermions and topological protection was, to a large degree, conceived here at Caltech by Alexei Kitaev [Ronald and Maxine Linde Professor of Theoretical Physics and Mathematics], who is in the physics department. A couple of physicists here at Caltech, Gil Refael [professor of theoretical physics and executive officer of physics] and Jason Alicea [professor of theoretical physics], are doing theoretical work that is very relevant for my field.

Do you have any collaborations planned here?

Nothing formal, but I've been talking a lot with Gil and Jason. A student of mine also uses resources in the lab of Harry Atwater [Howard Hughes Professor of Applied Physics and Materials Science and director of the Joint Center for Artificial Photosynthesis], who has experience with materials that are potentially useful for our research.

How does that project relate to your lab's work?

There are two-dimensional, or 2-D, materials that are basically very thin sheets of atoms. Graphene—a single layer of carbon atoms—is one example, but you can create single-layer sheets of atoms from many materials. Harry Atwater's group is working on solar cells made of a 2-D material. We are thinking of using the same materials and combining them with superconductors—materials that can conduct electricity without releasing heat, sound, or any other form of energy—in order to produce Majorana fermions.

How do you do that?

There are several proposed ways of using 2-D materials to create Majorana fermions. The majority of these materials have strong spin-orbit coupling—an interaction of a particle's spin with its motion—which is one of the key ingredients for creating Majoranas. Also, some 2-D materials become superconductors at low temperatures. One of the ideas we are seriously considering is using a 2-D material as a substrate on which to build atomic chains that will host Majorana fermions.

What got you interested in science when you were young?

I don't come from a family of scientists; my father is an engineer and my mother is an administrative worker. But my father first got me interested in science. As an engineer, he was always solving something, and he brought home some of the problems he was working on. I worked on them with him and picked up the habit at an early age.

How are you adjusting to life in California?

Well, I like being outdoors, and here we have the mountains and the beach, and it's really amazing. The weather here is so much better than in the other places I've lived. If you want an idea of what the weather in the Netherlands is like, just swap the number of sunny days here for the number of rainy days there.

 

Garnet Chan Talks Quantum Chemistry and Chinese Food

News Writer: 
Whitney Clavin
Garnet Chan, Bren Professor of Chemistry at Caltech
Credit: Caltech

Garnet Chan, Bren Professor of Chemistry, recently moved to Pasadena from New Jersey, where he was a professor at Princeton University for the past four years. Chan's specialty is quantum chemistry, a field pioneered at Caltech by the late Linus Pauling to understand the behavior of molecules. Raised in Hong Kong, Chan earned his bachelor's degree (1996) and PhD (2000) from the University of Cambridge, then was a Miller Fellow at UC Berkeley before taking a faculty position at Cornell University.

Chan sat down with us to discuss his move to Pasadena and his excitement over the Chinese culinary delights the area has to offer—and to answer a question he's heard before: What exactly does a quantum chemist do?

How do you describe the big picture of what you do?

Broadly speaking, I'm a theorist, and I'm interested in going from the very simple equations of quantum mechanics—which are the fundamental equations of nature, the most basic equations we know about the world—to the actual behavior of molecules and materials and real matter that we can touch around us. It's a discipline that involves finding computer algorithms that allow us to simulate these equations, at least approximately.

What makes me a quantum chemist as opposed to another kind of researcher working with quantum mechanics is that the problems I'm interested in are the ones that chemists study. These can be very concrete things like what steps are involved when an enzyme catalyzes a reaction, or what makes a material absorb a specific frequency of light. Basically, we are trying to simulate complex chemistry.
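For reference, the "very simple equations" here are, concretely, the many-electron Schrödinger equation in its time-independent form (standard notation, not from the interview):

    H Ψ = E Ψ,

where H contains the kinetic energy of every electron plus the Coulomb attractions and repulsions among the electrons and nuclei. Writing H down takes one line; solving it exactly scales exponentially with the number of electrons, which is why the field lives and dies by clever approximate algorithms.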

What problems are you specifically working on?

One problem we are working on is the problem of high-temperature superconductivity, which has been a mystery for 30 years. Superconductivity is the name given to the phenomenon where if you lower the temperature sufficiently in a material, you'll reach a point where the resistance to electric current all of a sudden goes to zero. We then say that the material is superconducting. In a certain class of materials called high-temperature superconductors, you do not have to lower the temperature very much. You still have to lower the temperature to minus 140 degrees Celsius. It seems cold, but that's equivalent to 130 to 140 Kelvin, and most materials are only superconducting up to about 10 Kelvin. Even though high-temperature superconductors were discovered 30 years ago, we still don't know how they work.
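The arithmetic behind those numbers: T[K] = T[°C] + 273.15, so minus 140 degrees Celsius is about 133 kelvin, more than ten times warmer than the roughly 10-kelvin ceiling of most conventional superconductors.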

I have kind of an attachment to this problem because I like problems that people have banged their heads on for decades, and in many cases given up on solving. I think many people would agree that this is probably one of the single most perplexing questions about materials. The real thing that has changed in the last 30 years is the development of new computational tools for quantum mechanics. You used to solve problems by having some inspired guess. Our hope now is that we don't have to make such an inspired guess because we can get at least some of the way there by computation.

In some sense, I've spent 15 years building up a set of tools that can address the different challenges involved in simulating these materials. We've recently had success simulating simplified models of the materials; by simplification, think of trying to simulate the planets of the solar system with some of the planets taken out. We can now simulate the models to very high precision, and you see behavior very similar to that of real high-temperature superconductors. This gives us confidence that we can soon understand what is happening in the real materials.

Do you have any other projects?

Another one of my interests is related to the biological mechanism by which enzymes "fix" nitrogen. Most of the nitrogen on Earth is in the air as nitrogen gas [N2], but humans can't process nitrogen gas. Instead, we get much of our nitrogen indirectly from fertilizer—or from bacteria. Certain bacteria have an enzyme that naturally "fixes" nitrogen, which means that it is converted into ammonia or related compounds that fertilize plants. The plants make amino acids, and we eat the plants—or animals eat the plants and we eat the animals. In the end, the nitrogen gets into us.

What makes the biological process so fascinating is that it is able to proceed under ambient biological conditions, while industrial fertilizer production, via the Haber process, proceeds at high temperatures and pressures, and consumes enormous amounts of energy. This means that the biological enzymes are doing some very clever chemistry.  We hope to unravel the details of this process using the principles of quantum mechanics. We've recently uncovered some unexpected behavior of the electrons in these enzymes. Perhaps the answer to how they work lies there!
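For reference, both routes carry out the same net reduction (standard chemistry, not from the interview). Industrially, the Haber process runs

    N2 + 3H2 → 2NH3

at high temperature and pressure over an iron catalyst, while the nitrogenase reaction is conventionally written with its energy cost made explicit:

    N2 + 8H+ + 8e− + 16 ATP → 2NH3 + H2 + 16 ADP + 16 Pi.

The enzyme pays in ATP at ambient conditions, where the factory pays in heat and pressure, which is exactly the clever chemistry Chan wants to unravel.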

What happens after you simulate the chemistry for reactions like this?

The results of computational chemistry simulations are used by many chemists, not just theorists like me. In fact, these days a very large number of experimental papers have quantum chemical calculations in them to help interpret the results—in this sense, there is a very healthy interplay between theory and experiment. However, I see the role of our simulations as having impact beyond the specific problems that we choose to study. That is because the tools that we are building to perform our simulations help push the frontier of the types of chemistry and reactions that people can study. Eventually, these tools will be usable by all chemists, and I hope they can be used to study all of chemistry.

Quantum chemistry has always evolved to make new tools to answer more and more complicated questions. In the beginning, in the 1920s and 1930s, people were mainly studying atoms. Later, they studied molecules and what holds them together—Linus Pauling, who was a professor here, started this type of work. These days we are working at a frontier where the tools are being developed to study the most complex problems of biology and materials.

What are you most excited about in coming to Caltech?

I'm completely sold on this place. People here are focused on science, so this is exactly the right place for me. I also like the scale. It's so small that you really feel like you're part of a family. Certainly the chemistry department feels like a very tight-knit community.

What do you like about Southern California?

I think there's a reason why so many people live in Southern California. It doesn't get better than this. You have great weather. There's lots of good food. People complain about traffic, but I lived in New Jersey and traffic there is terrible. Also, this area of the country has probably the best Chinese food. There are hundreds of good Chinese restaurants in the cities of San Gabriel, Monterey Park, and Alhambra. Food is such a big part of all cultures, but certainly a big part of Chinese culture, so that's a big plus.

Taking Flight: An Interview with Soon-Jo Chung

News Writer: 
Robert Perkins
Soon-Jo Chung
Credit: Caltech

New Caltech faculty member Soon-Jo Chung splits his time between Caltech's campus, where he is a Bren Scholar and an associate professor of aerospace in the Graduate Aerospace Laboratories of the California Institute of Technology (GALCIT), and NASA's Jet Propulsion Laboratory (JPL), where he is a research scientist. His work ranges from the creation of a robotic bat with flexible wings and realistic flight dynamics to the control of swarms of small satellites to the development of computer-vision-based navigation systems. Originally from Seoul, South Korea, Chung earned his bachelor's degree at the Korea Advanced Institute of Science and Technology (KAIST), followed by master's and doctoral degrees from the Massachusetts Institute of Technology (MIT). For most of the past decade, Chung was a faculty member in aerospace engineering at the University of Illinois at Urbana-Champaign—visiting California each summer between 2010 and 2014 as a JPL summer faculty research fellow working on distributed small satellites. He returned to Southern California in August. Recently, Chung answered a few questions about his life and work.

What brought you to Caltech?

Caltech's GALCIT has been at the center of aerospace innovation. As an aerospace enthusiast, it was my dream to work at Caltech and JPL. Also, there is a focus on space engineering at Caltech (for example, Sergio Pellegrino's work) that creates a great opportunity for someone like me. Another parochial and nerdy view: I truly enjoy being in this close-knit intellectual community, since all of my degrees are from institutes of technology.

Why are you interested in swarm robotics? What can swarms do that individual robots cannot?

When I deliver a research presentation, I tend to show a fascinating video clip from Disney's animated movie Big Hero 6 in which millions of micro robots autonomously transform themselves into a single structure. In fact, achieving such a capability using hundreds to millions of autonomous tiny spacecraft has been one of my research focus areas. You can reconfigure a swarm system into another shape quite easily; think of autonomous flying LEGO blocks that can build whatever you imagine. Also, the entire system doesn't fail even if you lose a handful of individual robots from the swarm. In essence, swarms are more flexible, more robust, and possibly more capable than a monolithic system. As they say in the movie, the applications are limitless!

Why did you choose to create a bat-like drone? What advantages does it offer?

I started working on robotic flapping flight simply because I realized I could apply work I'd originally done to control multiple spacecraft in Earth orbit to the synchronized flapping-wing motions of birds and bats. Then, as I read more about animal flight and watched these animals in action, I became fascinated by the beautiful maneuvers of flyers with flexible, articulated wings. Arguably, the bat is one of the most advanced animal flyers, with its ability to make sharp turns and perform upside-down perching. The dynamics of bat flight are even more complex and elegant because of the bat's soft membrane wings. I also wanted to challenge the status quo of drones that predominantly use high-speed rotor blades, which are quite noisy and dangerous. The goal is to build a safe, energy-efficient, soft-winged robot that can fly like a bat.

What excites you the most about the future of autonomous vehicles?

I hope Caltech will play an important role in autonomous vehicle research, especially with its Center for Autonomous Systems and Technologies (CAST), which is led by Mory Gharib. I envision that the future of transportation, especially in big cities, will look quite different, and Mory, several other CAST faculty, and I are looking into developing a research program on autonomous flying cars that could potentially revolutionize our transportation systems. For example, why should self-driving cars be restricted to a two-dimensional world? It might be technologically easier to achieve a fully autonomous flying-car network than to add self-driving cars to existing roads, since there is no gridlock and there are no pedestrians in the sky.
