Have you ever noticed that, once a given idea gains acceptance in the community, everyone starts seeing it exemplified practically everywhere? I am not saying this is without merit, but you might agree that there was a time when no one looked at water as an “extra” in DFT calculations. Then water turned into an integral part of virtually every mechanism. Another example is when everyone started invoking concerted metalation/deprotonation in C-H activation mechanisms. The list goes on and on.
One particular modern trend that I notice is towards involving single electron transfer in chemical transformations. What’s annoying is when people present this idea as an earth-shattering concept that is without precedent. I would respectfully remind everyone that there have been several periods in the past just like this one. Electron transfer would become fashionable, and a ton of data would emerge to support its profound influence. Then people would forget it and move on, only to rediscover the virtues of moving electrons one at a time a decade or so later. This apparent amnesia gives me an opportunity to comment on an excellent paper from one of my heroes and mentors, the late Professor George A. Olah. Below is a fascinating piece from the ’80s, in which Olah considered the generation of alcohols in the course of some Wittig reactions. Single electron transfer is the mechanism that accounts for the formation of the reduced product in this case. Yes, even the venerable Wittig reaction can involve electrons jumping one at a time!
In 19th-century chemistry papers, people routinely described how they had eaten or smelled the compounds they were making in the lab. By today’s standards, this practice is foolish, although there are enthusiasts who still resort to this dubious methodology. But let’s dig a little deeper. While tasting the products of one’s synthesis is reckless, let’s remember the teachings of Helmholtz. Among his many original research contributions is an argument about the role of visual sensation in science (http://study.sagepub.com/sites/default/files/Hermann%20von%20Helmholtz.pdf). All of our scientific advances depend on observations of some kind, and they are inevitably visual. Appreciation of everything from numerical values to molecular structure depends on how our retina responds to a stimulus. But is that all there is? Is it enough to rely solely on visual stimuli in our attempts to understand the world? Unfortunately, we do not have many options. Sound, taste, and smell are different from vision, but they are not as information-rich when it comes to understanding the matter around us. I wonder if there is a civilization out there that has developed advanced senses other than vision to respond to changes around them and extract meaningful information.
From time to time I will write short essays on artificial intelligence (AI). AI is something that I know little about, so please forgive my ignorance. Wait… What I just said already sounds dangerously foolish because, if I am not an expert, I should probably pipe down. But I won’t be quiet because AI is now everywhere and, at its core, it commonly employs linear regression analysis, which has been around forever. At least this is what I would like to believe at this stage.
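To make that last claim concrete: fitting a straight line to data by least squares is the prototypical regression that so much of machine learning builds on. Here is a minimal sketch (the data points are made up for illustration, and real AI systems of course layer far more on top of this):

```python
import numpy as np

# Toy data: y is roughly 2*x + 1, with a little noise baked in by hand.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Build the design matrix [x, 1] and solve the least-squares problem,
# i.e. find the slope and intercept that minimize sum((y - (a*x + b))**2).
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"y = {slope:.2f}*x + {intercept:.2f}")
```

The point is that the "learning" here is nothing more exotic than minimizing a squared error, a procedure that, as noted above, has been around forever.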
I am starting to read Bostrom’s “Superintelligence,” and it paints a rather gloomy picture of what is in store for us if things go astray. The point is that machines may one day surpass humans. Our fate will then depend on the actions of some powerful AI system, but I have a strong feeling that this won’t happen for a while because training sets are created by humans. Having said that, I would also hate for our role to be relegated to the generation of those training sets. Don’t laugh: plenty of Edisonian science feeds into them, so many of us are at it already.
Here is an uncomfortable thought I just had: the scary moment will be the day when our own intelligence is referred to as some primitive proteinaceous intelligence (PI). As a corollary, there will then be nothing artificial about AI. Think about it.
As you may recall from basic biochemistry courses, introns represent long stretches of noncoding DNA between exons. The latter are the bread and butter of genetic processes because they carry the code that makes us who we are. Introns (junk DNA) have long been suspected of playing a functional role in biology, but no one had been able to show it in any rigorous manner. Now there are accounts that suggest that junk DNA has a meaning. In yeast, introns persist after splicing and appear to be associated with cell growth regulation under stress. You can read about this fascinating find in the link below, but I call upon the research community to address another gap, namely the mysterious fingerprint region in IR spectra. Let me tell you: I am tired of teaching this material on a yearly basis, regurgitating the apparent dissonance between the name “fingerprint” and the lack of any teachable meaning in that region. What a misnomer. When will we get a handle on that paradox and move beyond discussing broad peaks around 3300 cm⁻¹ (an OH stretch) and 1700 cm⁻¹ (a carbonyl stretch)? At least biologists named the regions they had no clue about for what they were worth: junk. By way of contrast, we chemists have no idea how to interpret 90% of an IR spectrum, yet we refer to the corresponding region as the “fingerprint.”
I have to get back to blogging, partly because I miss it and partly because the RSC mentioned that I had a blog when they ran an announcement related to my new position as an Associate Editor of Chemical Science (http://blogs.rsc.org/sc/2018/12/20/meetandreiyudinchemicalscienceassociateeditor/?doing_wp_cron=1547228932.9969739913940429687500). Some people are wondering what has been going on and why I am not posting anything. So I guess this is as good a time as any to get back to writing. Fittingly, we had Prof. Kei Murakami of Nagoya University visit us yesterday as part of his Canadian tour. Kei gave a great talk, and I want to focus on the paper he published not long ago (http://science.sciencemag.org/content/sci/359/6374/435.full.pdf). Apart from seeing an amazingly facile route to nanoribbons, I marvel at the simplicity of the reaction below that enabled the rapid assembly of the graphene-like building blocks. I wonder why this process took so long to reveal itself. Just look at what is happening here, and never mind the mechanism, for which we do not have time. My question is how many times in the past, likely all over the world, students must have run a coupling between a biphenyl chloride and some amine, or a boronic acid, or what not. I know that homocoupling is a common by-product of palladium-catalyzed processes, but are you trying to tell me that no one ever ran control experiments in their mechanistic studies and noticed the (homocoupling – 2) mass? Nope, which is why we now have a new reaction. Kei left for Queen’s University after his talk here, and I wish him all the best for what should be a notable career!
In part due to my long-standing interest in heterocycle-driven drug discovery, I was kind of surprised to see this Org. Lett. paper. In this article, Boger and colleagues showcase a fascinating new way of making vinylogous formamides from 1,2,3-triazines. What is curious here is the very fact that triazines can participate in nucleophilic addition reactions. I have seen many attempts to introduce these rings into bioactive substances, but now that triazines have been shown to be excellent electrophiles toward amines, I should adjust my expectations for this class of molecules. In the Boger report, the reactivity of the parent 1,2,3-triazine was exemplified using secondary amines. The preparative sequence is straightforward: mix amine and triazine in THF at room temperature, and off you go. The C4 position is the preferred point of attack, leading to the extrusion of nitrogen gas. Clearly, the preferred delocalization of the negative charge is behind the documented regiochemistry. Orbital considerations are also consistent with this mode of reactivity. On balance, this is a nice method to make vinylogous formamides. It also suggests that 1,2,3-triazines should never be used as constituents of bioactive structures.
I was visiting Novartis in Basel, Switzerland, over the past 3 days. It is an amazing site, with astounding architecture. They even have a Frank Gehry building, a dinosaur skeleton, and a Japanese restaurant on campus. I went out for dinner with Dr. Fabrice Gallou last night and learned about the “business of cyclosporine A” at Novartis. The graphic below showcases the degree of sophistication achievable with complex molecules. The overall goal of the methylation reaction is to site-selectively cleave cyclosporine, run Edman degradation in order to remove the N-terminal amino acid, couple a new one, and recyclize. This represents a clever way of site-selectively mutating amino acids in complex macrocycles. The procedure discussed in the OPRD article was performed on a 186 kg scale, but things get even more impressive because the reaction is performed on a multi-ton scale at Novartis. As far as the site of methylation goes, it corresponds to the most nucleophilic amide. I wonder whether other macrocycles possess an Achilles heel of this kind, which would allow site-selective chemistry to be applied in other settings. I doubt it and, even if sites like this were to exist, they would likely be located in turn regions. Of particular interest to me was the description of the effort to develop a solvent/antisolvent combination in order to get consistent crystallinity of the linearized peptide product. 2-Methyltetrahydrofuran was the solvent and tert-butyl methyl ether was the antisolvent of choice. Go figure. I don’t think we use these two with linear peptides, but what do I know? Just imagine doing this on a multi-ton scale! Here is a quiz: why is cyclosporine’s OH acetylated? The authors discuss this, but try to guess the reason why having an unprotected OH is not a good idea.