
Last Week Tonight With John Oliver: Scientific Studies


Fweethawt


Love that! He hits the nail on the head most times. This is what bugs me when people on ex-C scream "Science! Science!" Science and research are only as unbiased as the researchers doing them and those funding them. I always ask myself things like: What was the sample size? What does the data actually say (not just how it was interpreted)? Has the study been replicated, and how many times? Who funded it? I never take any of it at face value.


Wow. Oliver is really good here. He nails it. It nicely explains concepts that some of us have written about here. We cannot trust every study, or every popular publication about them. The studies we should trust most are those that have been peer-reviewed and published in credible scientific journals.

 

Here are a few highlights from this video that I think are extremely important to write down. The last few, which I bolded and typed in red, are the most important.

"...the best process that science has to guard against that is the replication study, where other scientists redo your study and see if they get similar results. Unfortunately that happens way less often than it should..."

"... that this is a scientific fact that's actually never been confirmed"

"For all those reasons scientists themselves know not to attach too much significance to individual studies until their place in a much larger context of all the work taking place in that field."

"But too often a small study with nuanced tentative findings gets blown out of all proportion when it’s presented to us the lay public."

 

"There is no way a study that boring can make it to television..."

 

"Except that’s not what the study said. It’s like a game of telephone. The substance gets distorted at every step."

 

"Just because a study is industry funded or its sample size was small or it was done on mice doesn't mean it's automatically flawed, but it is something the media reporting on it should probably tell you about."

 

"This is a chart mapping the results of studies on things like coffee, eggs and wine. All of them have been linked to raising or lowering your risk of cancer, depending on the study. And 'everything causes cancer' is not the conclusion you want to draw from science. It is the conclusion you should draw from logging on to WebMD, where that is their motto. Because if I were to tell you about each of those studies in isolation, at some point you might reasonably think no one knows anything about what causes cancer. And that is a problem, because that's the sort of thing that enabled tobacco companies for years to assert the science isn't in yet."

 

"In science, you don't just get to cherry-pick the parts that justify what you were going to do anyway; that's religion... This is really dangerous. If we start thinking that science is à la carte, and that if you don't like one study, don't worry, another will be along soon, that is what leads people to think that man-made climate change isn't real or that vaccines cause autism, both of which the scientific consensus is pretty clear on."

 

"Science is by its nature, imperfect. But it is hugely important and it deserves better than to be twisted out of proportion and turned into morning show gossip."

 

In the portion of the video where he mocks TED talks, there was a glimmer of truth: "Science is a very slow and rigorous process that does not lend itself easily to sweeping conclusions."


[Quoting the earlier reply:] "Love that! He hits the nail on the head most times. [...] I never take any of it at face value."

 

I think you draw the incorrect conclusion from part of the video. In fact, Oliver said, "Just because a study is industry funded or its sample size was small or it was done on mice doesn't mean it's automatically flawed."


Science is a method, a toolbox so to speak. It is currently the only method that has reliably enabled us to understand the universe and make accurate predictions. Unfortunately, a major issue is that many folks simply do not understand what they see when they look at a study. Honestly, many folks do not even read the actual study, but rather quote "clickbait" websites that reference said studies or, at best, quote the study abstract. Rarely do I find folks talking about the methods and the detailed discussion of how the data was collected and processed.

 

Reading a study can be very difficult, even for folks who are educated and understand the field. Worse, a little knowledge can be dangerous. For example, many learn that a p-value below 0.05 is "good to go" and may rely on that alone as their litmus test for determining the validity of a study. This is myopic at best and a poor way to approach such a nuanced topic. What does a p-value really mean, and how is it different from, say, a confidence interval? A study's design and protocols are also important to understand.
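A quick way to see the difference: a p-value measures how surprising the observed data would be if there were no real effect, while a confidence interval gives a range of plausible effect sizes. Here is a minimal sketch in Python (using a normal approximation rather than a proper t-test, with made-up group data) that computes both for two simulated groups:

```python
import math
import random

random.seed(42)

def two_sample_p_and_ci(a, b):
    """Compare two sample means with a normal approximation.
    Returns the two-sided p-value for 'no difference in means'
    and an approximate 95% confidence interval for the difference."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    diff = mean(a) - mean(b)
    se = math.sqrt(var(a) / len(a) + var(b) / len(b))
    z = diff / se
    p = math.erfc(abs(z) / math.sqrt(2))        # two-sided tail probability
    ci = (diff - 1.96 * se, diff + 1.96 * se)   # approximate 95% CI
    return p, ci

# Two hypothetical treatment groups with a small true difference in means.
group_a = [random.gauss(10.5, 2.0) for _ in range(40)]
group_b = [random.gauss(10.0, 2.0) for _ in range(40)]
p, (lo, hi) = two_sample_p_and_ci(group_a, group_b)
print(f"p = {p:.3f}, 95% CI for difference: ({lo:.2f}, {hi:.2f})")
```

Note that a tiny p-value can coexist with a trivially small effect; the interval makes the size of the effect visible, which is part of why relying on p < 0.05 alone is myopic.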

 

As stated, concepts such as sample size and honesty about study limitations are important, in addition to reproducible results that support or reject the null hypothesis. It's also important to consider blinding, use of controls, sample selection, and study designs such as meta-analyses, systematic reviews, case-control studies, cohort studies, longitudinal studies, and so on.

 

Often, I find that what a study actually says differs from what is reported in the media. It's also important to know when to accept a hypothesis even when large, multiple studies do not exist. An example in my field is a therapy known as lipid emulsion to treat overdose from local anaesthetics. There are animal studies and a bunch of case reports and anecdotes in humans, but no large, randomised, controlled, double-blinded, multi-center studies supporting lipid emulsion. However, the case reports are so compelling that the major players in the toxicology community agree that running a study in which people are denied the only therapy that appears to be markedly effective would be a moral and ethical catastrophe, as many people would likely die in the process of collecting the data.

 

An example of a failure to reproduce is what we currently see in psychology, where many old studies thought to be nearly untouchable are turning out to be non-reproducible, throwing many aspects of contemporary psychology into a state of uncertainty.
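The replication problem also has a simple statistical side, which a rough Monte Carlo sketch can illustrate. Assuming, purely for illustration, that only 10% of tested hypotheses are true and that studies are underpowered (35% power at the usual 0.05 threshold), a surprisingly large share of "significant" findings are false positives, so we should expect many of them to fail replication:

```python
import random

random.seed(0)

def significant(effect_is_real, power=0.35, alpha=0.05):
    """Whether a single simulated study reaches p < alpha.
    Real effects are detected with probability `power`;
    null effects produce false positives with probability `alpha`."""
    return random.random() < (power if effect_is_real else alpha)

# Assumed numbers for illustration only: 10% of hypotheses are true.
trials = 100_000
true_hits = false_hits = 0
for _ in range(trials):
    real = random.random() < 0.10
    if significant(real):
        if real:
            true_hits += 1
        else:
            false_hits += 1

share = false_hits / (true_hits + false_hits)
print(f"share of 'significant' findings that are false: {share:.0%}")
```

Under these assumed numbers, more than half of the "significant" results are false, even before publication bias filters out the boring null results.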

 

This puts the average Jane and Joe in a tough spot, because making sense of this stuff is highly nuanced. Still, this is healthy: even the most seemingly robust ideas that come from science are tentative and must always be ready to stand the test of falsifiability. Even robust staples like general relativity are still being put to the test.


 

[Quoting the earlier exchange:]
"Love that! He hits the nail on the head most times. [...] I never take any of it at face value."
"I think you draw the incorrect conclusion from part of the video. [...]"

Indeed. Many small studies set the stage and help us better understand how to develop larger studies. Animals like mice are absolutely critical; a whole field called allometric scaling exists for making connections between animals and humans. Additionally, we can set up very exact physiological conditions in animals. For example, we can see how a certain drug behaves under very specific circumstances in mice. It is actually relatively easy to make mice with missing or defective genes, called "knockouts." This is very helpful for simulating certain conditions or processes, as we can "eliminate" specific genes and then run tests. Clearly, doing so in humans would raise serious ethical and moral problems and would probably not make it through an institutional review board.
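One common allometric-scaling technique is the body-surface-area (Km-ratio) conversion used in FDA guidance to estimate a human-equivalent dose from an animal dose. A minimal sketch, with a hypothetical 50 mg/kg mouse dose:

```python
# Approximate body-surface-area conversion factors (Km) from the
# commonly cited FDA guidance table for dose translation.
KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Convert an animal dose (mg/kg) to a human-equivalent dose
    (mg/kg) via the standard Km-ratio method."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

# A hypothetical drug tested at 50 mg/kg in mice:
print(f"{human_equivalent_dose(50, 'mouse'):.1f} mg/kg in humans")
```

This is only the first-pass surface-area correction; real translational work also accounts for metabolism, protein binding, and other species differences.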


[Quoting the earlier post:] "Science is a method, a tool box so to speak. [...] Even robust staples like general relativity are still being put to the test."

 

Agree 100%. Your post makes me realize a mistake that I made, so I am going to correct what I wrote in post #3. We must not trust a scientific consensus in the most general sense, the way a baby blindly trusts its mother; rather, we must accept the current scientific consensus provisionally, until its limitations are discovered.


Someone inform all those posting miracle weed cures on fb. They're starting to rival all the cat posts.


 

[Quoting the earlier exchange:]
"Love that! He hits the nail on the head most times. [...] I never take any of it at face value."
"I think you draw the incorrect conclusion from part of the video. In fact, Oliver said, 'Just because a study is industry funded or its sample size was small or it was done on mice doesn't mean it's automatically flawed.'"

 

No, I did not say I would dismiss it, only that I would not take it at face value.  I would continue my line of questioning.  Specifically, was it replicated?  How many times?  Most importantly, was it replicated by a source outside the industry?  If so, and if every replication ended with the same results, then I would have no problem with it.  Unfortunately, it is not in the best interests of industries to have their studies replicated by outside sources.  Monsanto, for instance, is not going to spend enormous amounts of money and manpower creating a highly effective insecticide and then have their scientists tell them it kills bees.  Numerous outside studies are showing that it does just that, and Monsanto is continuing to deny it, claiming their studies don't show that.


 

[Quoting the earlier exchange:]
"Science is a method, a tool box so to speak. [...] Even robust staples like general relativity are still being put to the test."
"Agree 100%. [...] we must accept the current scientific consensus until its limitation is discovered."

 

Exactly. I would encourage everyone to read A Short History of Nearly Everything for several reasons, but one of the things I'm getting out of it is how continual testing of a theory and analysis of experimental conclusions should be encouraged. For example, several noteworthy scientists did experiments to estimate the age of the earth, and at the time of their "discoveries" it was believed they had the final, definitive answer. But then someone else came along, used a different method of experimentation, and ended up with an older time frame. The current estimate, around 4.5 billion years, has been around for quite some time, but I wouldn't be surprised if someone came along and pushed it even older through new testing.

