Gun Control Research Analysis

What Do We Know About the Association Between Firearm Legislation and Firearm-Related Injuries

PUBLICATION: Epidemiologic Reviews
DATE: February 2016
AUTHORS: Santaella-Tenorio, Cerdá, Villaveces, Galea
ORGANIZATION: Johns Hopkins Bloomberg School of Public Health


This is a review of literature spanning 64 years (1950 through 2014). The papers were drawn from three databases (PubMed, Scopus and Web of Knowledge), and exclusions were made using the Guide to Community Preventive Services.


  1. The databases are drawn from the medical sciences (PubMed), general science indexing heavily populated with non-criminology publications (Scopus), and the social sciences (Web of Knowledge). The sources are light on criminology, though not exclusively so.
  2. The use of the Guide to Community Preventive Services is concerning. Their review processes “Use methods that address the specific needs of public health.”
    1. The “criteria for including and excluding studies” are not transparent. The selection table merely references a web site that obscures the data.
  3. Some routinely decried studies (Ayres/Donohue) are included as authoritative.
  4. Amalgamated U.S. and international papers, obscuring cultural differences.
  5. The authors admit to a number of problems with their review:
    1. Studies where simultaneous legislation (gun control plus “get tough on criminals” laws) obscures the cause of changes in violence.
    2. Limited confidence in studies due to design and execution.



Firearm Legislation and Firearm Mortality in the USA: a cross-sectional, state-level study

This study is famous for making the claim that nationalizing three gun control laws would reduce gun deaths by 90%.

DATE: March, 2016
AUTHORS: Kalesan, Mobily, Keiser, Fagan, Galea


An epidemiology approach to criminology and the alleged cause/effect of firearm legislation. Widely critiqued; indeed, an accompanying commentary in the same Lancet issue listed several methodological objections.


  1. Used epidemiology measures not used in criminology (specifically, incidence rate ratios as opposed to straight crime rates).
  2. Cross-sectional study using only one external control variable (unemployment rate). Cross-sectional studies can be valid if a large number of potential confounding variables are controlled for. A time-series study comparing rates before and after enactment of the laws would have been more robust.
  3. They used one year, and some have accused them of cherry-picking the years (2009 for laws, 2010 for deaths), since crime data from well before and after those years was available.
  4. State-level gun ownership data was not transparent. It was gathered by YouGov, an “international internet-based market research firm, headquartered in the UK”. A scan of their site failed to show this poll or how it was conducted, so there is no transparency on ownership rates.
  5. There were passages in the text that were transparently anti-gun in nature, often echoing word-for-word memes from gun control organizations: “unfettered sales from unlicensed dealers”, a non-existent category; “firearm ownership rates are reported to have fallen”, which we debunked, showing that gun ownership rates depend on residency status.
  6. A wild claim that a national universal background check law would lower gun death rates by 56% despite there being only a 10% total difference in firearm homicides in 2010 between states with and without universal background checks.
  7. The basic claim, that 90% of gun deaths could be eliminated via background checks for guns and ammo and firearm identification, is absurd since over 60% of gun deaths are by suicide, and nearly all of those are done with legally owned guns.
  8. Calculation modeling was used as a predictor of gun violence: a poorly documented “firearm risk profile” for each state “predicted” how nationalizing laws would affect gun deaths.


Survey Research and Self-Defense Gun Use: An Examination of Extreme Overestimates

PUBLICATION: Journal of Criminal Law & Criminology
DATE: Summer 1997
AUTHORS: David Hemenway


This paper claims to refute the conclusions of Gary Kleck and Marc Gertz, whose survey produced the now-famous estimate of 2.5 million defensive gun uses (DGUs) every year. Yet Hemenway’s paper never makes its point. There is a lot of speculation, odd and inappropriate examples, and outright inaccuracies. In the end, no refutation is actually made and no statistical proof is offered.


Hemenway is listed as a Professor of Health Policy at the Harvard School of Public Health. This is another instance of medical academics dabbling inexpertly in criminology. Indeed, Hemenway inaccurately used an epidemiology truth table in this paper to incorrectly explain survey response accuracy. The major problems with this “refutation” include:

  1. Many wild assumptions and assertions, most made without proof points (“… its conclusions cannot be accepted as valid”, “likelihood of [many different things]”, “interviewers presumably knew”, “interviewers appear to have …”, etc.)
  2. Inaccurate methods. Of note was the use of a 4×4 truth table to suggest inaccuracies in a 1×2 survey matrix (the 4×4 table is quite appropriate for epidemiology when medical tests can and do produce both false positives and negatives, but not in disproving a set of survey yes/no responses).
  3. Several inaccurate quotes or summaries of Kleck’s work (e.g. Kleck determined guns prevented 400,000 murders when in fact it was 400,000 serious violent crimes).
  4. Some incomprehensibly odd comparisons. The worst one was comparing possible over-reporting of DGUs to over-reporting of UFO encounters, ignoring the basic difference between an actual, experienced event and one requiring speculative interpretation (“Using a gun in self-defense, like having contact with an alien …”).
  5. An odd reliance on the National Crime Victimization Survey (NCVS), whose measure of DGUs is a low-side outlier when contrasted to criminology and media DGU surveys (it would appear that Hemenway wishes to believe the NCVS is accurate, despite the well-known reporting bias induced via personal interviews [face-to-face] in the respondent’s home).
  6. Though Hemenway is responding to a specific Kleck paper, he does not mention that in Kleck’s book Targeting Guns the 2.5 million figure is replicated in several other criminology and media surveys, while the NCVS tallies are not.
  7. Hemenway ignores that DGUs prevent victimization, and relies instead on only counting DGUs that occurred during/after victimization. In short, he used oranges to talk about apples.
  8. In one very odd passage, he confuses individual DGUs with household DGUs, apparently not understanding the design of the original survey, which was to measure individual DGUs.


Household Firearm Ownership and Rates of Suicide Across the 50 United States

DATE: April 2007
AUTHORS: David Hemenway, Matthew Miller, Steven Lippmann, Deborah Azrael


This paper claims that states with higher household firearm ownership rates have higher suicide rates, and that the association is “significant”.


Foremost, the study is unbalanced: it compared 15 “high ownership” states with a mere six “low ownership” states. The authors did this to compare roughly the same gross number of people, but it stacks the deck in terms of the types of state cultures.

The cultural aspect is important. Many of the “high ownership” states are very rural, low population density, and have strongly “independent” cultures – cultures where people make their own decisions, including life-and-death ones. The “high ownership” states included Wyoming, the Dakotas, Montana, Alaska, etc. It should be noted that some of these rural states (West Virginia being one) have unusually high drug and alcohol abuse rates, which are a key determinant of suicides.

As important, of the mere six states in the “low ownership” set, the authors included Hawaii, which has a profoundly different culture and is unique by its geographic isolation.



Target on Trafficking – New York Crime Gun Analysis

PUBLICATION: Office of the Attorney General, State of New York
DATE: October 2016


Using Bureau of Alcohol, Tobacco and Firearms (BATF) crime gun trace data, along with questionable ranking systems and other methodology mistakes, the report attempted to identify several states other than New York as the source for crime guns. The report blames the bulk of New York’s crime gun sources on the mythical Iron Pipeline.


  • The report skips by the most obvious issue, which is that most New York State crime guns come from New York. According to the 2013 BATF “Firearms Recovered and Traced” report, New York was the source for 31% of all its crime guns, despite having very stringent gun control laws.
    • In fact, crime guns from all Iron Pipeline states were a mere 37% more than from New York alone.
    • Interestingly, the report claims that only half of crime guns come from out of state, which implies that just as many come from inside the state.
  • Aside: New York has a “universal” background check system, so it is surprising that the number of New York sourced crime guns is as high as it is, indicating that these laws do not work.
  • No attempt was made to pair guns from out of state with normal interstate relocation. In other words, many or most of these guns may have been legally acquired and brought into the state by people moving there.
  • The report uses an arbitrary scoring system to predict if the gun was purposefully trafficked. However …
    • No statistical justification was used when selecting the scoring ratios.
    • For one key measurement (date of retail), nearly half of the BATF trace records were not available, and another 12% of crime guns could not be linked to any state. In short, the report disposed of half the relevant data.
    • 80% of the score for an assumed trafficked gun is a short time to crime (time from retailing of the gun to recovery at a crime scene). This presents many problems:
      • There is no statistical justification for this being the proper weighting.
      • They cite a dated study that covered only 27 cities as proof of this assumption, and that study did not conclusively show that a short time to crime was proof of trafficking.
      • Only 10% of the weighting was given to “border crossing”, meaning the key element – interstate movement – was the lowest criteria.
      • In the report, they admit that 1/2 of the trace records lacked sufficient classification data, and those were not given a score. In other words, the report blindly dismissed 1/2 of the data.
  • Even though the report weighted time to crime heavily, the authors admit that a mere 19% of recovered crime guns fit their definition for a short import period. In other words, less than 1/5th of crime guns were allegedly trafficked.
  • The report admits that a small number of high density cities are responsible for nearly all traced crime guns, with NYC and the adjoining part of Long Island being the majority. But the report does not investigate why this very tight cluster is such a huge problem. If removed from the analysis, New York state’s numbers would likely equal national averages.
  • The authors discredit themselves by citing Michael Bloomberg’s Mayors Against Illegal Guns and the Law Center to Prevent Gun Violence, an activist organization, as authoritative sources for defining “supplier” states.
    • The latter is most important. The report cites the LCPGV as providing an “objective measure of the strength of gun safety laws”. Yet the LCPGV does not disclose its own scoring methodology in its scorecard for state gun laws. Theirs is an arbitrary definition, and not at all “objective”.
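To make the weighting critique concrete, here is a sketch of a score built the way the report is described above, with 80% of the weight on short time to crime and only 10% on border crossing. The cutoff value and the residual 10% category are hypothetical placeholders, not taken from the report:

```python
# Sketch of a weighted trafficking score as the report is characterized above.
# The weights (0.8 / 0.1 / 0.1) follow the summary; the 3-year cutoff and the
# "other" category are hypothetical placeholders for illustration.

WEIGHTS = {"short_time_to_crime": 0.8, "border_crossing": 0.1, "other": 0.1}

def trafficking_score(time_to_crime_years, crossed_border, other_flag):
    if time_to_crime_years is None:
        # Roughly half of trace records lacked the retail date and got no score.
        return None
    score = 0.0
    if time_to_crime_years < 3:          # hypothetical cutoff
        score += WEIGHTS["short_time_to_crime"]
    if crossed_border:
        score += WEIGHTS["border_crossing"]
    if other_flag:
        score += WEIGHTS["other"]
    return score

# An older gun that actually crossed state lines scores far lower than an
# in-state gun recovered quickly, illustrating the inverted emphasis.
old_imported_gun = trafficking_score(10, crossed_border=True, other_flag=False)   # 0.1
fresh_local_gun  = trafficking_score(1,  crossed_border=False, other_flag=False)  # 0.8
```

Under this scheme the supposedly key element, interstate movement, contributes the least to the final score.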



School shootings during 2013–2015 in the USA

PUBLICATION: Injury Prevention
DATE: December 2016


Claims that states with firearm purchase background check laws have a lower rate of school shootings.


  • Foremost, school shootings are quite rare. The authors counted about 51 per year, and of course these include gang incidents. They may also have included incidents near (not on) campus.
  • The control variables (mental health expenditures, K-12 education expenditure) were unusual and not standard for typical criminology research (poverty rates, population density, etc.).
  • Used epidemiology statistics, not criminology statistics.
  • The authors note that “urbanicity” (the impact of living in urban areas) had a greater covariance with school shootings than the negative association with background checks.


Handgun Legislation and Changes in Statewide Overall Suicide Rates

PUBLICATION: American Journal of Public Health Research
DATE: April 2017


Claims that states with “universal” background checks and mandatory waiting periods have lower overall suicide rates.


The authors performed a study, comparing all 50 states and the District of Columbia. They evaluated the change in suicide rates between 2013 and 2014 (one year). To quote their article, “… statewide suicide rate changes between 2013 and 2014 …”

  • Foremost, a one year period cannot predict time-series effects. This is a major defect.
  • Aligned with this is that most time series studies examine the before/after effects of a law. The authors admit “… To our knowledge, no states changed the status on any of these laws during 2013 and 2014.”
  • Though they tested a significant number of variables, they omitted many known to be associated with suicide rates, such as changes in economic situation (layoffs, bankruptcies, etc.), familial support and abandonment rates, mental health access, and more.
    • Quite oddly, they included “elevation above sea level” as a comparison variable.
    • Interestingly, the period under study was during the recovery from the Great Recession, a time of very uneven economic recovery. Failing to fully analyze the effect of economic variable changes (or lack thereof) on suicide rates is a serious omission.



Lead exposure at firing ranges—a review

PUBLICATION: Environmental Health
DATE: April 2017
AUTHORS: Mark A. S. Laidlaw, Gabriel Filippelli, Howard Mielke, Brian Gulson, Andrew S. Ball


Claims that blood lead levels (BLLs) in employees at firing ranges are dangerous.


  1. The BLL range for employees at United States firing ranges was between 16.7 and 30.3 µg/dL, which is within all regulatory safety ranges.
  2. Many of the high scores were for shooting instructors, with blood samples taken immediately after training (in other words, not an average daily BLL).
  3. The firing ranges examined included overseas facilities, where building codes and material-handling procedures vary.



Right-To-Carry Laws and Violent Crime: A Comprehensive Assessment Using Panel Data and a State-Level Synthetic Controls Analysis

PUBLICATION: Working paper
DATE: July 2017
AUTHORS: John J. Donohue, Abhay Aneja, and Kyle D. Weber


Claims right-to-carry (RTC) laws increase violent crime by 13–15%.


  1. Uses mathematical modeling to “predict” what crime rates might have been without RTC.
  2. Uses limited pairing of non-RTC states (2-4 states) with study states.
  3. Control states often have no cultural, population, or geographic similarity. For example, Texas was studied by comparing it with California, Nebraska and Wisconsin.
  4. Studied only aggregate violent crime, despite carry being a public activity while certain forms of violence (e.g., rape) are generally not public.
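For reference, the synthetic-control technique the paper relies on (point 1) can be sketched as follows. All crime figures and donor-state weights here are hypothetical, chosen only to illustrate the mechanics; they are not the paper’s data:

```python
# Minimal sketch of the synthetic-control idea: build a "synthetic" treated
# state as a weighted average of donor states fitted to the pre-law crime
# trend, then attribute the post-law gap to the policy. All numbers are
# hypothetical.

import numpy as np

# Hypothetical violent crime rates per 100k over five years; the RTC law is
# enacted after year 3 in the treated state.
treated = np.array([600, 590, 585, 640, 660])
donors  = np.array([[500, 495, 490, 485, 480],   # donor state A
                    [700, 690, 688, 680, 675],   # donor state B
                    [610, 600, 592, 588, 585]])  # donor state C

# Weights are normally fitted to minimize pre-period error; fixed here.
weights = np.array([0.3, 0.3, 0.4])
synthetic = weights @ donors            # the counterfactual crime path

# Post-law gap attributed to the law -- sensitive to donor choice and fit,
# which is the critique in points 2 and 3 above.
effect = treated[3:] - synthetic[3:]
```

The estimated “effect” is only as good as the donor pool: with two to four culturally dissimilar donors, the counterfactual path is largely an artifact of the pairing.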

In-State and Interstate Associations Between Gun Shows and Firearm Deaths and Injuries

PUBLICATION: Annals of Internal Medicine
DATE: October 24, 2017
AUTHORS: Matthay, Galin, Rudolph, Farkas, Wintemute, Ahern


Claims Nevada gun shows lead to more firearm injuries and deaths in California.


  1. Raw gun injury data (presented in the paper) shows no change in California gun injury/deaths.
  2. Even after seemingly inappropriate statistical “adjustments”, the change in rates of firearm deaths and injuries was tiny (rate ratio variances commonly of less than one percent).
  3. Used control variables (seasonality and at-risk populations) without:
    1. Defining the criteria for these variables.
    2. Reporting their assumptions concerning these.
    3. Reporting the constants used, which had direct effects on their modeling.
  4. Only looked at whether Nevada gun shows affected California gun injuries, and not the reverse (California affecting Nevada).
  5. Does not verify that firearms acquired at the gun shows were used in homicides, suicides or accidents.
  6. Used “difference in differences” statistical modeling to simulate experimental data:
    1. Inappropriate for criminology review with “as occurred” data.
    2. Approach subject to a variety of biases.
    3. Lack of broad set of confounding variables used to test regressions.
  7. No controls for suspect injury demographics (alcoholics, gang members, mentally ill, etc.) who are more prone to firearm accidents and homicides.
  8. Did not appear to factor in that Nevada gun shows disallow non-Nevada residents from exiting gun shows with acquired arms, and that California FFLs are in attendance to facilitate California regulations on waiting periods, etc.
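The “difference in differences” model in point 6 reduces to simple arithmetic; a minimal sketch with hypothetical injury counts (not the paper’s data):

```python
# Minimal difference-in-differences sketch with hypothetical counts.
# DiD estimates a treatment effect as the change in the treated group
# minus the change in the control group over the same period.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical: California regions near Nevada gun shows (treated) vs.
# regions far from them (control), injuries before/after a show.
effect = diff_in_diff(treated_before=50, treated_after=53,
                      control_before=40, control_after=42)
print(effect)  # 1
```

The estimate is only valid if both groups would have trended in parallel absent the shows, an assumption hard to defend with observational “as occurred” data and a thin set of confounders, as noted above.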

State Firearm Laws and Interstate Firearm Deaths from Homicide and Suicide in the United States

PUBLICATION: JAMA Internal Medicine
DATE: March 12, 2018
AUTHORS: Kaufman, Morrison, Branas, Wiebe


Claims that “stronger” firearm laws are associated with lower suicide rates.


  1. Paper “received a waiver of review” for unexplained reasons.
  2. Regardless of methodology concerns, the variability in their calculated Incidence Rate Ratios (IRRs) was minimal.
  3. The variance in IRR between “firearm” events and “non-firearm” events was commonly small (in the 2–6% range).
  4. Inexplicably, they could not account for “causal relationship between state policies and firearm deaths.”
  5. Used distance as a proxy for firearm migration despite stating explicitly that the FBI notes this is not a barrier (guns travel quite far as people move from state to state).
  6. Excluded Washington, D.C. for having no “applicable state laws”, though its city laws are well known.
  7. “Calculated counts of firearm homicides” instead of using the standard FBI UCR homicide tables.
  8. They began with an assumption of which firearm laws are associated with firearm deaths or interstate movement of firearms. [1]
  9. They created their own scoring system for six categories of laws, and assigned points without scaling criteria.
  10. In many instances, their validation of efficacy came from irrational covariance assumptions (e.g., gun availability causes suicides).
  11. Made an incorrect assumption about interstate proximity in illegal/legal firearm migration.
  12. They clustered counties by the number of laws in effect, not their alleged efficacy (e.g., a state with three marginal laws would score better than a state with one strong law).
  13. Used epidemiology modeling instead of the standard statistical processes used in criminology.
  14. They had a poorly explained substitution of a model from 2010 with one from 2012.
  15. They tested three different models, but published only on one.


  [1] We will note, in passing, that the roster of laws by state was provided to them by a gun control advocacy group that publishes its own rating system for laws, which raises questions of legitimacy.