The 'Thinking' Part of 'Thinking Like a Data Scientist'

Imagine my surprise when reading the March 28, 2016 issue of BusinessWeek and stumbling across the article titled “Lies, Damned Lies, and More Statistics.” In the article, BusinessWeek warned readers to beware of “p-hacking,” the statistical practice of tweaking data in ways that generate low p-values but actually undermine the test (see the p-value definition below). One result of “p-hacking” is that absurd results can be made to pass the p-value test, while important findings get overlooked. For example…

A study from the Pennington Biomedical Research Center in Baton Rouge[1] followed 17,000 Canadians over 12 years and found that those who sat for most of the day were 54% more likely to die of heart attacks than those who didn’t.

54%!? Yikes, that’s a scary number. Apparent proof that sitting kills you via heart attack. As a person who spends a lot of time sitting behind a desk, or on an airplane, or at sporting events, this “54% more likely to die of heart attacks” fact is very concerning. Can I cheat certain death by throwing out my current desk and buying one of those expensive “stand up” work desks? Sounds like a bargain.

But the BusinessWeek article concludes with the statement “… hold findings to a higher standard if they conflict with common sense.” Bottom line: think!
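To make the p-hacking warning concrete, here is a minimal Python sketch (entirely simulated data, assuming only numpy and scipy) of the multiple-comparisons flavor of p-hacking: test one outcome against enough meaningless variables and something will cross the p < 0.05 line by luck alone.

```python
# A minimal sketch of how p-hacking via multiple comparisons can
# manufacture a significant-looking result from pure noise.
# All data here is random; assumes numpy and scipy are installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_subjects = 100
n_candidate_variables = 50  # hypothetical variables to "test"

outcome = rng.normal(size=n_subjects)  # outcome unrelated to everything

p_values = []
for _ in range(n_candidate_variables):
    predictor = rng.normal(size=n_subjects)  # pure noise
    r, p = stats.pearsonr(predictor, outcome)
    p_values.append(p)

print(f"smallest p-value across {n_candidate_variables} tests: {min(p_values):.4f}")
print(f"tests with p < 0.05: {sum(p < 0.05 for p in p_values)}")
# With 50 independent tests of noise, we expect roughly 2-3 to pass
# p < 0.05 by chance alone.
```

Report only the “winners” from a search like this and you have manufactured a finding that the p-value test happily blesses.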

Unfortunately, people have a tendency to blindly trust a claim from any source that they deem credible, even if it completely conflicts with their own experiences or common sense.

It only takes a couple of stats and a lack of common sense to reach a dangerous conclusion and claim it’s a fact. It’s harder to buy a gun in Illinois than in most other states. Gun-related murder rates are higher in Illinois than in most other states. So… we can conclude that stiffer gun laws cause murder. Right? No, we can’t conclude that from those stats; correlation between two statistics says nothing about causation, and a third factor could easily be driving both.

But then I started to think, and to challenge the assumption that there is some sort of causal link between sitting and heart attacks. Some questions that immediately popped to mind included:

  • Are there other variables, like lack of exercise or eating habits or age, which might be the cause of the heart attacks?
  • Was a control group used to test the validity of the study results?
  • Is there something about Canadians that makes them more susceptible to sitting and heart attacks?
  • Who sponsored this study? Maybe the manufacturer of these new expensive “stand up and work”-type desks?

One needs to be a bit skeptical when hearing these sorts of “factoids.” We should know better than to believe these sorts of claims blindly.
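The first question above, about other variables being the real cause, is worth a quick illustration. The following is a hypothetical simulation (not the study’s actual data) in which exercise drives both sitting time and heart-attack risk; a naive comparison makes sitting look deadly, while stratifying by exercise makes the effect disappear.

```python
# Hypothetical simulation of a confounder: exercise level drives both
# sitting time and heart-attack risk, so "sitting" looks dangerous in
# a naive comparison even though it has no direct effect here.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

exercises = rng.random(n) < 0.5            # half the population exercises
# Non-exercisers are far more likely to sit most of the day...
sits = np.where(exercises, rng.random(n) < 0.2, rng.random(n) < 0.8)
# ...and heart-attack risk depends ONLY on exercise, not on sitting.
risk = np.where(exercises, 0.05, 0.10)
heart_attack = rng.random(n) < risk

def rate(mask):
    return heart_attack[mask].mean()

naive_ratio = rate(sits) / rate(~sits)
print(f"naive: sitters are {naive_ratio:.2f}x as likely to have a heart attack")

# Stratify by the confounder: within each exercise group, sitting is harmless.
for label, grp in [("exercisers", exercises), ("non-exercisers", ~exercises)]:
    ratio = rate(grp & sits) / rate(grp & ~sits)
    print(f"{label}: sitters are {ratio:.2f}x as likely")
```

In this toy world the naive comparison reports sitters as roughly 1.5x as likely to have a heart attack, suspiciously close to the “54% more likely” headline, even though sitting has no direct effect at all.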

Let’s use this as a reminder to think before jumping to conclusions. And this is a great opportunity to employ our “thinking like a data scientist” techniques to identify what other variables might contribute to this “54% more likely to die of heart attacks” observation. In particular, this is an opportunity to apply the “By Analysis” technique to explore what other variables we might want to consider. To perform the “By Analysis,” let’s craft the statement against which we want to apply the technique:

“I want to understand details on each of the study’s participants by…”

Here are some of the variables and metrics that we could test to see if they might be predictors of heart attacks:

  • Age
  • Gender
  • Health history
  • Critical health variables (e.g., weight, height, sedentary heart rate, active heart rate, BMI, LDL)
  • Cholesterol history
  • Exercise history
  • Historical exercise results
  • Family health history
  • Hours worked per day
  • Hours worked per week
  • Marital status
  • Divorced?
  • Number of dependents
  • Ages of dependents
  • Diet
  • Date of most recent vacation
  • Recent vacation location
  • Number of vacation days
  • Home location
  • Home weather
  • Work location
  • Work weather
  • Amount of airplane travel
  • Amount of car travel
  • Length of job commute
  • Job stress
  • Job title
  • Life stress
  • Years until retirement
  • Retirement readiness
  • Financial status

Upon further analysis, we start to see groupings of related variables and metrics around specific use cases including:

  • Lifestyle (age, gender, Body Mass Index, cholesterol levels, blood pressure, naps, hours of sleep, etc.)
  • Diet (calories, fat intake, sugar intake, alcohol, smoking, amount of fish, organic foods, etc.)
  • Exercise (frequency of exercise, recency of exercise, type of exercise, level of effort, heart rate, etc.)
  • Work (hours of work, days of work per week, stress levels, amount of airline travel, managing people, etc.)
  • Vacation (recency, frequency, days of vacation, location of vacation, vacation activities)
  • Environmental (number of sunny days, number of rainy days, number of grey days, range of temperatures, range of humidity, local traffic congestion, population density, amount of local green space, local entertainment, etc.)

And there are likely other groupings that we would want to identify and test with respect to these variables’ and metrics’ ability to predict heart attacks.

The goal is to use these techniques to identify which metrics and variables are the best predictors of the outcome you care about, and if you are doing that, you’re thinking like a data scientist!
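As a sketch of what testing those variables might look like in practice, here is a minimal Python example with invented column names and simulated data (assuming pandas and scikit-learn are installed); a simple logistic regression on standardized inputs gives a rough first ranking of predictive strength.

```python
# A minimal, hypothetical sketch of screening candidate variables as
# predictors of heart attack. Column names are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5_000

# Simulated stand-ins for a few of the variables brainstormed above.
df = pd.DataFrame({
    "age": rng.integers(25, 80, n),
    "bmi": rng.normal(27, 5, n),
    "hours_sitting": rng.normal(8, 2, n),
    "exercise_per_week": rng.integers(0, 7, n),
    "hours_worked_per_week": rng.normal(45, 10, n),
})
# Hypothetical outcome: driven mostly by age, BMI, and lack of exercise.
logit = (0.05 * (df["age"] - 50) + 0.08 * (df["bmi"] - 27)
         - 0.3 * df["exercise_per_week"] - 2.0)
df["heart_attack"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = StandardScaler().fit_transform(df.drop(columns="heart_attack"))
model = LogisticRegression().fit(X, df["heart_attack"])

# Standardized coefficients give a rough ranking of predictive strength.
for name, coef in sorted(zip(df.columns[:-1], model.coef_[0]),
                         key=lambda t: -abs(t[1])):
    print(f"{name:>24s}: {coef:+.2f}")
```

In the simulation, age, BMI, and exercise dominate the ranking while hours of sitting contributes little, which is exactly the kind of result that should make you re-read a “sitting kills” headline.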

If you are interested in learning more about this “Thinking Like A Data Scientist” process, join me at EMC World in Las Vegas (that should set off your stress alerts), on Monday, May 2, 12:00 PM – 1:00 PM. I promise not to stress you out!

Optional Reading: Hypothesis Testing, Null Hypothesis and p-values
Nerd Warning! I’m going to try to explain the p-value, but to do that I also need to explain the concepts of hypothesis testing and null hypothesis.

A hypothesis test is a statistical test to determine whether there is enough evidence in a sample of data to infer that a certain condition is true for the entire population. For example, the hypothesis to test is whether test group A had better cancer recovery results than test group B due to medication X. A hypothesis test examines two opposing hypotheses about a population: the null hypothesis and the alternative hypothesis.

The null hypothesis is the hypothesis that there is “no effect” or “no difference” between the test groups; that is, any observed difference between the groups is attributed to sampling or experimental error.

The alternative hypothesis (that there exists an effect or a difference between the test groups) is the hypothesis you want to be able to conclude is true.

You use a p-value to decide whether to reject the null hypothesis. If the p-value is less than or equal to the level of significance, a cut-off point that you define, then you can reject the null hypothesis. The smaller the p-value, the stronger the evidence to reject the null hypothesis (i.e., no significant difference between test groups) in favor of the alternative hypothesis (i.e., there is a significant difference between test groups).
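As a concrete illustration of that decision rule, here is a minimal Python sketch (made-up recovery scores, assuming scipy is installed) that runs a two-sample t-test, mirroring the medication X example above.

```python
# A minimal sketch of a two-sample hypothesis test and its p-value,
# using made-up recovery scores for test groups A and B.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

group_a = rng.normal(loc=72, scale=10, size=50)  # received medication X
group_b = rng.normal(loc=65, scale=10, size=50)  # control group

alpha = 0.05  # significance level, chosen BEFORE looking at the data
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("Reject the null hypothesis: the groups differ significantly.")
else:
    print("Fail to reject the null hypothesis: evidence is insufficient.")
```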

A common misconception is that statistical hypothesis tests are designed to select the more likely of two hypotheses. Instead, a hypothesis test only tests whether to reject the null hypothesis.

By the way, even if we “fail to reject” the null hypothesis, it does not mean the null hypothesis is true. That’s because a hypothesis test does not determine which hypothesis is true; it only assesses whether available evidence exists to reject the null hypothesis[2].
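To see why failing to reject is not the same as the null hypothesis being true, consider this small simulation (again assuming scipy): the two populations genuinely differ, but with a tiny sample the test will typically lack the power to detect it.

```python
# Simulated illustration that "fail to reject" does not mean "null is true":
# the populations genuinely differ, but a small sample usually lacks the
# statistical power to detect the difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

for n in (10, 1000):
    a = rng.normal(loc=70, scale=15, size=n)  # true mean 70
    b = rng.normal(loc=65, scale=15, size=n)  # true mean 65: a real difference
    _, p = stats.ttest_ind(a, b)
    verdict = "reject" if p <= 0.05 else "fail to reject"
    print(f"n={n:4d}: p={p:.4f} -> {verdict} the null hypothesis")
```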

Confusing? Yeah, that’s what I think as well.

The Minitab Blog (http://blog.minitab.com) was a great source for much of the above material.

[1] Sources: “Cutting Daily Sitting Time to Under 3 Hours Might Extend Life by Two Years; Watching TV for Less Than 2 Hours a Day Might Add Extra 1.4 Years,” July 10, 2012, and “Sedentary behaviour and life expectancy in the USA: a cause-deleted life table analysis,” BMJ Open.

[2] Check out “Bewildering Things Statisticians Say: ‘Failure to Reject the Null Hypothesis’” for more details on failing to reject the null hypothesis.


