Analyze A/B Test Results

From Ekofiongo Eale

GERMANY

This project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive of these topics as possible. Good luck!

Table of Contents

Introduction

A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these tests.

For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.

As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question. The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the RUBRIC.

Part I - Probability

To get started, let's import our libraries.

In [630]:
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
#We are setting the seed to assure you get the same answers on quizzes as we set up
random.seed(42)

1. Now, read in the ab_data.csv data. Store it in df. Use your dataframe to answer the questions in Quiz 1 of the classroom.

a. Read in the dataset and take a look at the top few rows here:

In [631]:
df = pd.read_csv('ab_data.csv')
df.head()
Out[631]:
user_id timestamp group landing_page converted
0 851104 2017-01-21 22:11:48.556739 control old_page 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0
4 864975 2017-01-21 01:52:26.210827 control old_page 1

b. Use the below cell to find the number of rows in the dataset.

In [632]:
df.shape
Out[632]:
(294478, 5)
In [633]:
df.shape[0]
Out[633]:
294478
I can also use the len function, and I would have the same result.
In [634]:
len(df)
Out[634]:
294478

c. The number of unique users in the dataset.

To get the number of unique users in the dataset, we can use the nunique() function, or count the unique values with len().
In [635]:
df.user_id.unique()
Out[635]:
array([851104, 804228, 661590, ..., 734608, 697314, 715931], dtype=int64)
In [636]:
df.user_id.nunique()
Out[636]:
290584
We can also use the describe function to inspect the data, or count the unique user_ids with value_counts:
In [637]:
df.describe()

total_users = len(df.user_id.value_counts())
print(total_users)
290584
In [638]:
df.user_id.nunique()
Out[638]:
290584

d. The proportion of users converted.

In [639]:
df.converted.mean()
Out[639]:
0.11965919355605512

e. The number of times the new_page and treatment don't line up.

In [640]:
df[((df['group'] == 'treatment') == (df['landing_page'] == 'new_page')) == False].shape[0]
Out[640]:
3893
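The same count can also be obtained by comparing the two boolean conditions with != (a quick sketch, assuming the df loaded above):

# Rows where the group label and the landing page disagree
df[(df['group'] == 'treatment') != (df['landing_page'] == 'new_page')].shape[0]
# should return 3893, the same count as above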

f. Do any of the rows have missing values?

There are no missing values in the dataset.
In [641]:
df.info()
sum(df.isnull().sum())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 294478 entries, 0 to 294477
Data columns (total 5 columns):
 #   Column        Non-Null Count   Dtype 
---  ------        --------------   ----- 
 0   user_id       294478 non-null  int64 
 1   timestamp     294478 non-null  object
 2   group         294478 non-null  object
 3   landing_page  294478 non-null  object
 4   converted     294478 non-null  int64 
dtypes: int64(2), object(3)
memory usage: 11.2+ MB
Out[641]:
0
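An equivalent check, assuming the same df, asks each column whether it contains any missing value:

# True for any column that contains at least one missing value
df.isnull().any()
# A single boolean for the whole dataframe; should be False, matching the output above
df.isnull().any().any()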

2. For the rows where treatment is not aligned with new_page or control is not aligned with old_page, we cannot be sure if this row truly received the new or old page. Use Quiz 2 in the classroom to provide how we should handle these rows.

a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in df2.

In [642]:
df2 = df.drop(df[(df['group'] == 'treatment') & (df['landing_page'] == 'old_page')].index)
In [643]:
df2 = df2.drop(df2[(df2['group'] == 'control') & (df2['landing_page'] == 'new_page')].index)
Checking the size of the new df2:
In [644]:
df2.shape
Out[644]:
(290585, 5)
In [645]:
df2.head()
Out[645]:
user_id timestamp group landing_page converted
0 851104 2017-01-21 22:11:48.556739 control old_page 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0
4 864975 2017-01-21 01:52:26.210827 control old_page 1
In [646]:
df2.count()
Out[646]:
user_id         290585
timestamp       290585
group           290585
landing_page    290585
converted       290585
dtype: int64
In [647]:
# Checking that all mismatched rows were removed - the result must be 0
df2[((df2.group == 'treatment') == (df2.landing_page == 'new_page')) == False].shape[0]
Out[647]:
0
Checking the new DataFrame
In [648]:
df2.head()
Out[648]:
user_id timestamp group landing_page converted
0 851104 2017-01-21 22:11:48.556739 control old_page 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0
3 853541 2017-01-08 18:28:03.143765 treatment new_page 0
4 864975 2017-01-21 01:52:26.210827 control old_page 1

3. Use df2 and the cells below to answer questions for Quiz3 in the classroom.

a. How many unique user_ids are in df2?

In [649]:
# Finding the number of unique users
df2.user_id.nunique()
Out[649]:
290584

b. There is one user_id repeated in df2. What is it?

In [650]:
# Finding the duplicate id
df2[df2.duplicated('user_id')]
Out[650]:
user_id timestamp group landing_page converted
2893 773192 2017-01-14 02:55:59.590927 treatment new_page 0

c. What is the row information for the repeat user_id?

In [651]:
df2[df2.duplicated(subset=['user_id'], keep='first')]
Out[651]:
user_id timestamp group landing_page converted
2893 773192 2017-01-14 02:55:59.590927 treatment new_page 0

d. Remove one of the rows with a duplicate user_id, but keep your dataframe as df2.

In [652]:
df2 = df2.drop_duplicates(subset=['user_id'], keep='first')
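A quick sanity check on the de-duplicated frame (a sketch using the df2 defined above):

# No duplicated user_ids should remain after the drop
df2.duplicated('user_id').sum()       # should be 0
df2.shape[0], df2.user_id.nunique()   # both should now equal 290584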

4. Use df2 in the below cells to answer the quiz questions related to Quiz 4 in the classroom.

a. What is the probability of an individual converting regardless of the page they receive?

Since converted is coded as 0/1, its mean is directly the conversion probability, so we do not need to filter on converted == 1.
In [653]:
df2.converted.mean()
Out[653]:
0.11959708724499628
In [654]:
df2['converted'][df2['converted'] == 1].sum() / len(df2)
Out[654]:
0.11959708724499628

b. Given that an individual was in the control group, what is the probability they converted?

In [655]:
df2.query('group == "control"').converted.mean()
Out[655]:
0.1203863045004612

c. Given that an individual was in the treatment group, what is the probability they converted?

In [656]:
df2.query('group == "treatment"').converted.mean()
Out[656]:
0.11880806551510564

d. What is the probability that an individual received the new page?

In [657]:
len(df2.query('landing_page == "new_page"')) / len(df2)
Out[657]:
0.5000619442226688
In [658]:
### There are several ways to find this probability, for example:
df2.query("landing_page == 'new_page'").shape[0] / df2.landing_page.shape[0]
Out[658]:
0.5000619442226688
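value_counts with normalize=True shows the proportion for both pages at once (a sketch on the same df2):

# Share of rows per landing page; new_page should be about 0.50006
df2['landing_page'].value_counts(normalize=True)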

e. Consider your results from a. through d. above, and explain below whether you think there is sufficient evidence to say that the new treatment page leads to more conversions.

Based on the above proportions, the difference in conversion rates between the treatment and control groups is very small: both groups convert at roughly 12%. From these descriptive statistics alone, there is no concrete evidence that the new treatment page leads to more conversions.

Part II - A/B Test

Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed.

However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another?

These questions are the difficult parts associated with A/B tests in general.

1. For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of $p_{old}$ and $p_{new}$, which are the converted rates for the old and new pages.

We reject the null hypothesis only if there is strong evidence that the conversion rate of the new page is higher than that of the old page:

$H_0: p_{old} \geq p_{new}$

$H_1: p_{old} < p_{new}$

Equivalently, in terms of the difference:

$H_0: p_{new} - p_{old} \leq 0$

$H_1: p_{new} - p_{old} > 0$


Here, $p_{new}$ and $p_{old}$ are the population conversion rates for the new and old pages, respectively.

2. Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the converted success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the converted rate in ab_data.csv regardless of the page.

Use a sample size for each page equal to the ones in ab_data.csv.

Perform the sampling distribution for the difference in converted between the two pages over 10,000 iterations of calculating an estimate from the null.

Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use Quiz 5 in the classroom to make sure you are on the right track.

a. What is the convert rate for $p_{new}$ under the null?

In [659]:
# Under the null, p_new = p_old = the pooled rate df2.converted.mean() (about 0.1196);
# the observed per-page rate used below is nearly identical to this value.
p_new = df2.query('landing_page == "new_page"').converted.mean()
p_new
Out[659]:
0.11880806551510564

b. What is the convert rate for $p_{old}$ under the null?

In [660]:
#p_old = df2.converted.mean()
p_old = df2.query('landing_page == "old_page"').converted.mean()
p_old
Out[660]:
0.1203863045004612
The observed difference between p_new and p_old:
In [661]:
p_new - p_old
Out[661]:
-0.0015782389853555567

c. What is $n_{new}$?

In [662]:
n_new = df2['group'][df2['group'] == 'treatment'].count()
n_new
Out[662]:
145310

d. What is $n_{old}$?

In [663]:
n_old = len(df2.query('landing_page == "old_page"'))
n_old
Out[663]:
145274

e. Simulate $n_{new}$ transactions with a convert rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in new_page_converted.

In [664]:
# Note: np.random.choice lists probabilities in the same order as the outcomes,
# so the outcome 1 (converted) must be given probability p_new
new_page_converted = np.random.choice([0, 1], size=n_new, p=[1 - p_new, p_new])
new_page_converted.mean()   # a single random draw; should be close to p_new (about 0.119)

f. Simulate $n_{old}$ transactions with a convert rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in old_page_converted.

Using np.random.choice to simulate n_old transactions, with probability p_old for 1's and 1 - p_old for 0's:
In [665]:
old_page_converted = np.random.choice([0, 1], size=n_old, p=[1 - p_old, p_old])
old_page_converted.mean()   # a single random draw; should be close to p_old (about 0.120)

g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).

In [666]:
# The two arrays have different lengths, but that does not matter when comparing means
new_page_converted.mean() - old_page_converted.mean()   # one simulated difference, close to 0

h. Simulate 10,000 $p_{new}$ - $p_{old}$ values using this same process similarly to the one you calculated in parts a. through g. above. Store all 10,000 values in a numpy array called p_diffs.

In [667]:
#Simulate 10000 samples of the differences in conversion rates
p_diffs = []

for _ in range(10000):
    new_page_converted = np.random.binomial(1, p_new, n_new)
    old_page_converted = np.random.binomial(1, p_old, n_old)
    new_page_p = new_page_converted.mean()
    old_page_p = old_page_converted.mean()
    p_diffs.append(new_page_p - old_page_p)
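As a faster alternative, the same 10,000 differences can be simulated without a Python loop by drawing binomial counts directly; this is only a sketch under the same null assumptions, reusing p_new, p_old, n_new and n_old from above:

# Each draw is a count of conversions out of n_new (or n_old) trials;
# dividing by the sample size turns it into a simulated conversion rate.
sim_new = np.random.binomial(n_new, p_new, 10000) / n_new
sim_old = np.random.binomial(n_old, p_old, 10000) / n_old
p_diffs_vectorized = sim_new - sim_old   # same distribution as p_diffs above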

i. Plot a histogram of the p_diffs. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.

In [668]:
#Show the histogram
plt.hist(p_diffs);
lower, upper = np.percentile(p_diffs, 2.5), np.percentile(p_diffs, 97.5) 

plt.axvline(x=lower, color = "red");
plt.axvline(x=upper, color = "red");
plt.ylabel('Frequency', fontsize = 18);
plt.xlabel('Difference in Means', fontsize = 18);
plt.title('P_Diff Plot', fontsize = 18);

j. What proportion of the p_diffs are greater than the actual difference observed in ab_data.csv?

In [669]:
actual_diff = df2.query('landing_page == "new_page"').converted.mean() - df2.query('landing_page == "old_page"').converted.mean()
p_diffs = np.array(p_diffs)
null_vals = np.random.normal(0, p_diffs.std(), p_diffs.size)

# plot null distribution
plt.hist(null_vals);

# plot line for observed statistic
plt.axvline(actual_diff, color = "red");
plt.ylabel('Frequency', fontsize = 18);
plt.xlabel('Difference in Means', fontsize = 18);
plt.title('Null Distribution and Observed Difference', fontsize = 18);
In [670]:
#difference of converted rates
actual_diff = (df2[df2['group'] == "treatment"]['converted'].mean()) - (df2[df2['group'] == "control"]['converted'].mean())
actual_diff
Out[670]:
-0.0015782389853555567
In [671]:
# Calculate the p-value: the proportion of simulated null values greater than the observed difference
(null_vals > actual_diff).mean()
Out[671]:
0.905
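Note that because the simulation in part h. used the observed per-page rates, p_diffs is centred near the observed difference rather than at zero; that is why a zero-centred normal (null_vals) with the same spread is used for the p-value. Under the strict null, where both pages share the pooled conversion rate, the p-value could be read directly from the simulated differences. A sketch of that route, reusing n_new, n_old and actual_diff from above:

# Re-simulate under the strict null: both pages convert at the pooled rate
p_null = df2.converted.mean()
null_diffs = (np.random.binomial(n_new, p_null, 10000) / n_new
              - np.random.binomial(n_old, p_null, 10000) / n_old)
# Proportion of null differences at least as large as the observed difference
(null_diffs > actual_diff).mean()   # should land near the 0.905 found above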

k. In words, explain what you just computed in part j. What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?

In part j. we calculated the p-value: the probability of obtaining our observed statistic, or one more extreme in the direction of the alternative, if the null hypothesis is true. A high p-value (here about 0.905) means the observed difference is entirely consistent with the null distribution, so there is no statistical evidence to reject the null hypothesis that the old page converts as well as, or better than, the new page.

l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let n_old and n_new refer the the number of rows associated with the old page and new pages, respectively.

In [672]:
import statsmodels.api as sm


#Number of conversions for each page
convert_old = df2.query('group == "control"').converted.sum()
convert_new = df2.query('group == "treatment"').converted.sum()

#Number of individuals who received each page
n_old = df2.query("landing_page == 'old_page'").shape[0]
n_new = df2.query("landing_page == 'new_page'").shape[0]
convert_old, convert_new, n_old, n_new
Out[672]:
(17489, 17264, 145274, 145310)

m. Now use stats.proportions_ztest to compute your test statistic and p-value. Here is a helpful link on using the built in.

In [673]:
# counts and nobs are passed in the order [old, new]; alternative='smaller'
# tests H1: p_old < p_new, matching the one-sided alternative from Part II
z_score, p_value = sm.stats.proportions_ztest([convert_old, convert_new], [n_old, n_new], alternative='smaller')
z_score, p_value
Out[673]:
(1.3109241984234394, 0.9050583127590245)
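For intuition, the same z statistic can be computed by hand from the pooled conversion rate and its standard error; this sketch, reusing convert_old, convert_new, n_old and n_new from above, should reproduce the z of about 1.31 returned by proportions_ztest:

from scipy.stats import norm

# Pooled conversion rate under the null
p_pool = (convert_old + convert_new) / (n_old + n_new)
# Standard error of the difference in proportions under the null
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_new))
# Same ordering as in proportions_ztest above: old minus new
z_manual = (convert_old / n_old - convert_new / n_new) / se
z_manual, norm.cdf(z_manual)   # approximately (1.31, 0.905)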

n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts j. and k.?

In [674]:
from scipy.stats import norm
# the p-value of our one-sided test (area under the normal curve up to the z-score)
print(norm.cdf(z_score))

# critical value for a two-tailed test at the 5% level
# (for our one-tailed test the critical value would be norm.ppf(0.95), about 1.645)
print(norm.ppf(1-(0.05/2)))
0.9050583127590245
1.959963984540054
The p-value (0.905) matches what we calculated with the simulation in part j. The z-score (about 1.31) measures how many standard errors the observed difference in conversion rates (old minus new) lies from zero under the null distribution; the corresponding one-sided p-value of 0.905 is the area under the normal curve up to that z-score. Since 0.905 is far above the 0.05 significance level, we fail to reject the null hypothesis: there is no evidence that the new page converts better than the old page, in agreement with parts j. and k.

Part III - A regression approach

1. In this final part, you will see that the result you achieved in the previous A/B test can also be achieved by performing regression.

a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?

We should use logistic regression, because each row is a binary outcome (conversion or no conversion) and logistic regression models the probability of conversion, a value between 0 and 1.

b. The goal is to use statsmodels to fit the regression model you specified in part a. to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create a column for the intercept, and create a dummy variable column for which page each user received. Add an intercept column, as well as an ab_page column, which is 1 when an individual receives the treatment and 0 if control.

In [675]:
df2['intercept'] = 1
df2['ab_page'] = pd.get_dummies(df2['group'])['treatment']
df2.head(3)
Out[675]:
user_id timestamp group landing_page converted intercept ab_page
0 851104 2017-01-21 22:11:48.556739 control old_page 0 1 0
1 804228 2017-01-12 08:01:45.159739 control old_page 0 1 0
2 661590 2017-01-11 16:55:06.154213 treatment new_page 0 1 1
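An equivalent way to build the ab_page indicator, without creating dummies, is a direct boolean cast (a sketch on the same df2):

# 1 when the user is in the treatment group, 0 for control
df2['ab_page'] = (df2['group'] == 'treatment').astype(int)
df2[['group', 'ab_page']].head(3)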

c. Use statsmodels to import your regression model. Instantiate the model, and fit the model using the two columns you created in part b. to predict whether or not an individual converts.

In [738]:
df2['intercept'] = 1
mod = sm.Logit(df2.converted, df2[['intercept', 'ab_page']])
results = mod.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366118
         Iterations 6
Out[738]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290582
Method: MLE Df Model: 1
Date: Sun, 10 May 2020 Pseudo R-squ.: 8.077e-06
Time: 19:58:23 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1899
coef std err z P>|z| [0.025 0.975]
intercept -1.9888 0.008 -246.669 0.000 -2.005 -1.973
ab_page -0.0150 0.011 -1.311 0.190 -0.037 0.007
In [740]:
# Exponentiate the coefficients to interpret them as odds ratios
# (the fitted results object above is `results`)
np.exp(results.params)
# From the summary above: exp(-0.0150) ≈ 0.985 for ab_page
# and exp(-1.9888) ≈ 0.137 for the intercept.

d. Provide the summary of your model below, and use it as necessary to answer the following questions.

Exponentiating the ab_page coefficient gives an odds ratio of exp(-0.0150) ≈ 0.985. An odds ratio this close to 1 means the odds of converting on the new page are essentially the same as on the old page (about 1.5% lower). This is consistent with failing to reject the null hypothesis: there is no evidence that the new page is better.

e. What is the p-value associated with ab_page? Why does it differ from the value you found in Part II?

Hint: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in the Part II?

From the logit regression output, the ab_page coefficient is -0.0150 with a p-value of 0.190. Exponentiating the coefficient gives an odds ratio of about 0.985, so the odds of conversion in the treatment group are nearly identical to those in the control group. The p-value differs from the one found in Part II because the regression tests a two-sided hypothesis (H0: the ab_page coefficient is 0, H1: it is not 0), whereas Part II tested the one-sided hypothesis that the new page converts better; indeed 0.190 is approximately twice the upper tail 1 - 0.905. In both cases the p-value is well above 0.05, so we fail to reject the null hypothesis: there is no evidence that the new page leads to more conversions.
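The relationship between the two p-values can be verified with a line of arithmetic using the values reported above:

# Doubling the one-sided upper tail from Part II recovers the two-sided regression p-value
2 * (1 - 0.9050583127590245)   # approximately 0.19, matching the 0.190 p-value for ab_page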

f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?

Before drawing conclusions, we should consider other factors that can influence the conversion rate: how many people took part, whether the test ran long enough, and when it was run (a test run at the start of the week versus the weekend, or during a holiday period, could behave differently). User characteristics such as country of residence may also matter. The main disadvantages of adding terms are that correlated predictors (multicollinearity) make the coefficients harder to interpret, and a more complex model is easier to overfit and harder to explain.

g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the countries.csv dataset and merge together your datasets on the appropriate rows. Here are the docs for joining tables.

Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - Hint: You will need two columns for the three dummy variables. Provide the statistical output as well as a written response to answer this question.

My answer to the question

Country appears to have no impact on conversion: the conversion rates for CA, UK and US are almost identical.

The conversion rate per country is:

CA: 0.115318

UK: 0.120594

US: 0.119547

To go beyond these descriptive numbers, we fit the data into a logistic regression model and check the p-values.

In [722]:
countries_df = pd.read_csv('./countries.csv')
df_new = countries_df.set_index('user_id').join(df2.set_index('user_id'), how='inner')
In [723]:
### Checking the first rows of the merged dataframe
df_new.head()
Out[723]:
country timestamp group landing_page converted intercept ab_page
user_id
834778 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0
928468 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1
822059 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1
711597 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0
710616 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1
Counting how many users come from each country:
In [724]:
countries_df.groupby('country').count()
Out[724]:
user_id
country
CA 14499
UK 72466
US 203619
Check the unique values in the country column:
In [725]:
df_new.country.unique()
Out[725]:
array(['UK', 'US', 'CA'], dtype=object)
Since country has three levels, only two dummy columns are needed in the model (the third level serves as the baseline).
In [726]:
### Create the necessary dummy variables
df_new[['CA', 'US', 'UK']] = pd.get_dummies(df_new['country'])[['CA','US', 'UK']]
df_new.head()
Out[726]:
country timestamp group landing_page converted intercept ab_page CA US UK
user_id
834778 UK 2017-01-14 23:08:43.304998 control old_page 0 1 0 0 0 1
928468 US 2017-01-23 14:44:16.387854 treatment new_page 0 1 1 0 1 0
822059 UK 2017-01-16 14:04:14.719771 treatment new_page 1 1 1 0 0 1
711597 UK 2017-01-22 03:14:24.763511 control old_page 0 1 0 0 0 1
710616 UK 2017-01-16 13:14:44.000513 treatment new_page 0 1 1 0 0 1
Computing the statistical output:
In [727]:
log_mod = sm.Logit(df_new['converted'], df_new[['intercept', 'UK', 'US']])

results = log_mod.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366116
         Iterations 6
Out[727]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290581
Method: MLE Df Model: 2
Date: Sun, 10 May 2020 Pseudo R-squ.: 1.521e-05
Time: 19:42:16 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1984
coef std err z P>|z| [0.025 0.975]
intercept -2.0375 0.026 -78.364 0.000 -2.088 -1.987
UK 0.0507 0.028 1.786 0.074 -0.005 0.106
US 0.0408 0.027 1.518 0.129 -0.012 0.093
According to the statistical output, the p-values for both country dummies (UK: 0.074, US: 0.129) are larger than 0.05, so there is no statistical evidence that country has a significant impact on conversion.
Computing the conversion rate for each country:
In [728]:
df_new.query('country == "US"').converted.mean(),df_new.query('country == "UK"').converted.mean(),df_new.query('country == "CA"').converted.mean()
Out[728]:
(0.1195468006423762, 0.12059448568984076, 0.11531829781364232)
In [729]:
df_new.groupby('country').mean()
Out[729]:
converted intercept ab_page CA US UK
country
CA 0.115318 1.0 0.503552 1.0 0.0 0.0
UK 0.120594 1.0 0.498247 0.0 0.0 1.0
US 0.119547 1.0 0.500459 0.0 1.0 0.0

h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there significant effects on conversion. Create the necessary additional columns, and fit the new model.

Provide the summary results, and your conclusions based on the results.

In [741]:
### Fit the logistic regression model and obtain the results
df_new['intercept'] = 1
log_mod = sm.Logit(df_new['converted'], df_new[['CA', 'US', 'intercept', 'ab_page']])
results = log_mod.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366113
         Iterations 6
Out[741]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290580
Method: MLE Df Model: 3
Date: Sun, 10 May 2020 Pseudo R-squ.: 2.323e-05
Time: 20:01:36 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1760
coef std err z P>|z| [0.025 0.975]
CA -0.0506 0.028 -1.784 0.074 -0.106 0.005
US -0.0099 0.013 -0.743 0.457 -0.036 0.016
intercept -1.9794 0.013 -155.415 0.000 -2.004 -1.954
ab_page -0.0149 0.011 -1.307 0.191 -0.037 0.007
Creating the additional columns US_ab_page and CA_ab_page, which are interaction variables between country and ab_page (a sketch of a model that includes these interaction terms follows the next cell).
In [742]:
df_new['US_ab_page'] = df_new['US'] * df_new['ab_page']
df_new['CA_ab_page'] = df_new['CA'] * df_new['ab_page']
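The interaction columns are not fitted in a model in the cells below; a minimal sketch of how such a model could be specified (assuming the df_new columns created above, with UK as the baseline country) is:

# Logistic regression with main effects and page-by-country interaction terms;
# significant interaction coefficients would indicate the page effect differs by country
X_cols = ['intercept', 'ab_page', 'CA', 'US', 'CA_ab_page', 'US_ab_page']
interaction_mod = sm.Logit(df_new['converted'], df_new[X_cols])
interaction_results = interaction_mod.fit()
interaction_results.summary()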
In [743]:
df_new.info()
sum(df_new.isnull().sum())
<class 'pandas.core.frame.DataFrame'>
Int64Index: 290584 entries, 834778 to 934996
Data columns (total 12 columns):
 #   Column        Non-Null Count   Dtype 
---  ------        --------------   ----- 
 0   country       290584 non-null  object
 1   timestamp     290584 non-null  object
 2   group         290584 non-null  object
 3   landing_page  290584 non-null  object
 4   converted     290584 non-null  int64 
 5   intercept     290584 non-null  int64 
 6   ab_page       290584 non-null  uint8 
 7   CA            290584 non-null  uint8 
 8   US            290584 non-null  uint8 
 9   UK            290584 non-null  uint8 
 10  US_ab_page    290584 non-null  uint8 
 11  CA_ab_page    290584 non-null  uint8 
dtypes: int64(2), object(4), uint8(6)
memory usage: 17.2+ MB
Out[743]:
0
In [744]:
df_new.country.unique()
Out[744]:
array(['UK', 'US', 'CA'], dtype=object)
In [736]:
log_mod = sm.Logit(df_new['converted'], df_new[['intercept', 'CA', 'US']])

results = log_mod.fit()
results.summary()
Optimization terminated successfully.
         Current function value: 0.366116
         Iterations 6
Out[736]:
Logit Regression Results
Dep. Variable: converted No. Observations: 290584
Model: Logit Df Residuals: 290581
Method: MLE Df Model: 2
Date: Sun, 10 May 2020 Pseudo R-squ.: 1.521e-05
Time: 19:47:02 Log-Likelihood: -1.0639e+05
converged: True LL-Null: -1.0639e+05
Covariance Type: nonrobust LLR p-value: 0.1984
coef std err z P>|z| [0.025 0.975]
intercept -1.9868 0.011 -174.174 0.000 -2.009 -1.964
CA -0.0507 0.028 -1.786 0.074 -0.106 0.005
US -0.0099 0.013 -0.746 0.456 -0.036 0.016
Again, the p-values for the country dummies (CA: 0.074, US: 0.456) are larger than 0.05, so there is still no statistical evidence that country has a significant impact on conversion.

Getting the odds

In [720]:
# Exponentiate the coefficients of the fitted model to interpret them as odds ratios
# (the fitted results object above is `results`)
np.exp(results.params)
# From the summary above: exp(-0.0507) ≈ 0.951 for CA and exp(-0.0099) ≈ 0.990 for US,
# both relative to the UK baseline.

Exponentiating the coefficients gives the odds of conversion for CA and US relative to the UK baseline:

CA: exp(-0.0507) ≈ 0.951

US: exp(-0.0099) ≈ 0.990

Both odds ratios are close to 1, which means the odds of converting are almost the same in all three countries. Thus, country as a factor has no practical impact on the conversion rate.

Again, we cannot reject our null hypothesis, even considering countries as a factor.

Conclusions

Congratulations on completing the project!

Gather Submission Materials

Once you are satisfied with the status of your Notebook, you should save it in a format that will make it easy for others to read. You can use the File -> Download as -> HTML (.html) menu to save your notebook as an .html file. If you are working locally and get an error about "No module name", then open a terminal and try installing the missing module using pip install <module_name> (don't include the "<" or ">" or any words following a period in the module name).

You will submit both your original Notebook and an HTML or PDF copy of the Notebook for review. There is no need for you to include any data files with your submission. If you made reference to other websites, books, and other resources to help you in solving tasks in the project, make sure that you document them. It is recommended that you either add a "Resources" section in a Markdown cell at the end of the Notebook report, or you can include a readme.txt file documenting your sources.

Submit the Project

When you're ready, click on the "Submit Project" button to go to the project submission page. You can submit your files as a .zip archive or you can link to a GitHub repository containing your project files. If you go with GitHub, note that your submission will be a snapshot of the linked repository at time of submission. It is recommended that you keep each project in a separate repository to avoid any potential confusion: if a reviewer gets multiple folders representing multiple projects, there might be confusion regarding what project is to be evaluated.

It can take us up to a week to grade the project, but in most cases it is much faster. You will get an email once your submission has been reviewed. If you are having any problems submitting your project or wish to check on the status of your submission, please email us at dataanalyst-project@udacity.com. In the meantime, you should feel free to continue on with your learning journey by beginning the next module in the program.
