The Pandas Library for Python

In fact, it's often helpful for beginners experienced with .csv or Excel files to think about how they would solve a problem in Excel, and then see how much easier it can be in Pandas.

So, without further ado, open your terminal, a text editor, or your favorite IDE, and take a look for yourself with the guidance below.

Example data

Take, for example, some claims made against the TSA during the screening process of persons or a passenger's property, due to an injury, loss, or damage. The claims data includes the claim number, incident date, claim type, claim amount, status, and disposition.

Directory: TSA Claims Data
Our data download: claims-2014.xls

Setup

To start off, let's create a clean directory. You can put this wherever you'd like, or create a project folder in an IDE.

Use your install method of choice to get Pandas: pip is probably the easiest.

$ mkdir -p ~/Desktop/pandas-tutorial/data && cd ~/Desktop/pandas-tutorial

Install pandas along with xlrd for loading Excel-formatted files, matplotlib for plotting graphs, and NumPy for high-level mathematical functions:

$ pip3 install matplotlib numpy pandas xlrd

Optional: download the example data with curl:

$ curl -O https://www.dhs.gov/sites/default/files/publications/claims-2014.xls

Launch Python:

$ python3
Python 3.7.1 (default, Nov  6 2018, 18:46:03)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Import packages:

>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> import pandas as pd

Loading Data

Loading data with Pandas is easy.

Pandas can accurately read data from almost any common format including JSON, CSV, and SQL.
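For instance, the readers for other formats look much the same; a quick sketch of our own (the file and table names here are hypothetical), ahead of the real example below:

>>> df_csv = pd.read_csv('claims-2014.csv')     # hypothetical CSV copy of the data
>>> df_json = pd.read_json('claims-2014.json')  # hypothetical JSON copy
>>> import sqlite3                              # SQL readers need a DB connection object
>>> conn = sqlite3.connect('claims.db')         # hypothetical SQLite database
>>> df_sql = pd.read_sql('SELECT * FROM claims', con=conn)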

Data is loaded into Pandas’ “flagship” data structure, the DataFrame.

That’s a term you’ll want to remember.

You’ll be hearing a lot about DataFrames.

If that term seems confusing — think about a table in a database, or a sheet in Excel.

The main point is that there is more than one column: each row or entry has multiple fields which are consistent from one row to the next.
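To make that concrete, here's a tiny DataFrame built by hand (the values are invented purely for illustration); each dict key becomes a column, and each row holds one field per column:

>>> pd.DataFrame({'Claim Type': ['Property Damage', 'Personal Injury'],
...               'Close Amount': [50.0, 0.0]})
        Claim Type  Close Amount
0  Property Damage          50.0
1  Personal Injury           0.0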

You can load the example data straight from the web:

>>> df = pd.read_excel(io='https://www.dhs.gov/sites/default/files/publications/claims-2014.xls', index_col='Claim Number')

Less coolly, data can be loaded from a file:

$ curl -O https://www.dhs.gov/sites/default/files/publications/claims-2014.xls

>>> df = pd.read_excel(io='claims-2014.xls', index_col='Claim Number')

Basic Operations

Print information about a DataFrame, including the index dtype, column dtypes, non-null values, and memory usage.

DataFrame.info() is one of the more useful and versatile methods attached to DataFrames (there are nearly 150!).

>>> df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 8855 entries, 2013081805991 to 2015012220083
Data columns (total 10 columns):
Date Received    8855 non-null datetime64[ns]
Incident Date    8855 non-null datetime64[ns]
Airport Code     8855 non-null object
Airport Name     8855 non-null object
Airline Name     8855 non-null object
Claim Type       8855 non-null object
Claim Site       8855 non-null object
Item Category    8855 non-null object
Close Amount     8855 non-null object
Disposition      8855 non-null object
dtypes: datetime64[ns](2), object(8)
memory usage: 761.0+ KB

View the first n rows:

>>> df.head(n=3)  # see also df.tail()
    Claim Number Date Received        Incident Date Airport Code  ...       Claim Site                   Item Category Close Amount      Disposition
0  2013081805991    2014-01-13  2012-12-21 00:00:00          HPN  ...  Checked Baggage  Audio/Video; Jewelry & Watches            0             Deny
1  2014080215586    2014-07-17  2014-06-30 18:38:00          MCO  ...  Checked Baggage                               –            0             Deny
2  2014010710583    2014-01-07  2013-12-27 22:00:00          SJU  ...  Checked Baggage                    Food & Drink           50  Approve in Full

[3 rows x 11 columns]

List all the columns in the DataFrame:

>>> df.columns
Index(['Claim Number', 'Date Received', 'Incident Date', 'Airport Code',
       'Airport Name', 'Airline Name', 'Claim Type', 'Claim Site',
       'Item Category', 'Close Amount', 'Disposition'],
      dtype='object')

Return a single column (important — also referred to as a Series):

>>> df['Claim Type'].head()
0    Personal Injury
1    Property Damage
2    Property Damage
3    Property Damage
4    Property Damage
Name: Claim Type, dtype: object

Hopefully, you're starting to get an idea of what claims-2014.xls's data is all about.
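One detail worth knowing (a small aside, not from the walkthrough itself): indexing with a single column name yields a Series, while indexing with a list of names yields a DataFrame:

>>> type(df['Claim Type'])                   # one column: a Series
<class 'pandas.core.series.Series'>
>>> type(df[['Claim Type', 'Claim Site']])   # a list of columns: a DataFrame
<class 'pandas.core.frame.DataFrame'>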

The Dtype

Data types are a fundamental concept that you'll want to have a solid grasp of in order to avoid frustration later. Pandas adopts the nomenclature of NumPy, referring to a column's data type as its dtype. Pandas also attempts to infer dtypes upon DataFrame construction (i.e., initialization). To take advantage of the performance boosts intrinsic to NumPy, we need to become familiar with these types and learn how they roughly translate to native Python types.

Look again at df.info() and note the dtype assigned to each column of our DataFrame:

>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8855 entries, 0 to 8854
Data columns (total 11 columns):
Date Received    8855 non-null datetime64[ns]
Incident Date    8855 non-null datetime64[ns]
Airport Code     8855 non-null object
Airport Name     8855 non-null object
Airline Name     8855 non-null object
Claim Type       8855 non-null object
Claim Site       8855 non-null object
Item Category    8855 non-null object
Close Amount     8855 non-null object
Disposition      8855 non-null object
dtypes: datetime64[ns](2), object(8)
memory usage: 761.1+ KB

dtypes are analogous to the text/number format settings typical of most spreadsheet applications, and Pandas uses dtypes to determine which kind(s) of operations may be performed on the data in a specific column.

For example, mathematical operations can only be performed on numeric data types such as int64 or float64.

Columns containing valid date and/or time values are assigned the datetime dtype, and text and/or binary data is assigned the catch-all object dtype.

In short, Pandas attempts to infer dtypes upon DataFrame construction.

However, like many data analysis applications, the process isn’t always perfect.

It’s important to note that Pandas dtype inference errs on the side of caution: if a Series appears to contain more than one type of data, it’s assigned a catch-all dtype of ‘object’.

This behavior is less flexible than a typical spreadsheet application and is intended to ensure dtypes are not inferred incorrectly, but it also requires the analyst to ensure the data is "clean" after it's loaded.
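A tiny sketch of that cautious inference (the values here are made up for illustration):

>>> pd.Series([1, 2, 3]).dtype     # all integers: inferred as int64
dtype('int64')
>>> pd.Series([1, 2, '-']).dtype   # mixed numbers and text: the catch-all object dtype, shown as 'O'
dtype('O')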

Cleansing and Transforming Data

Data is almost always dirty: it almost always contains some datum with atypical formatting; some artifact unique to its medium of origin. Therefore, cleansing data is crucial to ensuring analysis derived therefrom is sound. The work of cleansing with Pandas primarily involves identifying and re-casting incorrectly inferred dtypes.

>>> df.dtypes
Date Received    datetime64[ns]
Incident Date    datetime64[ns]
Airport Code             object
Airport Name             object
Airline Name             object
Claim Type               object
Claim Site               object
Item Category            object
Close Amount             object
Disposition              object
dtype: object

Looking again at our DataFrame's dtypes, we can see that Pandas correctly inferred the dtypes of Date Received and Incident Date as datetime64. Thus, datetime attributes of the column's data are accessible during operations. For example, to see which hours of the day certain types of incidents occur, we can group and summarize our data by the hour element of a datetime64 column.
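The hour element comes from the Series .dt accessor; here's a quick sketch of our own (output based on the first two rows we saw earlier):

>>> df['Incident Date'].dt.hour.head(n=2)   # .dt also exposes .year, .month, .day, ...
0     0
1    18
Name: Incident Date, dtype: int64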

>>> grp = df.groupby(by=df['Incident Date'].dt.hour)
>>> grp['Item Category'].describe()
              count unique                   top freq
Incident Date
0              3421    146  Baggage/Cases/Purses  489
1                 6      5                 Other    2
2                11      9                     –    2
3                 5      5     Jewelry & Watches    1
4                49     18  Baggage/Cases/Purses    6
5               257     39                     –   33
6               357     54                     –   43
7               343     43              Clothing   41
8               299     47                     –   35
9               305     41                     –   31
10              349     45                 Other   43
11              343     41                     –   45
12              363     51                 Other   41
13              359     55                     –   45
14              386     60  Baggage/Cases/Purses   49
15              376     51                 Other   41
16              351     43  Personal Electronics   35
17              307     52                 Other   34
18              289     43  Baggage/Cases/Purses   37
19              241     46  Baggage/Cases/Purses   26
20              163     31  Baggage/Cases/Purses   23
21              104     32  Baggage/Cases/Purses   20
22              106     33  Baggage/Cases/Purses   19
23               65     25  Baggage/Cases/Purses   14

This works out quite nicely — however, note that Close Amount was loaded as an 'object'.

Words like “Amount” are a good indicator that a column contains numeric values.

Let’s take a look at the values in Close Amount.

>>> df['Close Amount'].head()
0     0
1     0
2    50
3     0
4     0
Name: Close Amount, dtype: object

Those look like numeric values to me. So let's take a look at the other end:

>>> df['Close Amount'].tail()
8850      0
8851    800
8852      0
8853    256
8854      -
Name: Close Amount, dtype: object

There's the culprit: index #8854 is a string value. If Pandas can't objectively determine that all of the values contained in a DataFrame column are the same numeric or date/time dtype, it defaults to the object dtype. Luckily, I know from experience that Excel's "Accounting" number format typically formats 0.00 as a dash, -.

So how do we fix this? Pandas provides a general method, DataFrame.apply, which can be used to apply any single-argument function to each value of one or more of its columns.

In this case, we'll use it to simultaneously convert the – to the value it represents in Excel (0.0) and re-cast the entire column's initial object dtype to its correct dtype, float64. First, we'll define a new function to perform the conversion:

>>> def dash_to_zero(x):
...     if '-' in str(x):
...         return float()  # 0.0
...     else:
...         return x  # just return the input value as-is

Then, we'll apply the function to each value of Close Amount:

>>> df['Close Amount'] = df['Close Amount'].apply(dash_to_zero)
>>> df['Close Amount'].dtype
dtype('float64')

These two steps can also be combined into a single-line operation using Python's lambda:

>>> df['Close Amount'].apply(lambda x: 0. if '-' in str(x) else x)
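As an aside (our own sketch, not part of the original recipe), pandas also ships a built-in converter that achieves the same effect, assuming the dash is the only non-numeric token in the column:

>>> # coerce anything non-numeric (like the dash) to NaN, then fill with 0.0
>>> pd.to_numeric(df['Close Amount'], errors='coerce').fillna(0.0).dtype
dtype('float64')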

Performing Basic Analysis

Once you're confident that your dataset is "clean," you're ready for some data analysis! Aggregation is the process of getting summary data that may be more useful than the finely grained values we are given to start with.

Calculations

>>> df.sum()
Close Amount    538739.51
dtype: float64

>>> df.min()
Date Received            2014-01-01 00:00:00
Incident Date            2011-08-24 08:30:00
Airport Code                               -
Airport Name    Albert J Ellis, Jacksonville
Airline Name                               -
Claim Type                                 -
Claim Site                                 -
Item Category                              -
Close Amount                               0
Disposition                                -

>>> df.max()
Date Received                        2014-12-31 00:00:00
Incident Date                        2014-12-31 00:00:00
Airport Code                                         ZZZ
Airport Name                  Yuma International Airport
Airline Name                                  XL Airways
Claim Type                               Property Damage
Claim Site                                         Other
Item Category    Travel Accessories; Travel Accessories
Close Amount                                     25483.4
Disposition                                       Settle
dtype: object

Booleans

Find all of the rows where Close Amount is greater than zero. This is helpful because we'd like to see some patterns where the amount is actually positive, and to show how conditional operators work.

>>> df[df['Close Amount'] > 0].describe()
       Close Amount
count   2360.000000
mean     228.279453
std      743.720179
min        1.250000
25%       44.470000
50%      100.000000
75%      240.942500
max    25483.440000
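Boolean masks can also be combined with & (and) and | (or); a small sketch of our own, with a hypothetical second condition:

>>> # parentheses around each condition are required
>>> mask = (df['Close Amount'] > 0) & (df['Claim Site'] == 'Checkpoint')
>>> df[mask]['Close Amount'].count()   # number of positive checkpoint claims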

Grouping

In this example, we'll walk through how to group by a single column's values. The Groupby object is an intermediate step that allows us to aggregate on several rows which share something in common — in this case, the disposition value. This is useful because we get a birds-eye view of different categories of data. Ultimately, we use describe() to see several aggregates at once.
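If you only need one statistic, you can ask the Groupby object for it directly; a quick sketch of our own (the means match the describe() output below):

>>> df.groupby(by='Disposition')['Close Amount'].mean()
Disposition
-                    0.000000
Approve in Full    158.812116
Deny                 0.000000
Settle             395.723844
Name: Close Amount, dtype: float64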

>>> grp = df.groupby(by='Disposition')
>>> grp.describe()
                Close Amount
                       count        mean          std   min       25%      50%       75%       max
Disposition
-                     3737.0    0.000000     0.000000  0.00    0.0000    0.000    0.0000      0.00
Approve in Full       1668.0  158.812116   314.532028  1.25   32.9625   79.675  159.3375   6183.36
Deny                  2758.0    0.000000     0.000000  0.00    0.0000    0.000    0.0000      0.00
Settle                 692.0  395.723844  1268.818458  6.00  100.0000  225.000  425.6100  25483.44

Group by multiple columns:

>>> grp = df.groupby(by=['Disposition', 'Claim Site'])
>>> grp.describe()
                                 Close Amount
                                        count         mean          std     min       25%       50%        75%       max
Disposition     Claim Site
-               –                        34.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Bus Station               2.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Checked Baggage        2759.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Checkpoint              903.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Motor Vehicle            28.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Other                    11.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
Approve in Full Checked Baggage        1162.0   113.868072   192.166683    1.25   25.6600    60.075   125.9825   2200.00
                Checkpoint              493.0   236.643367   404.707047    8.95   60.0000   124.000   250.1400   6183.36
                Motor Vehicle             9.0  1591.428889  1459.368190  493.80  630.0000   930.180  1755.9800   5158.05
                Other                     4.0   398.967500   358.710134   61.11  207.2775   317.385   509.0750    899.99
Deny            –                         4.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Checked Baggage        2333.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Checkpoint              407.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
                Motor Vehicle             1.0     0.000000          NaN    0.00    0.0000     0.000     0.0000      0.00
                Other                    13.0     0.000000     0.000000    0.00    0.0000     0.000     0.0000      0.00
Settle          Checked Baggage         432.0   286.271968   339.487254    7.25   77.0700   179.995   361.5700   2500.00
                Checkpoint              254.0   487.173031  1620.156849    6.00  166.9250   281.000   496.3925  25483.44
                Motor Vehicle             6.0  4404.910000  7680.169379  244.00  841.8125  1581.780  2215.5025  20000.00

Plotting

While aggregating groups of data is one of the best ways to get insights, visualizing data lets patterns jump out from the page, and is straightforward for those who aren't as familiar with aggregate values.

Properly formatted visualizations are critical to communicating meaning in the data, and it's nice to see that Pandas has some of these functions out of the box:

>>> df.plot(x='Incident Date', y='Close Amount')
>>> plt.show()

[Figure: Incident Date by Close Amount]
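If you want to tidy the figure up, the usual matplotlib calls apply to the Axes object that Pandas returns; an optional sketch of our own (the title and label text are invented):

>>> ax = df.plot(x='Incident Date', y='Close Amount')
>>> ax.set_title('TSA claims by incident date')   # hypothetical title
>>> ax.set_ylabel('Close Amount')
>>> plt.show()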

Exporting Transformed Data

Finally, we may need to commit either our original data or our aggregates as a DataFrame to a file format different from the one we started with, as Pandas does not limit you to writing back out to the same format.

The most common flat file to write to from Pandas is the .csv. From the visualization, it looks like the cost of TSA claims, while occasionally very high due to some outliers, is improving in 2015. We should probably recommend comparing staffing and procedural changes to continue in that direction, and explore in more detail why we have more incidents at certain times of day.

Like loading data, Pandas offers a number of methods for writing your data to file in various formats.

Writing back to an Excel file is slightly more involved than the others, so let’s write to an even more portable format: CSV.

To write your transformed dataset to a new CSV file:

>>> df.to_csv(path_or_buf='claims-2014.v1.csv')
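Writing back to Excel is also possible; it just needs a writer engine installed. A brief sketch of our own, assuming openpyxl is available (pip3 install openpyxl):

>>> df.to_excel('claims-2014.v1.xlsx')   # engine inferred from the .xlsx extension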

Final Notes

Here we've seen a workflow that is both interesting and powerful.

We've taken a round trip all the way from a government Excel file, into Python, through some fairly powerful data visualization, and back to a .csv file which could be more universally accessed — all through the power of Pandas.

Further, we’ve covered the three central objects in Pandas — DataFrames, Series, and dtypes.

Best of all, we have a deeper understanding of an interesting, real-world data set.

These are the core concepts to understand when working with Pandas, and now you can ask intelligent questions (of yourself, or of Google) about these different objects.

This TSA data use case has shown us exactly what Pandas is good for: the exploration, analysis, and aggregation of data to draw conclusions.

The analysis and exploration of data is important in practically any field, but it is especially useful to Data Scientists and AI professionals who may need to crunch and clean data in very specific, finely-grained ways, like getting moving averages on stock ticks.

Additionally, certain tasks may need to be automated, and this could prove difficult or expensive in sprawling applications like Excel, or Google Sheets, which may not offer all the functionality of Pandas with the full power of Python.

Just imagine telling a business administrator that they may never have to run that broken spreadsheet macro ever again! Once analysis is automated, it can be deployed as a service or applied to hundreds of thousands of records streaming from a database.

Alternatively, Pandas could be used to make critical decisions after establishing statistical associations between patterns, as indeed it is every day.

Next, be sure to check out Python's extensive database libraries (e.g., SQLAlchemy) or API clients (like the Google Sheets/Slides Python Client or the Airtable API) to put your results in front of domain experts.

The possibilities are endless, and are only enhanced by Python’s mature libraries and active community.
