Transitioning From Excel To Python: Essential Functions For SEO Data Analysis

Learning to code, whether with Python, JavaScript, or another programming language, has a whole host of benefits, including the ability to work with larger datasets and automate repetitive tasks.

But despite the benefits, many SEO professionals are yet to make the transition – and I completely understand why! It isn't an essential skill for SEO, and we're all busy people.

If you're pressed for time, and you already know how to accomplish a task within Excel or Google Sheets, then changing tack can feel like reinventing the wheel.

When I first started coding, I initially only used Python for tasks that I couldn't accomplish in Excel – and it's taken several years to get to the point where it's my de facto choice for data processing.

Looking back, I'm incredibly glad that I persisted, but at times it was a frustrating experience, with many an hour spent scanning threads on Stack Overflow.

This post is designed to spare other SEO professionals the same fate.

Within it, we'll cover the Python equivalents of the most commonly used Excel formulas and features for SEO data analysis – all of which are available within a Google Colab notebook linked in the summary.

Specifically, you'll learn the equivalents of:

  • LEN.
  • Drop Duplicates.
  • Text to Columns.
  • Find and Replace.
  • IF.
  • IFS.
  • Pivot Tables.

Amazingly, to accomplish all of this, we'll primarily be using a single library – Pandas – with a little help in places from its big brother, NumPy.


For the sake of brevity, there are a few things we won't be covering today, including:

  • Installing Python.
  • Basic Pandas, like importing CSVs, filtering, and previewing dataframes.

If you're unsure about any of this, then Hamlet's guide to Python data analysis for SEO is the perfect primer.

Now, without further ado, let's jump in.


LEN

LEN provides a count of the number of characters within a string of text.

For SEO specifically, a common use case is measuring the length of title tags or meta descriptions to determine whether they'll be truncated in search results.

Within Excel, if we wanted to count the second cell of column A, we'd enter:

=LEN(A2)

Screenshot from Microsoft Excel, November 2022

Python isn't too dissimilar, as we can rely on the inbuilt len function, which can be combined with Pandas' loc[] to access a specific row of data within a column:

len(df['Title'].loc[0])

In this example, we're getting the length of the first row in the "Title" column of our dataframe.

Screenshot from VS Code, November 2022

Finding the length of a cell isn't that useful for SEO, though. Normally, we'd want to apply a function to an entire column!

In Excel, this can be achieved by selecting the formula cell on the bottom right-hand corner and either dragging it down or double-clicking.

When working with a Pandas dataframe, we can use str.len to calculate the length of rows within a series, then store the results in a new column:

df['Length'] = df['Title'].str.len()

Str.len is a 'vectorized' operation, which is designed to be applied simultaneously to a series of values. We'll use these operations extensively throughout this article, as they almost universally end up being faster than a loop.
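To see what this means in practice, here's a minimal sketch with made-up titles, comparing the vectorized str.len against a Python-level apply(len) loop – both return identical results, but the vectorized form scales far better on large datasets:

```python
import pandas as pd

# Hypothetical titles, purely for illustration
df = pd.DataFrame({'Title': ['Blue Widgets', 'Red Widgets For Sale', 'Widgets']})

# Vectorized: the operation runs across the whole series at once
df['Length'] = df['Title'].str.len()

# Same result, but executed as a slower Python-level loop
df['Length_loop'] = df['Title'].apply(len)

print(df['Length'].tolist())  # [12, 20, 7]
```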

Another frequent application of LEN is to combine it with SUBSTITUTE to count the number of words in a cell:

=LEN(TRIM(A2))-LEN(SUBSTITUTE(A2," ",""))+1

In Pandas, we can achieve this by combining the str.split and str.len functions together:

df['No. Words'] = df['Title'].str.split().str.len()

We'll cover str.split in more detail later, but essentially, what we're doing is splitting our data based upon whitespaces within the string, then counting the number of component parts.

Screenshot from VS Code, November 2022

Dropping Duplicates

Excel's 'Remove Duplicates' feature provides an easy way to remove duplicate values within a dataset, either by deleting entirely duplicate rows (when all columns are selected) or removing rows with the same values in specific columns.

Screenshot from Microsoft Excel, November 2022

In Pandas, this functionality is provided by drop_duplicates.

To drop duplicate rows within a dataframe, type:

df.drop_duplicates(inplace=True)

To drop rows based on duplicates within a singular column, include the subset parameter:

df.drop_duplicates(subset="column", inplace=True)

Or specify multiple columns within a list:

df.drop_duplicates(subset=['column','column2'], inplace=True)

One addition above that's worth calling out is the presence of the inplace parameter. Including inplace=True allows us to overwrite our existing dataframe without needing to create a new one.

There are, of course, times when we want to preserve our raw data. In this case, we can assign our deduped dataframe to a different variable:

df2 = df.drop_duplicates(subset="column")
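A related parameter worth knowing is keep, which controls which of the duplicates survives. A quick sketch with hypothetical ranking data:

```python
import pandas as pd

# Hypothetical export with a duplicated keyword
df = pd.DataFrame({
    'Keyword': ['widgets', 'widgets', 'gadgets'],
    'Position': [3, 5, 1],
})

# keep='first' (the default) retains the first occurrence,
# keep='last' retains the final one instead
first = df.drop_duplicates(subset='Keyword', keep='first')
last = df.drop_duplicates(subset='Keyword', keep='last')

print(first['Position'].tolist())  # [3, 1]
print(last['Position'].tolist())   # [5, 1]
```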

Text To Columns

Another everyday essential, the 'text to columns' feature can be used to split a text string based on a delimiter, such as a slash, comma, or whitespace.

As an example, splitting a URL into its domain and individual subfolders.

Screenshot from Microsoft Excel, November 2022

When dealing with a dataframe, we can use the str.split function, which creates a list for each entry within a series. This can be converted into multiple columns by setting the expand parameter to True:

df['URL'].str.split(pat="/", expand=True)

Screenshot from VS Code, November 2022

As is often the case, our URLs in the image above have been broken up into inconsistent columns, because they don't feature the same number of folders.

This can make things tricky when we want to save our data within an existing dataframe.

Specifying the n parameter limits the number of splits, allowing us to create a specific number of columns:

df[['Domain', 'Folder1', 'Folder2', 'Folder3']] = df['URL'].str.split(pat="/", expand=True, n=3)

Another option is to use pop to remove the column from the dataframe, perform the split, and then re-add it with the join function:

df = df.join(df.pop('Split').str.split(pat="/", expand=True))

Duplicating the URL to a new column before the split allows us to preserve the full URL. We can then rename the new columns:

df['Split'] = df['URL']

df = df.join(df.pop('Split').str.split(pat="/", expand=True))

df.rename(columns={0: 'Domain', 1: 'Folder1', 2: 'Folder2', 3: 'Folder3', 4: 'Parameter'}, inplace=True)

Screenshot from VS Code, November 2022


CONCAT

The CONCAT function allows users to combine multiple strings of text, such as when generating a list of keywords by adding different modifiers.

In this case, we're adding "mens" and whitespace to column A's list of product types:

=CONCAT($F$1," ",A2)
Screenshot from Microsoft Excel, November 2022

Assuming we're dealing with strings, the same can be achieved in Python using the arithmetic operator:

df['Combined'] = 'mens' + ' ' + df['Keyword']

Or specify multiple columns of data:

df['Combined'] = df['Subdomain'] + df['URL']
Screenshot from VS Code, November 2022

Pandas has a dedicated concat function, but this is more useful when trying to combine multiple dataframes with the same columns.

For instance, if we had multiple exports from our favorite link analysis tool:

df = pd.read_csv('data.csv')
df2 = pd.read_csv('data2.csv')
df3 = pd.read_csv('data3.csv')

dflist = [df, df2, df3]

df = pd.concat(dflist, ignore_index=True)
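If the number of exports varies, hardcoding each read_csv call gets tedious. As a sketch – the 'exports' folder name is purely hypothetical – the standard library's glob module can gather the files for you:

```python
import glob

import pandas as pd


def combine_exports(pattern):
    """Read every CSV matching the pattern and stack the results."""
    frames = [pd.read_csv(path) for path in glob.glob(pattern)]
    return pd.concat(frames, ignore_index=True)


# e.g.: df = combine_exports('exports/*.csv')
```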


SEARCH/FIND

The SEARCH and FIND formulas provide a way of locating a substring within a text string.

These commands are commonly combined with ISNUMBER to create a Boolean column that helps filter down a dataset, which can be extremely helpful when performing tasks like log file analysis, as explained in this guide. E.g.:

=ISNUMBER(SEARCH("engine",A2))

Screenshot from Microsoft Excel, November 2022

The difference between SEARCH and FIND is that FIND is case-sensitive.

The equivalent Pandas function, str.contains, is case-sensitive by default:

df['Journal'] = df['URL'].str.contains('engine', na=False)

Case insensitivity can be enabled by setting the case parameter to False:

df['Journal'] = df['URL'].str.contains('engine', case=False, na=False)

In either scenario, including na=False will prevent null values from being returned within the Boolean column.

One huge advantage of using Pandas here is that, unlike Excel, regex is natively supported by this function – as it is in Google Sheets via REGEXMATCH.

Chain together multiple substrings by using the pipe character, also known as the OR operator:

df['Journal'] = df['URL'].str.contains('engine|search', na=False)
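Once the Boolean column exists, it doubles as a filter mask. A minimal sketch with invented URLs:

```python
import pandas as pd

# Hypothetical crawl data
df = pd.DataFrame({'URL': [
    'https://example.com/search-engine-news',
    'https://example.com/about',
    'https://example.com/search-tips',
]})

df['Journal'] = df['URL'].str.contains('engine|search', na=False)

# Keep only the rows where the mask is True
matches = df[df['Journal']]

print(len(matches))  # 2
```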

Find And Replace

Excel's "Find and Replace" feature provides an easy way to individually or bulk replace one substring with another.

Screenshot from Microsoft Excel, November 2022

When processing data for SEO, we're most likely to select an entire column and "Replace All."

The SUBSTITUTE formula provides another option here and is useful if you don't want to overwrite the existing column.

As an example, we can change the protocol of a URL from HTTP to HTTPS, or remove it by replacing it with nothing.

When working with dataframes in Python, we can use str.replace:

df['URL'] = df['URL'].str.replace('http://', 'https://')

df['URL'] = df['URL'].str.replace('http://', '')  # replace with nothing

Again, unlike Excel, regex can be used – like with Google Sheets' REGEXREPLACE:

df['URL'] = df['URL'].str.replace('http://|https://', '', regex=True)

Alternatively, if you want to replace multiple substrings with different values, you can use Python's replace method and provide a list.

This prevents you from having to chain multiple str.replace functions:

# 'https://' is listed first so URLs rewritten by the second pattern aren't matched twice
df['URL'] = df['URL'].replace(['https://', 'http://'], ['https://www.', 'https://www.'], regex=True)
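A dictionary works here too, keeping each pattern next to its replacement. A runnable sketch under the assumption that the two patterns can't rewrite each other's output (anchoring them helps guarantee this):

```python
import pandas as pd

df = pd.DataFrame({'URL': [
    'http://example.com/index.html',
    'https://example.com/page',
]})

# Each regex key maps to its own replacement value
df['URL'] = df['URL'].replace(
    {'^http://': 'https://', r'/index\.html$': '/'},
    regex=True,
)

print(df['URL'].tolist())  # ['https://example.com/', 'https://example.com/page']
```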


LEFT/MID/RIGHT

Extracting a substring within Excel requires the usage of the LEFT, MID, or RIGHT functions, depending on where the substring is located within a cell.

Let's say we want to extract the root domain and subdomain from a URL:

Screenshot from Microsoft Excel, November 2022

Using a combination of MID and multiple FIND functions, this formula is ugly, to say the least – and things get a lot worse for more complex extractions.

Again, Google Sheets does this better than Excel, because it has REGEXEXTRACT.

What a shame that when you feed it larger datasets, it melts faster than a Babybel on a hot radiator.

Thankfully, Pandas offers str.extract, which works in a similar way:

df['Domain'] = df['URL'].str.extract('.*://?([^/]+)')
Screenshot from VS Code, November 2022

Combine with fillna to prevent null values, as you would in Excel with IFERROR:

df['Domain'] = df['URL'].str.extract('.*://?([^/]+)').fillna('-')
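str.extract isn't limited to one capture group, either. With named groups, a single pattern can populate several columns at once – a sketch with invented URLs:

```python
import pandas as pd

df = pd.DataFrame({'URL': [
    'https://www.example.com/blog/post',
    'http://shop.example.com/product/widget',
]})

# Named groups become the column names of the result
parts = df['URL'].str.extract(r'(?P<Protocol>https?)://(?P<Domain>[^/]+)')

print(parts['Domain'].tolist())  # ['www.example.com', 'shop.example.com']
```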


IF

IF statements allow you to return different values, depending on whether or not a condition is met.

To illustrate, suppose that we want to create a label for keywords that are ranking within the top three positions.

Screenshot from Microsoft Excel, November 2022

Rather than using Pandas in this instance, we can lean on NumPy and the where function (remember to import NumPy, if you haven't already):

df['Top 3'] = np.where(df['Position'] <= 3, 'Top 3', 'Not Top 3')

Multiple conditions can be used for the same evaluation by using the AND/OR operators, and enclosing the individual criteria within round brackets:

df['Top 3'] = np.where((df['Position'] <= 3) & (df['Position'] != 0), 'Top 3', 'Not Top 3')

In the above, we're returning "Top 3" for any keywords with a ranking less than or equal to three, excluding any keywords ranking in position zero.


IFS

Sometimes, rather than specifying multiple conditions for the same evaluation, you may want multiple conditions that return different values.

In this case, the best solution is using IFS:

=IFS(B2<=3,"Top 3",B2<=10,"Top 10",B2<=20,"Top 20")

Screenshot from Microsoft Excel, November 2022

Again, NumPy provides us with the best solution when working with dataframes, via its select function.

With select, we can create a list of conditions, choices, and an optional value for when all of the conditions are false:

conditions = [df['Position'] <= 3, df['Position'] <= 10, df['Position'] <= 20]

choices = ['Top 3', 'Top 10', 'Top 20']

df['Rank'] = np.select(conditions, choices, 'Not Top 20')
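One thing to keep in mind: np.select evaluates the conditions in order and uses the first that matches, so overlapping ranges need to run from narrowest to broadest. A runnable sketch with hypothetical positions:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Position': [2, 7, 15, 30]})

# Ordered from narrowest to broadest – position 2 satisfies all three
# conditions, but the first matching one wins
conditions = [df['Position'] <= 3, df['Position'] <= 10, df['Position'] <= 20]
choices = ['Top 3', 'Top 10', 'Top 20']

df['Rank'] = np.select(conditions, choices, 'Not Top 20')

print(df['Rank'].tolist())  # ['Top 3', 'Top 10', 'Top 20', 'Not Top 20']
```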

It's also possible to have multiple conditions for each of the evaluations.

Let's say we're working with an ecommerce retailer with product listing pages (PLPs) and product display pages (PDPs), and we want to label the type of branded pages ranking within the top 10 results.

The easiest solution here is to look for specific URL patterns, such as a subfolder or extension, but what if competitors have similar patterns?

In this scenario, we could do something like this:

conditions = [(df['URL'].str.contains('/category/')) & (df['Brand Rank'] > 0),
(df['URL'].str.contains('/product/')) & (df['Brand Rank'] > 0),
(~df['URL'].str.contains('/product/')) & (~df['URL'].str.contains('/category/')) & (df['Brand Rank'] > 0)]

choices = ['PLP', 'PDP', 'Other']

df['Brand Page Type'] = np.select(conditions, choices, None)

Above, we're using str.contains to evaluate whether or not a URL in the top 10 matches our brand's pattern, then using the "Brand Rank" column to exclude any competitors.

In this example, the tilde sign (~) indicates a negative match. In other words, we're saying we want every brand URL that doesn't match the pattern for a "PDP" or "PLP" to match the criteria for "Other."

Lastly, None is included because we want non-brand results to return a null value.

Screenshot from VS Code, November 2022


VLOOKUP

VLOOKUP is an essential tool for joining together two distinct datasets on a common column.

In this case, adding the URLs within column N to the keyword, position, and search volume data in columns A-C, using the shared "Keyword" column:

Screenshot from Microsoft Excel, November 2022

To do something similar with Pandas, we can use merge.

Replicating the functionality of an SQL join, merge is an incredibly powerful function that supports a variety of different join types.

For our purposes, we want to use a left join, which will maintain our first dataframe and only merge in matching values from our second dataframe:

mergeddf = df.merge(df2, how='left', on='Keyword')

One added benefit of performing a merge over a VLOOKUP is that you don't have to have the shared data in the first column of the second dataset, as with the newer XLOOKUP.

It will also pull in multiple rows of data rather than the first match it finds.

One common issue when using the function is for unwanted columns to be duplicated. This occurs when multiple shared columns exist, but you attempt to match using one.

To prevent this – and improve the accuracy of your matches – you can specify a list of columns:

mergeddf = df.merge(df2, how='left', on=['Keyword', 'Search Volume'])
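When you suspect some keywords won't match at all, merge's indicator parameter is a handy diagnostic – it adds a '_merge' column flagging which side each row came from. A sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'Keyword': ['widgets', 'gadgets'], 'Position': [2, 8]})
df2 = pd.DataFrame({'Keyword': ['widgets'], 'URL': ['https://example.com/widgets']})

# '_merge' is 'both' for matched rows and 'left_only' for unmatched ones
mergeddf = df.merge(df2, how='left', on='Keyword', indicator=True)

print(mergeddf['_merge'].astype(str).tolist())  # ['both', 'left_only']
```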

In certain scenarios, you may actively want these columns to be included. For instance, when attempting to merge multiple monthly ranking reports:

mergeddf = df.merge(df2, on='Keyword', how='left', suffixes=('', '_october')) \
    .merge(df3, on='Keyword', how='left', suffixes=('', '_september'))

The above code snippet executes two merges to join together three dataframes with the same columns – which are our rankings for November, October, and September.

By labeling the months within the suffix parameters, we end up with a much cleaner dataframe that clearly displays the month, as opposed to the defaults of _x and _y seen in the previous example.

Screenshot from VS Code, November 2022


COUNTIF/SUMIF/AVERAGEIF

In Excel, if you want to perform a statistical function based on a condition, you're likely to use either COUNTIF, SUMIF, or AVERAGEIF.

Commonly, COUNTIF is used to determine how many times a specific string appears within a dataset, such as a URL.

We can accomplish this by declaring the "URL" column as our range, then the URL within an individual cell as our criteria:

Screenshot from Microsoft Excel, November 2022

In Pandas, we can achieve the same outcome by using the groupby function:

df.groupby('URL')['URL'].count()

Screenshot from VS Code, November 2022

Here, the column declared within the round brackets indicates the individual groups, and the column listed in the square brackets is where the aggregation (i.e., the count) is performed.

The output we're receiving isn't perfect for this use case, though, because it's consolidated the data.

Typically, when using Excel, we'd have the URL count inline within our dataset. Then we can use it to filter to the most frequently listed URLs.

To do this, use transform and store the output in a column:

df['URL Count'] = df.groupby('URL')['URL'].transform('count')

Screenshot from VS Code, November 2022

You can also apply custom functions to groups of data by using a lambda (anonymous) function:

df['Google Count'] = df.groupby(['URL'])['URL'].transform(lambda x: x[x.str.contains('google')].count())

In our examples so far, we've been using the same column for our grouping and aggregations, but we don't have to. Similarly to COUNTIFS/SUMIFS/AVERAGEIFS in Excel, it's possible to group using one column, then apply our statistical function to another.

Going back to the earlier search engine results page (SERP) example, we may want to count all ranking PDPs on a per-keyword basis and return this number alongside our existing data:

df['PDP Count'] = df.groupby(['Keyword'])['URL'].transform(lambda x: x[x.str.contains('/product/|/prd/|/pd/')].count())

Screenshot from VS Code, November 2022

Which, in Excel parlance, would look something like this:

=COUNTIFS(A:A,A2,N:N,"*/product/*")


Pivot Tables

Last, but by no means least, it's time to talk pivot tables.

In Excel, a pivot table is likely to be our first port of call if we want to summarise a large dataset.

For instance, when working with ranking data, we may want to identify which URLs appear most frequently, and their average ranking position.

Screenshot from Microsoft Excel, November 2022

Again, Pandas has its own pivot table equivalent – but if all you want is a count of unique values within a column, this can be accomplished using the value_counts function:

count = df['URL'].value_counts()

Using groupby is also an option.

Earlier in the article, performing a groupby that aggregated our data wasn't what we wanted – but it's precisely what's required here:

grouped = df.groupby('URL').agg(
     url_frequency=('Keyword', 'count'),
     avg_position=('Position', 'mean'),
)

Screenshot from VS Code, November 2022

Two aggregate functions have been applied in the example above, but this could easily be expanded upon, and 13 different types are available.
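For example, min, max, and sum are all valid aggregations, and different columns can feed different functions within the same call – a sketch using hypothetical ranking data:

```python
import pandas as pd

df = pd.DataFrame({
    'URL': ['/a', '/a', '/b'],
    'Position': [1, 5, 3],
    'Search Volume': [100, 50, 200],
})

# Each named aggregation pairs a source column with a function
grouped = df.groupby('URL').agg(
    url_frequency=('Position', 'count'),
    best_position=('Position', 'min'),
    total_volume=('Search Volume', 'sum'),
)

print(grouped.loc['/a', 'best_position'])  # 1
```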

There are, of course, times when we do want to use pivot_table, such as when performing multi-dimensional operations.

To illustrate what this means, let's reuse the ranking groupings we made using conditional statements and attempt to display the number of times a URL ranks within each group.

ranking_groupings = df.groupby(['URL', 'Grouping']).agg(
     url_frequency=('Keyword', 'count'),
)

Screenshot from VS Code, November 2022

This isn't the best format to use, as multiple rows have been created for each URL.

Instead, we can use pivot_table, which will display the data in separate columns:

pivot = pd.pivot_table(df,
                       index='URL',
                       columns='Grouping',
                       values='Keyword',
                       aggfunc='count')

Screenshot from VS Code, November 2022
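Since some URLs won't rank in every grouping, the pivoted table will contain NaNs by default. Under the same assumed column names, fill_value swaps those for zeros – a minimal runnable sketch with invented data:

```python
import pandas as pd

# Hypothetical ranking data using the column names assumed above
df = pd.DataFrame({
    'URL': ['/a', '/a', '/b'],
    'Grouping': ['Top 3', 'Top 10', 'Top 3'],
    'Keyword': ['k1', 'k2', 'k3'],
})

# fill_value=0 replaces the NaNs created by missing URL/grouping combinations
pivot = pd.pivot_table(df,
                       index='URL',
                       columns='Grouping',
                       values='Keyword',
                       aggfunc='count',
                       fill_value=0)

print(pivot.loc['/b', 'Top 10'])  # 0
```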

Final Thoughts

Whether you're looking for inspiration to start learning Python, or are already leveraging it within your SEO workflows, I hope that the above examples help you along on your journey.

As promised, you'll find a Google Colab notebook with all of the code snippets here.

Of course, we've barely scratched the surface of what's possible, but understanding the basics of Python data analysis will give you a strong base upon which to build.

More resources:

Featured Image: mapo_japan/Shutterstock
