Using Python for Your Daily Work: Streamlining Your Workflow and Boosting Productivity

Charles Lo
9 min read · Feb 15, 2023


Image source — https://www.teamly.com/blog/wp-content/uploads/2022/02/What-Are-the-Business-Process-Automation-Benefits.png

In this article, I would like to share some of my thoughts on how I actually use Python to boost my productivity at work.

Python is a versatile programming language that can be used in a wide range of fields, including data science, web development, automation, and more. While Python is often used for complex applications, it can also be used to simplify your daily workflow and increase your productivity. Here are some ways you can use Python to make your workday more efficient.

1. Automate Repetitive Tasks

Python is a great tool for automating repetitive tasks. You can use Python scripts to automate tasks such as sending emails, downloading files, and formatting data. For example, if you need to download a large number of files from the internet, you can write a short Python script to handle the entire batch, which can save you a significant amount of time and increase your productivity.
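A minimal sketch of that download idea, assuming you already have a list of URLs, might look like this:

import requests

# Hypothetical list of files to fetch; replace with your own URLs
urls = [
    'https://www.example.com/report1.pdf',
    'https://www.example.com/report2.pdf',
]

for url in urls:
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # stop early if a download fails
    # Use the last part of the URL as the local filename
    filename = url.rsplit('/', 1)[-1]
    with open(filename, 'wb') as f:
        f.write(response.content)
    print(f'Downloaded {filename}')

Here are some more examples of repetitive tasks that can be automated using Python: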

  • File Management: If you regularly work with large sets of files or directories, you can use Python to automate tasks like renaming files, creating backups, moving files to different folders, and deleting unwanted files. Here's some example code for renaming files in a directory:
import os

directory = '/path/to/directory'
for filename in os.listdir(directory):
    if filename.endswith('.txt'):
        os.rename(os.path.join(directory, filename),
                  os.path.join(directory, 'new_' + filename))

This code renames all the text files in a directory by adding the prefix “new_” to the original filename.
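Creating backups, also mentioned above, follows the same pattern. Here is a minimal sketch using the standard library's shutil module; the paths are placeholders:

import datetime
import shutil

# Placeholder paths; point these at your own folders
source = '/path/to/directory'
backup = '/path/to/backups/backup_' + datetime.date.today().isoformat()

# Copy the whole directory tree into a dated backup folder
# (fails if the destination already exists, which keeps old backups safe)
shutil.copytree(source, backup)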

  • Data Processing: If you work with data on a regular basis, Python can be used to automate tasks like data cleaning, merging, filtering, and summarizing. Here's some example code for summarizing data in a CSV file:
import pandas as pd

data = pd.read_csv('data.csv')
summary = data.groupby('category')['value'].sum()
summary.to_csv('summary.csv')

This code reads a CSV file, groups the data by category, sums the values in each category, and writes the summary to a new CSV file. Of course, this kind of automation is not limited to the simple processes shown above, but they are great examples to get you started right away.
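Merging and filtering, the other tasks mentioned above, follow the same pattern. A quick sketch, where the file names and column names are hypothetical:

import pandas as pd

# Hypothetical input files and column names; adjust to your own data
orders = pd.read_csv('orders.csv')
customers = pd.read_csv('customers.csv')

# Merge the two tables on a shared key, then keep only the large orders
merged = orders.merge(customers, on='customer_id')
large_orders = merged[merged['value'] > 1000]
large_orders.to_csv('large_orders.csv', index=False)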

  • Web Scraping: If you want to pull data from the web without opening a browser and copying and pasting it into your notes, this is something you can do:
import requests
from bs4 import BeautifulSoup

url = 'https://www.example.com'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
data = soup.find_all('div', {'class': 'article'})

This code sends a request to a website, parses the HTML using the BeautifulSoup library, and extracts data from all the div tags with the class “article”.
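To turn those tags into plain text you can actually work with, one more step is enough; a small follow-on sketch:

# Extract the visible text from each matched tag
articles = [tag.get_text(strip=True) for tag in data]
print(articles)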

Web scraping is a big topic that is hard to cover in a single article or story. I will give another, more complete example later in this article, as well as in my future writing.

2. Data Analysis and Visualization

Python is a popular language for data analysis and visualization. You can use Python to analyze large datasets, create charts and graphs, and generate reports. Python has several libraries, such as Pandas and Matplotlib, that make data analysis and visualization much easier. One scenario I have been asked about: you work in marketing and are responsible for analyzing website traffic and user behavior data. You receive a weekly report in CSV format with data from Google Analytics, but it takes a lot of time to manually process the data and create visualizations for your team. You want to automate this process to save time and improve accuracy. A sample script would be:

import pandas as pd
import matplotlib.pyplot as plt

# Load data from CSV file
df = pd.read_csv('weekly_report.csv')

# Filter data by date range
start_date = '2022-01-01'
end_date = '2022-01-07'
df = df[(df['date'] >= start_date) & (df['date'] <= end_date)]

# Group data by day and calculate total sessions and pageviews
df = df.groupby('date').agg({'sessions': 'sum', 'pageviews': 'sum'})

# Create line chart of sessions and pageviews over time
plt.plot(df.index, df['sessions'], label='Sessions')
plt.plot(df.index, df['pageviews'], label='Pageviews')
plt.xlabel('Date')
plt.ylabel('Count')
plt.title('Weekly Website Traffic')
plt.legend()
plt.show()

The sample code loads data from a CSV file containing weekly website traffic data from Google Analytics. It filters the data by a specified date range, groups the data by day, and calculates the total sessions and pageviews for each day. It then creates a line chart of sessions and pageviews over time using the Matplotlib library.

To use this code in your daily work, you can customize the date range and data columns to match your specific needs. You can also automate the data loading process using Python scripts or schedule the code to run automatically using a task scheduler.
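For instance, when the script runs unattended from a scheduler, you would typically swap plt.show() for a call that writes the chart to disk; a small sketch (the filename is arbitrary):

# Save the chart to a file instead of opening a window,
# so the script can run unattended from a scheduler
plt.savefig('weekly_traffic.png', dpi=150, bbox_inches='tight')
plt.close()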

Using Python for data analysis and visualization can save you time and improve the accuracy of your work by automating repetitive tasks and creating clear visualizations that communicate insights to your team.

3. Collaboration and Documentation

Python can be used for collaborative projects as well. You can use tools like Jupyter Notebook to share your Python code and results with others. Jupyter Notebook allows you to write and share live code, equations, visualizations, and narrative text. Additionally, you can use Python to create documentation for your projects. With tools like Sphinx, you can generate professional-looking documentation for your Python projects. For example, imagine a team of developers working on a software project that needs a central location to store and collaborate on project documentation:

  • Creating a Shared Google Drive Folder:
# Importing the relevant libraries
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Setting up the credentials
creds = service_account.Credentials.from_service_account_file(
    'path/to/credentials.json', scopes=['https://www.googleapis.com/auth/drive'])

# Creating a Drive API client
service = build('drive', 'v3', credentials=creds)

# Creating a folder in the shared drive
# (supportsAllDrives=True is needed when the parent lives in a shared drive)
folder_metadata = {
    'name': 'Project Documentation',
    'parents': ['shared-drive-folder-id'],
    'mimeType': 'application/vnd.google-apps.folder',
}
folder = service.files().create(
    body=folder_metadata, fields='id', supportsAllDrives=True).execute()
print('Folder ID: %s' % folder.get('id'))
  • Creating a Template for Project Documentation:
# Importing the relevant libraries
import jinja2

# Setting up the Jinja2 environment
template_loader = jinja2.FileSystemLoader(searchpath='path/to/template/directory')
template_env = jinja2.Environment(loader=template_loader)
# Rendering the template
template = template_env.get_template('project_documentation_template.html')
rendered_template = template.render(project_name='Project X')
# Saving the rendered template to a file
with open('path/to/output/directory/project_documentation.html', 'w') as f:
    f.write(rendered_template)
  • Uploading Project Documentation:
# Importing the relevant libraries
from googleapiclient.http import MediaFileUpload

# Uploading the project documentation to the shared drive folder
file_metadata = {'name': 'Project X Documentation', 'parents': [folder.get('id')]}
media = MediaFileUpload('path/to/output/directory/project_documentation.html', mimetype='text/html')
file = service.files().create(
    body=file_metadata, media_body=media, fields='id', supportsAllDrives=True).execute()
print('File ID: %s' % file.get('id'))

Of course, there are many more enterprise-grade tools such as Confluence, SharePoint, etc. However, this Python solution demonstrates how to use the Google Drive API to create a shared folder for project documentation, build a template using Jinja2, and upload the rendered template to the shared folder. It can be a great way to improve collaboration and documentation for smaller software development teams.
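Picking up the Sphinx mention from earlier: Sphinx's autodoc extension builds documentation straight from your docstrings, so it pays to write them in a structured style. A minimal sketch, with a hypothetical helper function:

def merge_reports(paths, output_path):
    """Merge several CSV reports into a single file.

    :param paths: list of input CSV file paths
    :param output_path: where to write the merged CSV
    :returns: the number of rows written
    """
    import pandas as pd

    merged = pd.concat(pd.read_csv(p) for p in paths)
    merged.to_csv(output_path, index=False)
    return len(merged)

After running sphinx-quickstart and enabling sphinx.ext.autodoc in conf.py, Sphinx can render docstrings like this one into professional-looking HTML pages.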

4. Task Scheduling

Python can be used for scheduling tasks as well. You can use the standard library's sched module, or a third-party library like schedule, to run tasks such as backups, data imports, and other jobs that need to execute on a regular basis. By scheduling tasks, you can ensure that important work is completed on time and increase your productivity.
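As a quick illustration of the built-in option, here is a minimal sched sketch; run_backup is a stand-in for your real job:

import sched
import time

def run_backup():
    # Stand-in for your real backup logic
    print('Backup started at', time.ctime())

# The scheduler needs a clock function and a delay function
scheduler = sched.scheduler(time.time, time.sleep)

# Run the backup once, 60 seconds from now (priority 1)
scheduler.enter(60, 1, run_backup)
scheduler.run()

For recurring jobs, the third-party schedule library used below has a friendlier API.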

Imagine that you work for a company that needs to update their financial data on a regular basis. The financial data is provided by a third-party API and needs to be processed and stored in a database. To automate this process, you decide to use Python’s task scheduling capabilities to run the update script at a specific time every day.

First, you would write a script to retrieve the financial data from the API, process it, and store it in the database. Next, you would use a task-scheduling library like "schedule" (a third-party package, installed with pip install schedule) to run this script at a specific time every day. Here is an example:

import schedule
import time

def update_financial_data():
    # retrieve financial data from API
    data = get_financial_data()

    # process and store data in database
    process_and_store_data(data)

# schedule update task to run every day at 8:00am
schedule.every().day.at("08:00").do(update_financial_data)

# loop to keep the script running
while True:
    schedule.run_pending()
    time.sleep(1)

In this example, the update_financial_data function retrieves the financial data from the API and processes it before storing it in the database (get_financial_data and process_and_store_data stand in for your own implementations). The schedule.every().day.at("08:00").do(update_financial_data) line schedules the function to run every day at 8:00 am. Finally, the while True loop keeps the script running so it can constantly check for scheduled tasks.

By using Python’s task scheduling capabilities, you can automate repetitive tasks like updating financial data, saving you time and increasing your daily work efficiency.

5. Web Scraping

Python is a powerful tool for web scraping. You can use it to extract data from websites and save it in a format that can be used for analysis. This can be especially useful for businesses that need to monitor their competitors, track prices, or analyze customer reviews. Although I already shared a small sample above to get you started, here is a more complete one.

Let’s say you work for a marketing company and you’re responsible for tracking your clients’ competitors’ pricing on a regular basis. Instead of manually going to each competitor’s website and copying the prices into a spreadsheet, you can use Python and web scraping to automate the process.

First, you’ll need to identify the HTML tags and attributes that contain the relevant pricing information on each competitor’s website. You can do this by inspecting the HTML source code of the page using your browser’s developer tools.

(Image: an example of pricing tags in a page's HTML source)

Once you’ve identified the tags and attributes, you can use a Python library like BeautifulSoup to extract the data from the website. Here’s some example code to get you started:

import requests
from bs4 import BeautifulSoup

url = 'https://www.example.com/pricing'
response = requests.get(url)

soup = BeautifulSoup(response.content, 'html.parser')

price_tags = soup.find_all('span', {'class': 'price'})

prices = [tag.text for tag in price_tags]

print(prices)

In this example, we’re using the requests library to retrieve the HTML content of the pricing page, and then passing that content to BeautifulSoup to parse the HTML and extract the pricing data. We're specifically looking for span tags with a class of "price", which we assume contain the relevant pricing information.

The find_all method returns a list of all matching HTML elements, which we then loop over and extract the text content using the text attribute. Finally, we print the list of prices.

You could then use this code to automatically retrieve pricing data from each competitor’s website on a regular basis, and integrate the data into your internal pricing tracking system. This would save you a significant amount of time and reduce the risk of errors compared to manually copying and pasting the data.
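Putting this together with the task scheduling from section 4, a daily price logger could look like the sketch below; the URL, the CSS class, and the output file are all assumptions to adapt to your own targets:

import csv
import datetime

import requests
from bs4 import BeautifulSoup

def log_prices():
    # Hypothetical competitor pricing page and tag structure
    url = 'https://www.example.com/pricing'
    response = requests.get(url, timeout=30)
    soup = BeautifulSoup(response.content, 'html.parser')
    prices = [tag.text for tag in soup.find_all('span', {'class': 'price'})]

    # Append today's prices to a running CSV log
    with open('price_log.csv', 'a', newline='') as f:
        csv.writer(f).writerow([datetime.date.today().isoformat()] + prices)

log_prices()

Paired with the schedule snippet from section 4, this could run automatically every morning without anyone touching a browser.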

About the Author

Hi Medium Community, my name is Charles Lo and I'm a project manager and data manager at Luxoft. I'm passionate about technology and hold several certifications, including Offensive Security Certified Professional, AWS Certified Solutions Architect, Red Hat Certified Engineer, and PMP (Project Management Professional). I have years of experience working in the banking, automotive, and open-source industries and have gained a wealth of knowledge throughout my career.

As I continue on my Medium journey, I hope to share my experiences and help others grow in their respective fields. Whether it’s providing tips for project management, insights into data analytics, or sharing my passion for open-source technology, I look forward to contributing to the Medium community and helping others succeed.
