Intermediate Python Programming

From Grundy

For many developers, picking up the basics of Python is the easy part. Once they have mastered those basics, a whole new world opens up: learning the libraries, frameworks, and best practices that surround the language.

Once you get the feeling that you have only scratched the surface of "What can I do with Python?", this tutorial will help you dive deeper into concepts & libraries that will prove extremely useful as you develop applications in Python. See the following sections for a range of different applications.

The Concept of Inheritance

After getting an overview of classes & objects, which form the basic building blocks of an Object-Oriented Programming language like Python, we will now explore another essential concept of OOP, Inheritance, & its implementation in Python. Inheritance models what is called an is-a relationship: when a Derived class inherits from a Base class, you create a relationship where Derived is a specialized version of Base.

For example, consider a base class Bird and you derive from it to create a Parrot class. The inheritance relationship states that a Parrot is a Bird. This means that Parrot inherits the interface and implementation of Bird, and Parrot objects can be used to replace Bird objects in the application.
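The Bird/Parrot relationship can be sketched in a few lines; the method names here are invented purely for illustration:

```python
class Bird:
    def fly(self):
        return "flapping wings"

class Parrot(Bird):          # a Parrot is-a Bird
    def speak(self):
        return "Polly wants a cracker"

polly = Parrot()
print(polly.fly())              # inherited from Bird: "flapping wings"
print(polly.speak())            # Parrot's own behaviour
print(isinstance(polly, Bird))  # True: a Parrot can stand in for a Bird
```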

Inheritance in Python

To demonstrate inheritance in Python, we will take the Human class from the Python for Beginners tutorial & extend it. Consider the class as shown here:

class Human:
    species = "H. sapiens"

    def __init__(self, name):
        self.name = name
        self._age = 0

    def say(self, msg):
        print("{name}: {message}".format(name=self.name, message=msg))

    def sing(self):
        return 'yo... yo... microphone check... one two... one two...'

    @classmethod
    def get_species(cls):
        return cls.species

    @staticmethod
    def grunt():
        return "*grunt*"

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, age):
        self._age = age

    @age.deleter
    def age(self):
        del self._age

Using the Human class defined above as the base or parent class, we can define a child class, Superhero, which inherits attributes like species, name, and age, as well as methods like sing and grunt, from the Human class, but can also have its own unique properties.

# Specify the parent class(es) as parameters to the class definition
class Superhero(Human):

    # Child classes can override their parents' attributes
    species = 'Superhuman'

    # Children automatically inherit their parent class's constructor,
    # including its arguments, but can also define additional arguments
    def __init__(self, name, movie=False, superpowers=["super strength", "bulletproofing"]):
        # add additional class attributes:
        self.fictional = True
        self.movie = movie
        # be aware of mutable default values, since defaults are shared
        self.superpowers = superpowers

        # The "super" function lets you access the parent class's methods
        # that are overridden by the child, in this case, the __init__ method.
        # This calls the parent class constructor:
        super().__init__(name)

    # override the sing method
    def sing(self):
        return 'Dun, dun, DUN!'

    # add an additional instance method
    def boast(self):
        for power in self.superpowers:
            print("I wield the power of {pow}!".format(pow=power))

# Now we create an object of this inherited class & call its methods
if __name__ == '__main__':
    sup = Superhero(name="Tick")

    # Instance type checks
    if isinstance(sup, Human):
        print('I am human')
    if type(sup) is Superhero:
        print('I am a superhero')

    # Calls parent method but uses its own class attribute
    print(sup.get_species())    # => Superhuman

    # Calls overridden method
    print(sup.sing())           # => Dun, dun, DUN!

    # Calls method from Human
    sup.say('Spoon')            # => Tick: Spoon

    # Call method that exists only in Superhero
    sup.boast()                 # => I wield the power of super strength!
                                # => I wield the power of bulletproofing!

    # Inherited property
    sup.age = 31
    print(sup.age)              # => 31
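The warning about mutable default values in Superhero's constructor deserves a standalone demonstration. This sketch is independent of the classes above, and the function names are hypothetical:

```python
# A mutable default is evaluated once, when the function is defined,
# and the same list object is shared across every call that relies on it.
def append_power(power, powers=[]):
    powers.append(power)
    return powers

print(append_power("flight"))   # ['flight']
print(append_power("x-ray"))    # ['flight', 'x-ray'] -- the same list again!

# The conventional fix: default to None and create a fresh list per call
def append_power_safe(power, powers=None):
    if powers is None:
        powers = []
    powers.append(power)
    return powers

print(append_power_safe("flight"))  # ['flight']
print(append_power_safe("x-ray"))   # ['x-ray'] -- a new list each call
```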

Multiple Inheritance

Just as the Superhero class inherited from the Human class in the previous example, a class can also inherit from several base classes. This is called Multiple Inheritance. We demonstrate the concept by creating a Bat class & a Batman class that inherits from both Superhero & Bat:

# Define the Bat class
class Bat:
    species = 'Baty'

    def __init__(self, can_fly=True):
        self.fly = can_fly

    def say(self, msg):
        msg = '... ... ...'
        return msg

    def sonar(self):
        return '))) ... ((('

# Define Batman as a child that inherits from both Superhero and Bat
class Batman(Superhero, Bat):

    def __init__(self, *args, **kwargs):
        # Here, we call the constructors of the parent classes explicitly
        Superhero.__init__(self, 'anonymous', movie=True, superpowers=['Wealthy'], *args, **kwargs)
        Bat.__init__(self, *args, can_fly=False, **kwargs)

        self.name = 'Sad Affleck'

    def sing(self):
        return 'nan nan nan nan nan batman!'


if __name__ == '__main__':
    sup = Batman()

    # Calls parent method but uses its own class attribute
    print(sup.get_species())    # => Superhuman

    # Calls overridden method
    print(sup.sing())           # => nan nan nan nan nan batman!

    # Calls method from Human, because inheritance order matters
    sup.say('I agree')          # => Sad Affleck: I agree

    # Call method that exists only in the 2nd ancestor
    print(sup.sonar())          # => ))) ... (((

    # Inherited property
    sup.age = 100
    print(sup.age)              # => 100

    # Inherited attribute from the 2nd ancestor, whose default value was overridden
    print('Can I fly? ' + str(sup.fly)) # => Can I fly? False
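Which parent's method wins when several ancestors define it (as Human and Bat both define say) is governed by Python's Method Resolution Order (MRO). The tiny classes below are a self-contained illustration, separate from the Batman example:

```python
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    def who(self):
        return "C"

class D(B, C):      # inherits from both B and C
    pass

d = D()
print(d.who())                            # "B": B is listed first, so it comes first in the MRO
print([cls.__name__ for cls in D.mro()])  # ['D', 'B', 'C', 'A', 'object']
```

Calling mro() on any class (e.g. Batman.mro()) shows the exact order Python searches for an attribute.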


File I/O using Python

A file is a named location on disk used to store related information: it stores data permanently in non-volatile memory. When we want to read from or write to a file, we need to open it first. When we are done, it needs to be closed so that the resources tied to the file are freed. Hence, in Python, a file operation takes place in the following order.

  • Open a file
  • Read or write (perform operation)
  • Close the file

Opening a File

Python has a built-in function open() to open a file. This function returns a file object, also called a handle, which is used to read or modify the file. We can specify the mode while opening a file: whether we want to read 'r', write 'w' or append 'a' to the file. We also specify whether we want to open the file in text mode (the default) or binary mode ('b'). Specifying '+' indicates that the file is open for both reading & writing.

>>> f = open("test.txt")      # equivalent to 'r' or 'rt'
>>> f = open("test.txt",'w')  # write in text mode
>>> f = open("img.bmp",'r+b') # read and write in binary mode

Reading From a File

To read a file in Python, we must open the file in read mode. There are several built-in functions to read characters & lines from a file.

>>> f = open("test.txt",'r')
>>> f.read(4)                    # read the first 4 characters
>>> f.read(5)                    # read the next 5 characters
>>> f.read()                     # read up to the end of the file (EOF)
>>> f.tell()                     # get the current position of the file pointer
>>> f.seek(0)                    # move the file cursor back to the beginning

>>> f.readline()                 # read an individual line of the file
>>> f.readlines()                # return a Python list with each element being a line of the file
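A file object can also be iterated over directly, yielding one line at a time without loading the whole file into memory. A minimal sketch, creating a small test.txt on the spot with made-up contents:

```python
# Create a small file to work with (hypothetical contents)
with open("test.txt", "w") as f:
    f.write("first line\nsecond line\n")

# Iterating over the file object yields one line at a time
lines = []
with open("test.txt") as f:
    for line in f:
        lines.append(line.rstrip("\n"))

print(lines)   # ['first line', 'second line']
```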

Writing To A File

In order to write to a file in Python, we need to open it in write 'w' mode (Caution: this will erase any existing contents of the file), append 'a' mode (data is added to the end of the file) or exclusive creation 'x' mode.

>>> f = open("test.txt",'w')
>>> f.write("my first file\n")
>>> f.write("This file\n\n")
>>> f.write("contains three lines\n")
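write() takes a single string; to write a list of lines in one call there is also writelines(), which, note, does not add newlines itself. A short sketch (the file name test2.txt is arbitrary):

```python
lines = ["my first file\n", "This file\n\n", "contains three lines\n"]

with open("test2.txt", "w") as f:
    f.writelines(lines)        # newlines must already be present in the strings

with open("test2.txt") as f:
    print(f.read())
```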

Closing a File

Closing a file frees up the resources that were tied to it and is done using the close() method. After you close a file, you cannot access it again until you reopen it. Attempting to read from or write to a closed file object raises a ValueError exception:

>>> f = open("/test.txt", "w")
>>> f.close()
>>> f.read()
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    f.read()
ValueError: I/O operation on closed file.
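An exception raised between open() and close() would leave the file open. One way to guarantee the close is a try/finally block ('example.txt' is a hypothetical file name):

```python
f = open("example.txt", "w")
try:
    f.write("some data\n")     # perform the operation
finally:
    f.close()                  # runs even if the write raises an exception

print(f.closed)                # True
```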

A cleaner approach is to use the with keyword, within which a file object can be scoped & referenced. This removes the hassle of opening & closing files & the problems associated with manipulating a closed file or unknowingly closing the same file multiple times.

with open("test.txt", "r+") as file:
    # File object is now open.
    # Do stuff with the file:
    file.read()

# File object is now closed.
# Do other things...

A Complete Example

Now that we know the basics of file handling, we can combine this knowledge with other aspects of Python, such as string manipulation and list manipulation, & deal with files efficiently. Here is an example of inserting a piece of text into the middle of a file:

# Open the file as read-only
with open("test.txt", "r") as file:
    contents = file.readlines()

contents.insert(1, "This goes between line 1 and 2\n")

# Re-open in write-only format to overwrite old file
with open("test.txt", "w") as file:
    contents = "".join(contents)
    file.write(contents)

Introduction to Data Analysis using Python

The term Data Analysis has become a buzzword in the industry, & the two major languages used in this domain are Python & R. In this tutorial we familiarize readers with the libraries used to perform basic data analysis in Python. We will demonstrate some core functions offered by these libraries & encourage readers to explore further on their own.

Necessary Libraries

Python offers a diverse range of libraries for Data Science, Data Visualization, classical Machine Learning, etc, all of which come under the umbrella of Data Analysis. Some important libraries that we will explore ahead are:

  • NumPy: Contains functions for operating on n-dimensional arrays, along with basic linear algebra functions, Fourier transforms, & advanced random number capabilities
  • Pandas: For structured data operations & manipulations; extensively used for data munging & preparation
  • Matplotlib: For plotting a wide variety of graphs, from histograms to line plots to heat maps

There are several other libraries beyond what we cover here: SciPy, Seaborn, Scikit-learn, Statsmodels, etc. We encourage you to go through their documentation & try their functionality as well.

Work Environment Setup

We start by first creating a virtual environment where we will install all the necessary libraries to follow along this tutorial. At its core, the main purpose of Python virtual environments is to create an isolated environment for Python projects. This means that each project can have its own dependencies, regardless of what dependencies every other project has. There are no limits to the number of environments you can have since they’re just directories containing a few scripts.

$ python3 -m venv <path/to/venv>                 # Create a virtual environment (the venv module ships with Python 3.3+), eg: python3 -m venv ~/venvs/dataAnalysis
$ source <path/to/venv>/bin/activate             # Activate the virtual environment, eg: source ~/venvs/dataAnalysis/bin/activate

$ pip install numpy pandas matplotlib ipython    # Within the venv now, install the necessary libraries

# Now, fire up the interactive python console & follow along
$ ipython

Note: To come out of the venv, simply type deactivate in the terminal

Working with the Data

Once our libraries have been installed, we are all set to embark on this journey of Data Analysis. Start the interactive python console using ipython & follow along:

# First import all necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

##############################################################################################################################################################

# Most datasets are available in .csv format (Comma Separated Values). Pandas has an inbuilt function to load a *.csv file as a DataFrame for further analysis
df = pd.read_csv("https://raw.githubusercontent.com/wncc/CodeInQuarantine/master/Week_1_Python/toy_dataset.csv")
# The above link can be replaced by any *.csv file available online or locally

df.head()                      # Look at the top few rows of the dataset (default: 5 rows)

##############################################################################################################################################################

# As you may have seen, an extra column of indices has been added.
# In some cases this is useful, but for this dataset we may want to use the 'Number' column as the index. This is done by:
df.set_index('Number', inplace=True)   # The 'inplace' argument makes the change in the dataframe itself, without needing to store the result in a new dataframe
df.describe()                          # Look at summary of numerical fields
df.columns                             # Get the column headers of the dataframe
cities = df["City"].unique()           # Get the list of unique cities involved in the dataset
dims = df.shape                        # Get the dimensions of the data (dims[0] gives no. of rows while dims[1] gives no. of columns)

##############################################################################################################################################################

# Select rows & columns
col_income = df["Income"]    # Select the 'Income' column
rows = df.iloc[0]            # Select the first row of the dataset
rows = df.iloc[0:10]         # Select the first 10 rows of the dataset (the end of the slice is exclusive)

# Get the mean, median & mode of Age & Income fields
income_stats = [df['Income'].mean(), df['Income'].mode()[0], df['Income'].median()]              # Use respective functions on the column 'Income'
age_stats = [df['Age'].mean(), df['Age'].mode()[0], df['Age'].median()]                          # Use respective functions on the column 'Age'
index = ['Mean', 'Mode', 'Median']                                                               # Set index values for the new dataframe to be created
mean_mode_median = pd.DataFrame({'Income' : income_stats, 'Age' : age_stats}, index=index)       # Create a new dataframe with mean, median & mode
print(mean_mode_median)

##############################################################################################################################################################

# Filtering data based on column values
filtered = df.loc[df["Age"] < 35]                              # Select all rows where 'Age' is less than 35
print(filtered)
filtered = df.loc[(df["Age"] < 35) & (df["Income"] > 90000)]   # For conditions on multiple columns, enclose each condition within (...) and separate by an '&'
print(filtered)

##############################################################################################################################################################

# Apply transforms to the data & add new columns to a dataframe

# First, we define a function that categorizes people based on their age
def age_div(age):
    if age < 35:
        return 1
    if age < 45:
        return 2
    if age < 55:
        return 3
    else:
        return 4

df['Age_category'] = df["Age"].apply(age_div)      # We 'apply' the function to the "Age" column & store the results in a new column called "Age_category"
df.head()

##############################################################################################################################################################

# Group & aggregate data by certain categorical variables
df.groupby("City").agg("mean")          # We group the data by 'City' & calculate the mean of each numerical column per group.
# The `.agg()` function accepts many aggregations such as mean, median, count, size, var, std, describe, etc. It can also accept a list of such functions.

##############################################################################################################################################################

# Some more statistics with the data

df[["Age", "Income"]].cov()            # Find the covariance matrix between the parameters mentioned in the list, here "Age" & "Income"
df[["Age", "Income"]].corr()           # Find the correlation matrix between the parameters mentioned in the list, here "Age" & "Income"

##############################################################################################################################################################

# Simple visualizations of the data

df.hist()                                             # Create histograms of all columns with numerical data
df["Illness"].value_counts().plot.barh()              # Create a horizontal bar chart of number of datapoints grouped by 'Illness'
df.boxplot(column="Income")                           # Create a simple box plot based on "Income"
df.boxplot(column="Income", by="Age_category")        # Create box plots for "Income" for each of the unique values in "Age_category"
df.groupby('City')['Income'].agg('mean').plot.bar()   # Visualize the mean income for people grouped by their "City" as bar graph

# Create a stacked bar graph of the result of cross-tabulation between "City" & "Gender". We set 'normalize = 0' (normalize along the index) so that each row of the result sums to 1
pd.crosstab(df["City"],df["Gender"], normalize = 0).plot.bar(stacked = True)

##############################################################################################################################################################

# Advanced visualization using matplotlib

# Compare Income vs Gender
plt.figure(figsize=(16,7))                                           # Set size of the figure to be plotted
plt.xlabel('Income')                                                 # Label the X-axis
plt.ylabel('Frequency')                                              # Label the Y-axis

# Now plot the histograms for male & female incomes respectively. 'bins' indicates the number of sections that the X-axis should be divided into,
# & 'label' indicates the label of the particular plot shown in the legend
n, bins, patches = plt.hist(df[df["Gender"] == 'Male']["Income"], bins = 150, color = 'gold', label = 'Male Income')          
n, bins, patches = plt.hist(df[df["Gender"] == 'Female']["Income"], bins = 150, color = 'crimson', label = 'Female Income')
# You can play around with the number of bins & see the granularity of the histogram

plt.legend(loc='upper right')                                        # Create a legend & position it in the upper right corner
plt.title('Income Frequency')                                        # Set the title of the plot

# As an exercise, try and create a similar visualization of Income vs City

##############################################################################################################################################################

df.to_csv("saved_dataframe.csv")    # Save a dataframe as a *.csv file


Web Scraping with Python

Imagine you are looking for a job online. There are hundreds of websites and job profiles on all these websites, but you want to shortlist companies and profiles of your interest. One approach would be to manually visit all these websites, search for relevant job profiles, go through all of them and then list down the companies you are interested in. But what if a program could do all this for you and give you just a list of offers matching job profiles of your liking? Web scraping is the technique of automating this process: instead of you manually copying the data from websites, web scraping software performs the same task in a fraction of the time.

There are mainly two ways to extract data from a website:

  • Extract data by accessing and manipulating the HTML content of the website. This technique is called web scraping, web harvesting or web data extraction.
  • Most websites also provide APIs that allow you to access their data in a predefined manner (see the next section).

Web scraping can be used to compare product reviews/prices from various e-commerce sites, or to monitor social media to gather the latest trends/hashtags. You can also automate your browser to do tasks such as buying your favourite band's concert tickets as soon as they go up for sale, notifying you when your exam results are available, and much more.

This section will cover how to implement web scraping using Python.

Steps involved in web scraping

There are a number of steps involved in scraping a website. The advantage of using Python is that there are many Python libraries and modules which can be used for the different steps involved in web scraping, as well as for further manipulation of the extracted data. The easiest way to install external libraries in Python is to use pip, a package management system used to install and manage software packages written in Python.

  • Accessing the HTML content of the website of your choice. For this, we need to send an HTTP request to the URL of the website, which will return the HTML content. The Python Requests module is used for this purpose; it makes sending different HTTP requests like GET and POST very easy. To install the requests module using pip, use pip install requests
  • Now that we have access to the HTML content, we need to parse it, i.e. analyse and identify the parts of the HTML to extract the relevant data. For this purpose another Python library comes to use: Beautiful Soup, a Python package for parsing HTML and XML documents. It creates a parse tree for parsed pages that can be used to extract data from the HTML. To install it using pip, use pip install bs4 or pip install beautifulsoup4
  • With access to the HTML and Beautiful Soup functions to parse it, the important step is to observe and inspect the HTML structure of the website: figure out what data you want to extract and the structure of the HTML tags surrounding that data. Use this knowledge to write your code and extract the required data.
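The parsing step can be practised without hitting a live site by handing Beautiful Soup a hardcoded HTML snippet. The job-listing markup below is invented purely for illustration:

```python
from bs4 import BeautifulSoup

# A hypothetical fragment of a job-listings page
html = """
<div class="jobs">
  <div class="job"><h2>Data Analyst</h2><span class="company">Acme</span></div>
  <div class="job"><h2>Backend Developer</h2><span class="company">Initech</span></div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")   # build the parse tree

# Find every job card and pull out its title and company
for job in soup.find_all("div", class_="job"):
    title = job.h2.get_text()
    company = job.find("span", class_="company").get_text()
    print(f"{title} @ {company}")
```

On a real site, the html string would instead come from requests.get(url).text, and the tag/class names would come from inspecting that site's markup.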

Example - Follow this link for a complete tutorial on web scraping - starting from inspecting the website to finally writing the code to extract the data. You may choose to skip the introduction section given on that website and directly start from here

Additional Resources

  • This is an amazing tutorial to help you get started. Brownie points for doing the Practice Projects mentioned at the end. This website is your go-to for exploring the power of Python for automating various processes.
  • When websites are dynamic and require some sort of interaction (clicking, hovering, entering text) to reveal data, browser automation comes in handy. Selenium is one of the best browser automation tools available. Check out the excellent unofficial documentation on Selenium. The link in the previous point also explains the use of Selenium.

Using APIs to Leverage Third-Party Applications

An application programming interface (API) is a set of subroutine definitions, protocols, and tools for building application software. In general terms, it is a set of clearly defined methods of communication between various software components.

As we saw before, in many applications we may need to extract the data provided by a website. Most websites these days provide APIs (Application Programming Interfaces), which at their most basic act as a door or window into a software program, allowing other programs to interact with it without needing to share its entire code. The importance of APIs from a technical standpoint is that they allow the capabilities of one computer program to be used by another: they are a means by which two different programs can communicate. An API lists a set of operations that developers can use, along with a description of what they do. The developer does not necessarily need to know how these operations are implemented; rather, they make use of these APIs to build new products.

Sending API requests

When we want to receive data from an API, we need to make an HTTP request, and the API sends a response. In order to work with APIs in Python, we need tools that will make those requests. The most common Python library for making requests and working with APIs is the Requests library. To install it using pip, use pip install requests

The response received from an API is usually in JSON format. JSON (JavaScript Object Notation) is the language of APIs: a way to encode data structures that ensures they are easily readable by machines. JSON is the primary format in which data is passed back and forth to APIs, and most API servers will send their responses in JSON format. A JSON object can be thought of as a combination of Python dictionaries, lists, strings and numbers, represented as a string.

Python has great JSON support with the json package. The json package is part of the standard library, so we don’t have to install anything to use it. We can both convert lists and dictionaries to JSON, and convert strings to lists and dictionaries.

The json library has two main functions:

  • json.dumps() — Takes in a Python object, and converts (dumps) it to a string.
  • json.loads() — Takes a JSON string, and converts (loads) it to a Python object.
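A quick round trip through these two functions; the article dictionary is a made-up example:

```python
import json

# A made-up record to encode
article = {"title": "Intro to APIs", "tags": ["python", "json"], "words": 1200}

encoded = json.dumps(article)      # Python dict -> JSON string
print(encoded)

decoded = json.loads(encoded)      # JSON string -> Python dict
print(decoded["tags"])             # ['python', 'json']
print(decoded == article)          # True: a clean round trip
```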

We use these functions to manipulate the data received in the responses to our API requests (or API calls, as they are also known). Let us now see an example of how to send API requests using Python.

Example Program

News API is a simple HTTP API for searching and retrieving live news articles from all over the web. Using it, one can fetch the top stories running on any news website or search for top news on a specific topic (or keyword). To use this API we need an API key, which acts as authentication. To generate your key, register here. Use the generated key in place of 'api_key' in the following code.

Install the Requests library used for sending requests using pip install requests

NewsAPI offers three endpoints:

  • '/v2/top-headlines', for the most important headlines per country and category
  • '/v2/everything', for all the news articles from over 30,000 sources
  • '/v2/sources', for information on the various sources

In this example we want to find all articles published today that mention Apple and sort them by most popular source first.

import requests

# This is a dummy API key. Replace it with the API key that you generated
api_key = '***'

# Define the endpoint
url = 'https://newsapi.org/v2/everything?'

# Specify the query and number of returns
parameters = {
    'q': 'Apple',  # query phrase, i.e. the word that you are looking for in the articles
    'from' : '2020-04-05',
    'sortBy' : 'popularity',
    'apiKey' : api_key # your own API key
}

response = requests.get(url, params=parameters)

# We have thus made the request. The second argument to the get function holds the parameters that go with the URL.
# The same can be achieved by sending a get request to the API in this way:
#url1 = 'http://newsapi.org/v2/everything?q=Apple&from=2020-04-05&sortBy=popularity&apiKey=api_key'
#response = requests.get(url1)


# Convert the response to JSON format
response_json = response.json()

# Print the response
print(response_json)

The response has been converted to JSON format and we get the following output on print.

{
"status": "ok",
"totalResults": 657,
"articles": [
	{
	"source": {
		"id": "engadget",
		"name": "Engadget"
		},
	"author": "Jon Fingas",
	"title": "ABC News Live starts streaming on Android TV and Fire TV",
	"description": "ABC is expanding access to its live news app right at a moment when as-it-happens updates are particularly vital. The broadcaster has released versions of its ABC News Live app for Android TV sets and Fire TV, giving many more people access to its mix of anch…",
	"url": "https://www.engadget.com/2020-04-05-abc-news-live-android-tv-fire-tv.html",
	"urlToImage": "https://o.aolcdn.com/images/dims?resize=1200%2C630&crop=1200%2C630%2C0%2C0&quality=80&image_uri=https%3A%2F%2Fs.yimg.com%2Fuu%2Fapi%2Fres%2F1.2%2F9KgcMgQdnCpICUtvvS4Hfw--%7EB%2Fdz0xNjAwO2g9OTAwO2FwcGlkPXl0YWNoeW9u%2Fhttps%3A%2F%2Fo.aolcdn.com%2Fimages%2Fdims%3Fcrop%3D1600%252C900%252C0%252C0%26quality%3D85%26format%3Djpg%26resize%3D1600%252C900%26image_uri%3Dhttps%3A%2F%2Fs.yimg.com%2Fos%2Fcreatr-uploaded-images%2F2020-04%2Fb4f52780-76cd-11ea-9b57-12b820a3ccdf%26client%3Da1acac3e1b3290917d92%26signature%3Dd6bce718af718cbf169d753b644009a9f3128812&client=amp-blogside-v2&signature=5467bab10a32f1370f720b47d3fafd3c4dad0ece",
	"publishedAt": "2020-04-05T09:41:00Z",
	"content": "Accordingly, you can expect the redesign to reach the Apple TV and Roku apps in mid-April. And if you prefer local news, eight ABC-owned TV stations in major markets will have native apps for Android TV, Apple TV, Fire TV and Roku devices. Los Angeles' KABC-T… [+219 chars]"
	},
	{
        "source": {
                "id": null,
                "name": "Applesfera.com"
                },
        "author": "Eduardo Archanco",
        "title": "El papel de una llamada telefónica en la adquisición de NeXT que cambiaría para siempre la historia de Apple",
        "description": "Esta semana Apple cumplía 44 años. Para una empresa tecnológica, esto ya son bastantes. Motivo más que suficiente para haber pasado por todo tipo de situaciones peculiares. Momentos en los que el destino de la compañía se sostenía sobre el filo de una navaja.…",
        "url": "https://www.applesfera.com/general/papel-llamada-telefonica-adquisicion-next-que-cambiaria-para-siempre-historia-apple",
        "urlToImage": "https://i.blogs.es/0b31e8/cgkzyx1sjtd5_1532447784613.png/840_560.jpeg",
        "publishedAt": "2020-04-05T08:00:55Z",
        "content": "Esta semana Apple cumplía 44 años. Para una empresa tecnológica, esto ya son bastantes. Motivo más que suficiente para haber pasado por todo tipo de situaciones peculiares. Momentos en los que el destino de la compañía se sostenía sobre el filo de una navaja.… [+4141 chars]"
       }
  ]
}

We have shown only two articles from the response as an example; in reality we get a large number of articles containing the word "Apple", sorted by popularity. Now that we have the response, i.e. the list of articles, we can use this data. Observe that the response is like a dictionary in which the articles key holds a list of dictionaries containing the details of each article.

for i in response_json['articles']:
    print(i['title'])

# Output-
# ABC News Live starts streaming on Android TV and Fire TV
# El papel de una llamada telefónica en la adquisición de NeXT que cambiaría para siempre la historia de Apple

This was just one of the many applications possible with the NewsAPI. Explore their official documentation for information on how to use the other API endpoints.

Additional Resources

  • This is a complete tutorial which covers all the basics of APIs and uses an example API that retrieves data about the International Space Station (ISS).
  • Use this to read more about APIs, why they are so popular, and the various applications of using APIs.
  • Another tutorial handling two example APIs for beginners