Working with requests library in Python

First of all, what is a request?

In the web world, a request is an instruction or message that a client sends to a server to retrieve information or perform an action.

When you open a web page or an application on your computer or smartphone, your device sends a request to a server that hosts the web page or the application. The request typically includes information about what the client is requesting, such as a specific page, an image, or data from a database.

The server then processes the request and sends a response back to the client. The response can include the requested information, or an error message if the request could not be fulfilled.

Requests and responses are the basic building blocks of web communication and are essential for the functioning of websites, web applications, and other online services.

History

The concept of requesting and receiving data over the Internet has been around since the early days of computer networking. However, the first widely used protocol for web requests and responses, the Hypertext Transfer Protocol (HTTP), was proposed in 1989 by Tim Berners-Lee, a computer scientist at CERN (the European Organization for Nuclear Research).

HTTP was originally designed to facilitate the transfer of static web pages, but it has since evolved to support dynamic web content, streaming media, and other advanced features.

Over time, many other protocols and technologies have been developed to complement HTTP and enable more sophisticated web applications, such as JavaScript, XML, REST, and WebSocket. The evolution of web technologies has enabled the development of powerful and interactive web applications that have transformed many aspects of our daily lives.

Tools

There are various tools available that you can use to make requests in the web world. Here are some common tools:

Web browsers: Web browsers such as Google Chrome, Firefox, and Safari allow you to make requests by typing in a URL in the address bar or clicking on a link. The browser sends a request to the server, which then sends back the requested information that is displayed in the browser.

cURL: cURL is a command-line tool that allows you to make requests to servers using various protocols including HTTP, FTP, SMTP, and more. cURL can be used to test APIs, download files, and perform other web-related tasks.

Postman: Postman is a popular API development tool that allows you to create and test HTTP requests, including GET, POST, PUT, DELETE, and more. With Postman, you can also view the response data, set headers, and perform other advanced tasks.

Insomnia: Insomnia is another API development tool that allows you to make HTTP requests and view responses in real-time. Insomnia supports various authentication methods, environments, and plugins to customize your workflow.

These are just a few examples of the tools available for making requests in the web world. Depending on your needs, there may be other tools that are better suited to your specific use case.

cURL

Here is an example of how to use cURL to make an HTTP GET request to retrieve information from a website:

curl https://www.example.com

This command sends an HTTP GET request to the URL https://www.example.com, and displays the response from the server in the terminal.

You can also add additional options to the cURL command to customize the request. For example, you can use the -H option to add headers to the request:

curl -H "Accept: application/json" https://api.example.com/data

In this example, the -H option sets the Accept header to "application/json", indicating that the client expects to receive JSON-formatted data in the response. The URL https://api.example.com/data is the endpoint of an API that returns JSON data.

cURL is a powerful tool with many options and features. You can refer to the cURL documentation for more information and examples of how to use it to make various types of requests.

Python Requests Library

Requests is a popular Python library for making HTTP requests. It simplifies the process of making HTTP requests and handling the response data by providing a user-friendly API that abstracts away much of the low-level details of working with HTTP.

With Requests, you can make various types of requests including GET, POST, PUT, DELETE, and more, and easily handle the response data in various formats such as JSON, XML, and text.
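For example, here is a minimal sketch of a GET request made with Requests against the placeholder endpoint from the cURL example above (https://api.example.com/data is not a real API, and the exact response will depend on the server you actually call):

import requests

# a minimal sketch: fetch JSON from a placeholder endpoint
# (https://api.example.com/data is an assumed URL, not a real API)
response = requests.get(
    "https://api.example.com/data",
    headers={"Accept": "application/json"},
    timeout=10,
)

print(response.status_code)                   # e.g. 200 on success
print(response.headers.get("Content-Type"))   # e.g. application/json
data = response.json()                        # parse the JSON body into Python objects
print(data)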

The Requests library is useful for a variety of tasks related to making HTTP requests in Python. Here are some common use cases where Requests can be helpful:

  1. Web scraping: Requests can be used to retrieve HTML content from websites, which can then be parsed and analyzed using other Python libraries like Beautiful Soup or Scrapy.

  2. API development: Requests can be used to create and test HTTP requests to web APIs, making it easier to develop and debug API clients (see the sketch after this list).


  3. Data extraction: Requests can be used to retrieve data from web APIs, RSS feeds, or other web services and transform it into a format that is more useful for your application.

  4. Testing: Requests can be used to automate tests for web applications by simulating user interactions and verifying the responses.

  5. Debugging: Requests can be used to debug web applications by inspecting the headers and content of requests and responses.
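
As a sketch of the API development use case (item 2 above), here is how a JSON payload might be posted to a web API with Requests; the endpoint https://api.example.com/items and the payload fields are hypothetical and used only for illustration:

import requests

# a minimal sketch of posting JSON to a web API
# (https://api.example.com/items is a hypothetical endpoint)
payload = {"name": "example item", "price": 9.99}
response = requests.post(
    "https://api.example.com/items",
    json=payload,    # serializes the dict and sets Content-Type: application/json
    timeout=10,
)

print(response.status_code)
print(response.json())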

Overall, Requests is a powerful and flexible library that can be used for many web-related tasks in Python. Its simple API and extensive documentation make it a popular choice among developers for working with HTTP requests.

Let's use it!

To create a virtual environment (venv) in Python, you can follow these steps:

  1. Open a command prompt or terminal window.

  2. Navigate to the directory where you want to create the virtual environment.

  3. Run the following command to create a new virtual environment:

    On Windows:

python -m venv myenv

On Linux/MacOS:

python3 -m venv myenv

  4. Here, myenv is the name of your virtual environment. You can use any name you like.

  5. Wait for the virtual environment to be created. This may take a few moments.

Once the virtual environment is created, you can activate it by running the following command:

On Windows:

myenv\Scripts\activate

On Linux/MacOS:

source myenv/bin/activate

After activating the virtual environment, you can install any packages you need using pip, without affecting the packages installed globally on your system. When you're done working with the virtual environment, you can deactivate it by running the command deactivate.

To install requests and BeautifulSoup in your virtual environment, run the following command:

pip install requests beautifulsoup4

This will download and install both packages and their dependencies in your virtual environment.
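
To confirm that the installation worked, you can try importing both packages; this is just a quick sanity check:

# quick sanity check that both packages are importable
import requests
import bs4

print(requests.__version__)
print(bs4.__version__)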

Now create a main.py file and write a simple application that scrapes websites.

import sys
import requests
from bs4 import BeautifulSoup


def read_arguments(args):
    if len(args) == 4:
        return args[1], args[2], args[3]
    elif len(args) == 3:
        return args[1], args[2], None
    elif len(args) == 2:
        return args[1], None, None
    else:
        return None, None, None


def get_website_contents(site_address):
    site = requests.get(url=site_address)
    try:
        site.raise_for_status()
    except Exception as exception:
        print("Error encountered: %s" % exception)
        return None
    return site.content


def scrape_between_tags(content, tag):
    soup = BeautifulSoup(content, 'html.parser')
    result = soup.find('div', attrs={'class': tag})
    try:
        result_text = result.text.strip()
        return result_text
    except AttributeError:
        print('result not found')


def write_content_to_file(content, filename):
    content_byte = bytearray(content, 'utf-8')
    try:
        with open(filename, 'wb') as file:
            file.write(content_byte)
    except IOError as exception:
        print("Couldn't save the file. Encountered an error: %s" % exception)


if __name__ == '__main__':
    site_url, attr_tag, filename = read_arguments(sys.argv)
    if site_url is None:
        print('pass a url to terminal: python main.py url')
    elif attr_tag is None:
        print('pass a class tag to scrape it')
    else:
        raw_content = get_website_contents(site_url)
        content = scrape_between_tags(raw_content, attr_tag)
        if content is None:
            print('content not found in website')
        else:
            if filename is not None:
                write_content_to_file(content, filename)
            else:
                while True:
                    user_choice = input(
                        "Give a name where the contents of the website will be saved, or (q)uit: "
                    ).strip()

                    if user_choice in ('Q', 'q'):
                        break
                    elif user_choice == '':
                        print('You have to insert a filename')
                    else:
                        write_content_to_file(content, user_choice)
                        break

This is a Python script that scrapes content from a website and saves it to a file. Here's how it works step by step:

  1. The script imports the necessary modules: sys for reading command-line arguments, requests for making HTTP requests to the website, and BeautifulSoup for parsing the HTML content.

  2. The read_arguments() function takes in the command-line arguments passed to the script and returns three values: the URL of the website to scrape, the HTML class tag to search for in the content, and the filename to save the content to.

  3. The get_website_contents() function takes in the website URL and uses the requests module to make a GET request to the website. If the request is successful, the function returns the raw HTML content of the website. If there is an error, the function prints an error message and returns None.

  4. The scrape_between_tags() function takes in the raw HTML content of the website and the class tag to search for. It uses the BeautifulSoup module to parse the HTML and find the first element with the specified class. If the element is found, the function returns the text content of the element with leading and trailing whitespace removed. If the element is not found, the function prints a message and returns None.

  5. The write_content_to_file() function takes in the content to be saved and the filename to save it to. It converts the content to a byte array and writes it to the file using the with statement. If there is an error, the function prints an error message.

  6. The main block of the script checks if the __name__ variable is equal to '__main__'. This is used to ensure that the code in the main block is only executed when the script is run directly and not when it is imported as a module.

  7. The main block calls the read_arguments() function to get the URL, class tag, and filename from the command-line arguments.

  8. The main block checks that the URL and class tag were provided. If either of them is missing, it prints a message and the script ends without scraping.

  9. The main block calls the get_website_contents() function to get the raw HTML content of the website.

  10. The main block calls the scrape_between_tags() function to extract the content between the specified class tags.

  11. The main block checks if the content is None. If it is, it prints a message and exits the script.

  12. The main block checks if the filename is not None. If it is not None, it calls the write_content_to_file() function to save the content to the specified file.

  13. If the filename is None, the script enters a loop that prompts the user for a filename. If the user enters a valid filename, the script calls the write_content_to_file() function to save the content and exits the loop. If the user enters a blank string, the prompt is repeated. If the user enters 'q' or 'Q', the loop exits and the script ends.

Let's test it:
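
Assuming the target page contains a div with the class article-body (a hypothetical class name used only for illustration), a run could look like this:

python main.py https://www.example.com article-body output.txt

If the class is found, its text is written to output.txt; if you omit the filename argument, the script will prompt you for one interactively.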

Thank you!