I have this exercise: identify which fields must be imported from the extracted URLs, what type of data must be saved for each of them, and filter the imported data by the appropriate categories. Write code that performs such an import. This is my code, but it doesn't print anything. I'm a beginner.

What I have tried:

from bs4 import BeautifulSoup
import requests

url = ""
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")

# Get all the news items
noticias = soup.find_all("div", class_="articulo-contenido")

# Loop over each news item and pull out the fields of interest
for noticia in noticias:
    titulo = noticia.find("h2").text.strip()
    fecha = noticia.find("div", class_="fecha").text.strip()
    categoria = noticia.find("span", class_="categoria").text.strip()
    enlace = noticia.find("a")["href"]

    # Filter the news items that meet a certain condition
    if enlace.startswith(""):
        print(titulo, fecha, categoria, enlace)
Richard MacCutchan 11-Apr-23 3:28am    
It is not clear what data you are trying to access. How are you supposed to answer the query: "Identify which are the fields that must be imported from the extracted urls"?
Alis Avilez 13-Apr-23 18:26pm    
The code I used is from the previous exercise, for the extracted URLs. I will post it in the response area, because it cannot be seen in full here.
Graeme_Grant 13-Apr-23 18:40pm    
Check your class names; that class name is not found on that page. The best way to look is with the browser's Developer Tools.

I can see (class names) per post:

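One way to act on Graeme_Grant's advice in code is to enumerate the class names BeautifulSoup actually sees on the page, and to guard each `find()` so a wrong selector fails loudly instead of printing nothing. The markup below is a made-up stand-in for the real page (the actual class names must still be checked with the browser's Developer Tools), so treat this as a sketch, not the fix for your specific site:

```python
from bs4 import BeautifulSoup

# Hypothetical sample markup standing in for the real page.
html = """
<div class="articulo-contenido">
  <h2>Example headline</h2>
  <div class="fecha">2023-04-13</div>
  <span class="categoria">Tech</span>
  <a href="https://example.com/news/1">Read more</a>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# List every class attribute present, to verify your selectors
# match what is really in the HTML.
classes = sorted({c for tag in soup.find_all(True)
                    for c in tag.get("class", [])})
print(classes)  # shows which class names exist on the page

# Guard each find(): if a class name is wrong, find() returns None,
# and calling .text on None raises AttributeError. Checking first
# makes the failure visible instead of silent.
for noticia in soup.find_all("div", class_="articulo-contenido"):
    titulo = noticia.find("h2")
    fecha = noticia.find("div", class_="fecha")
    if titulo is None or fecha is None:
        print("selector did not match - check class names")
        continue
    print(titulo.text.strip(), fecha.text.strip())
```

If `find_all("div", class_="articulo-contenido")` returns an empty list on the real page, the loop body never runs, which matches the "doesn't print anything" symptom: the class name in the code doesn't exist in that page's HTML.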
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
