Category
Sub-field: Data Analysis
Status: Unresolved
Question
Written 20.06.15 00:27 · 123 views
import requests
from bs4 import BeautifulSoup

req = requests.get('https://www.donga.com/news/Entertainment/List?p=1&prod=news&ymd=&m=')
soup = BeautifulSoup(req.text, 'html.parser')
for i in soup.select("#contents > div.page > a"):
    req2 = requests.get("http://www.donga.com/news/List/Enter/" + i['href'])
    soup2 = BeautifulSoup(req2.text, 'html.parser')
    for i in soup2.find_all("span", class_="tit"):
        print(i.text)
C:\Users\karma\PycharmProjects\pychamwebcrawling\venv\Scripts\python.exe "C:/Users/karma/PycharmProjects/pychamwebcrawling/01_web_crawling_naver_test/url 링크 찾아내서 크롤링.py"

Process finished with exit code 0
What am I doing wrong??? Nothing gets printed.
개복치개발자
Knowledge sharer · 2020.06.15
import requests
from bs4 import BeautifulSoup

req = requests.get('https://www.donga.com/news/Entertainment/List?p=1&prod=news&ymd=&m=')
soup = BeautifulSoup(req.text, 'html.parser')
# The container id on the page is "content", not "contents"
print(soup.select("#content > div.page > a"))
for i in soup.select("#content > div.page > a"):
    print("http://www.donga.com/news/List/Enter/" + i['href'])
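To see why the original script printed nothing, here is a minimal offline sketch with made-up HTML that mimics the structure of the Donga list page (the real markup may differ): when the CSS selector's id does not match, `select()` returns an empty list, so the `for` loop body simply never runs and the script still exits with code 0.

```python
from bs4 import BeautifulSoup

# Assumed markup: a pagination block inside a container with id="content",
# modeled loosely on the Donga list page.
html = """
<div id="content">
  <div class="page">
    <a href="?p=2">2</a>
    <a href="?p=3">3</a>
  </div>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# Misspelled id: no element has id="contents", so select() returns [].
print(soup.select("#contents > div.page > a"))  # []

# Correct id: the pagination links are found.
print([a["href"] for a in soup.select("#content > div.page > a")])  # ['?p=2', '?p=3']
```

This is why an empty-result crawl fails silently: a `print(soup.select(...))` right after parsing, as in the answer above, is a quick way to check whether the selector matched anything at all.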
Answers: 4