Tags: python, performance, python-3.x, web-scraping, beautifulsoup

Scraping my CS teacher's website, then emailing me when the site is updated


11 votes

Question


I've been working on creating an individual final project for my python CS class that checks my teacher's website on a daily basis and determines if he's changed any of the web pages on his website since the last time the program ran or not.

I would really love some suggestions for improving my code, especially since it works now! I have added some functionality to it, so that it runs via a cron job on a cloud server and sends out an email when a page changes!

    import requests  ## downloads the html
    from bs4 import BeautifulSoup  ## parses the html
    import filecmp  ## compares files
    import os, sys  ## used for renaming files
    import difflib  ## used to see differences in link files
    import smtplib  ## used for sending email
    from email.mime.multipart import MIMEMultipart  ## used for areas of email such as subject, toaddr, fromaddr, etc.
    from email.mime.text import MIMEText  ## used for areas of email such as body, etc.

    root_url = "https://sites.google.com"
    index_url = root_url + "/site/csc110winter2015/home"

    def get_site_links():
        '''
        Gets links from the website's list items' HTML elements
        '''
        response = requests.get(index_url)
        soup = BeautifulSoup(response.text)
        links = [a.attrs.get('href') for a in soup.select('li.topLevel a[href^=/site/csc110winter2015/]')]
        return links

    def try_read_links_file():
        '''
        Tries to read the links.txt file; if links.txt is found, then rename links.txt to previous_links.txt
        '''
        try:
            os.rename("links.txt", "previous_links.txt")
            write_links_file()
        except (OSError, IOError):
            print("No links.txt file exists; creating one now.")
            write_links_file()
            try_read_links_file()

    def write_links_file():
        '''
        Writes the links.txt file from the website's links
        '''
        links = get_site_links()
        with open("links.txt", mode='wt', encoding='utf-8') as out_file:
            out_file.write('\n'.join(links))

    def check_links():
        '''
        Checks to see if links have changed since the last time the program was run.
        '''
        if filecmp.cmp("links.txt", "previous_links.txt") == True:
            ## If link data hasn't changed, do nothing
            pass
        else:
            ## Checks to see what changes, if any, have been made to the links, and outputs them to the console
            d = difflib.Differ()
            previous_links = open("previous_links.txt").readlines()
            links = open("links.txt").readlines()
            diff = d.compare(previous_links, links)
            for difference in diff:
                if '- ' in difference:
                    print(difference.strip() + "\nWas a removed page from the CSC110 website since the last time checked.\n")
                elif '+ ' in difference:
                    print(difference.strip() + "\nWas an added page to the CSC110 website since the last time checked.\n")

    def try_read_pages_files():
        '''
        Tries to read the pages .txt files; if pages .txt are found, then rename the pages .txt files to previous_ pages .txt
        '''
        with open("links.txt", mode='r', encoding='utf-8') as pages:
            for page in pages:
                try:
                    os.rename(page.replace("/", ".") + ".txt", "previous_" + page.replace("/", ".") + ".txt")
                except (OSError, IOError):
                    print("No pages .txt file exists; creating them now.")
                    write_pages_files()
                    try_read_pages_files()
                    ## Note that the call to write_pages_files() is outside the loop
            write_pages_files()

    def write_pages_files():
        '''
        Writes the various page files from the website's links
        '''
        with open("links.txt") as links:
            for page in links:
                site_page = requests.get(root_url + page.strip())
                soup = BeautifulSoup(site_page.text)
                souped_up = soup.find_all('div', class_="sites-attachments-row")
                with open(page.replace("/", ".") + ".txt", mode='wt', encoding='utf-8') as out_file:
                    out_file.write(str(souped_up))

    def check_pages():
        '''
        Checks to see if pages have changed since the last time the program was run.
        '''
        with open("links.txt") as links:
            changed_pages = []
            for page in links:
                page = page.replace("/", ".")
                if filecmp.cmp("previous_" + page + ".txt", page + ".txt") == True:
                    ## If page data hasn't changed, do nothing
                    pass
                else:
                    ## If page data has changed, then write the changed page data to a list
                    if page == '.site.csc110winter2015.system.app.pages.sitemap.hierarchy':
                        pass
                    else:
                        changed_pages.append(root_url + page.replace(".", "/").strip())
            return changed_pages

    def send_mail():
        server = smtplib.SMTP('smtp.gmail.com', 587)
        ## Say ehlo to my lil' friend!
        server.ehlo()
        ## Start Transport Layer Security for Gmail
        server.starttls()
        server.ehlo()
        if check_pages():
            ## Setting up the email
            server.login("Sending Email", "Password")
            fromaddr = "Sending Email"
            toaddr = "Receiving Email"
            msg = MIMEMultipart()
            msg['From'] = fromaddr
            msg['To'] = toaddr
            msg['Subject'] = "Incoming CSC110 website changes!"
            # Can't return list and concatenate string; implemented here for check_pages()
            changed_pages = "The following page(s) have been updated:\n\n" + str(check_pages())
            msg.attach(MIMEText(changed_pages, 'plain'))
            text = msg.as_string()
            server.sendmail(fromaddr, toaddr, text)

    def main():
        try_read_links_file()
        try_read_pages_files()
        check_links()
        check_pages()
        send_mail()

    main()

Answers

5 votes

Accepted answer
 


Bugs

The e-mail lists the pages whose contents have changed, but not the pages that were added or removed. Additions and deletions are merely printed to sys.stdout.

The files where the page contents are saved have filenames of the form previous_.site.csc110winter2015.somethingsomething␤.txt. The newline character preceding .txt is weird.
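
For illustration, the stray newline comes from iterating over links.txt line by line: each line still ends in "\n", which then lands in the middle of the generated filename. A minimal sketch of the fix (with a hypothetical example value) is to strip the line before mangling it into a path:

    page = "/site/csc110winter2015/home\n"                    # as read from links.txt
    bad_name = page.replace("/", ".") + ".txt"                # '.site.csc110winter2015.home\n.txt'
    good_name = page.strip().replace("/", ".") + ".txt"       # '.site.csc110winter2015.home.txt'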

If the links are merely reordered, you'll see it reported as a removal and addition.
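
If only the set of links matters and not their order, one hedged alternative to the line-by-line diff is to compare sets; this is a sketch, not the original code:

    previous_links = set(open("previous_links.txt").read().split())
    links = set(open("links.txt").read().split())

    removed = previous_links - links   # pages that disappeared since the last run
    added = links - previous_links     # pages that appeared since the last run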

If try_read_links_file() is unable to create links.txt (due to directory permissions, for example), it will recurse infinitely.
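
One way to avoid the unbounded recursion is to test for the file explicitly and write it exactly once. This is a rough sketch (the name rotate_links_file is made up) that reuses the original write_links_file():

    import os

    def rotate_links_file():
        ## Rename last run's links.txt (if any), then write a fresh one exactly once.
        ## If the write fails, the exception propagates instead of recursing forever.
        if os.path.exists("links.txt"):
            os.rename("links.txt", "previous_links.txt")
        else:
            print("No links.txt file exists; creating one now.")
        write_links_file()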

Inefficiencies

You call check_pages() up to three times (a sketch after this list shows calling it once and reusing the result):

  • Once in main() for no apparent reason
  • Once in send_mail() in an apparent attempt to check whether any changes were detected. Bizarrely, this check is done after the SMTP handshake: why bother connecting to the SMTP server at all if you have nothing to send?
  • If you do decide to send mail, then you call check_pages() once more to incorporate the list of modified pages in the message body.
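
A sketch of the single-call version of main(), passing the result to send_mail() instead of recomputing it (send_mail() would have to be changed to accept the list as a parameter):

    def main():
        try_read_links_file()
        try_read_pages_files()
        check_links()
        changed_pages = check_pages()   # call it exactly once and keep the result
        if changed_pages:               # only connect to SMTP when there is something to send
            send_mail(changed_pages)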

General critique

The technique you have employed is very file-centric. The five functions that you call from main() communicate with each other not by passing parameters and returning values, nor via global variables, but through the filesystem! This style of programming drastically complicates the code. Every function ends up being concerned with reading files, stripping newlines (if you remember to do so), mangling paths, and saving the results.

try_read_pages_files() is misleadingly named, as it actually also writes the files. Similarly, try_read_links_file() has side-effects that I wouldn't expect.

If you just want to detect whether content has changed, you need not save the entire website's contents. Storing a cryptographic checksum of each page will suffice. With that insight, you can summarize the entire website in a single file, one line per page.
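
The fingerprinting itself is a single hashlib call; the suggested solution below does exactly this, but as a standalone sketch (page_html is a placeholder for the fetched page content):

    from hashlib import sha256

    fingerprint = sha256(page_html.encode()).hexdigest()
    # site.txt then needs only one line per page, e.g.:  <fingerprint> <url>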

It would be nicer to pass the whole initial URL to the program, rather than breaking it up into a root_url and an index_url. Also, appending the href values to the root_url makes a nasty assumption that all of the hrefs are absolute URLs. Use urllib.parse.urljoin() to resolve URLs instead.
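
For example, urljoin() resolves root-relative, relative, and absolute hrefs correctly against the page they were found on (illustrative values):

    from urllib.parse import urljoin

    base = "https://sites.google.com/site/csc110winter2015/home"
    urljoin(base, "/site/csc110winter2015/syllabus")  # 'https://sites.google.com/site/csc110winter2015/syllabus'
    urljoin(base, "syllabus")                         # 'https://sites.google.com/site/csc110winter2015/syllabus'
    urljoin(base, "https://example.com/elsewhere")    # 'https://example.com/elsewhere'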

In send_mail(), first compose the message, then send it. Avoid interleaving the two operations. You don't need multi-part MIME if all you are sending is a plain-text message.

In the suggested solution below, look at main() to see how functions should interact with each other.

    from base64 import b64encode, b64decode
    from bs4 import BeautifulSoup
    from email.mime.text import MIMEText
    from hashlib import sha256
    from smtplib import SMTP
    from urllib.parse import urljoin
    from urllib.request import urlopen

    def summarize_site(index_url):
        '''
        Return a dict that maps the URL to the SHA-256 sum of its page contents
        for each link in the index_url.
        '''
        summary = {}
        with urlopen(index_url) as index_req:
            soup = BeautifulSoup(index_req.read())
            links = [urljoin(index_url, a.attrs.get('href'))
                     for a in soup.select('li.topLevel a[href^=/site/csc110winter2015/]')]
            for page in links:
                # Ignore the sitemap page
                if page == '/site/csc110winter2015/system/app/pages/sitemap/hierarchy':
                    continue
                with urlopen(page) as page_req:
                    fingerprint = sha256()
                    soup = BeautifulSoup(page_req.read())
                    for div in soup.find_all('div', class_='sites-attachments-row'):
                        fingerprint.update(div.encode())
                    summary[page] = fingerprint.digest()
        return summary

    def save_site_summary(filename, summary):
        with open(filename, 'wt', encoding='utf-8') as f:
            for path, fingerprint in summary.items():
                f.write("{} {}\n".format(b64encode(fingerprint).decode(), path))

    def load_site_summary(filename):
        summary = {}
        with open(filename, 'rt', encoding='utf-8') as f:
            for line in f:
                fingerprint, path = line.rstrip().split(' ', 1)
                summary[path] = b64decode(fingerprint)
        return summary

    def diff(old, new):
        return {
            'added': new.keys() - old.keys(),
            'removed': old.keys() - new.keys(),
            'modified': [page for page in set(new.keys()).intersection(old.keys())
                         if old[page] != new[page]],
        }

    def describe_diff(diff):
        desc = []
        for change in ('added', 'removed', 'modified'):
            if not diff[change]:
                continue
            desc.append('The following page(s) have been {}:\n{}'.format(
                change,
                '\n'.join(' ' + path for path in sorted(diff[change]))
            ))
        return '\n\n'.join(desc)

    def send_mail(body):
        ## Compose the email
        fromaddr = "Sending Email"
        toaddr = "Receiving Email"
        msg = MIMEText(body, 'plain')
        msg['From'] = fromaddr
        msg['To'] = toaddr
        msg['Subject'] = "Incoming CSC110 website changes!"

        ## Send it
        server = SMTP('smtp.gmail.com', 587)
        server.ehlo()
        server.starttls()
        server.ehlo()
        server.login("Sending Email", "Password")
        server.sendmail(fromaddr, toaddr, msg.as_string())
        server.quit()

    def main(index_url, filename):
        summary = summarize_site(index_url)
        try:
            prev_summary = load_site_summary(filename)
            if prev_summary:
                diff_description = describe_diff(diff(prev_summary, summary))
                if diff_description:
                    print(diff_description)
                    send_mail(diff_description)
        except FileNotFoundError:
            pass
        save_site_summary(filename, summary)

    main(index_url='https://sites.google.com/site/csc110winter2015/home',
         filename='site.txt')
 
 
 
 




