KoD 0.5
KoD 0.5 - rewrote many channels to follow changes in KOD's own structure - other nice extras
This commit is contained in:
228 .github/ISSUE_TEMPLATE/test-canale.md (vendored, normal file)
@@ -0,0 +1,228 @@
---
name: Test Canale
about: Page for testing a channel
title: ''
labels: Test Canale
assignees: ''

---
For each test, keep only the line matching the outcome and delete the others; add information where needed. Where possible, specify the type of problem you run into in that test.

If you have suggestions/advice/doubts about the tests... propose them and/or ask!

***
Test No. 1: Channel List

What you need: the .json file

1. Check that the channel appears in the sections listed under the "categories" key of the .json file.

- [ ] All
- [ ] Some - list the sections where the channel is missing
- [ ] None - the channel entry is missing from the list. In this case you cannot continue the test.

2. Channel icons

- [ ] Present
- [ ] Not present

***
Test No. 2: Configure Channel

1. The "Configure Channel" entry is present

- [ ] Yes
- [ ] No

2. Entries available in Configure Channel

a. Search extra info (Default: On)

- [ ] Yes
- [ ] No

b. Include in What's New (Default: On)

- [ ] Yes
- [ ] No

c. Include in What's New - Italian (Default: On)

- [ ] Yes
- [ ] No

d. Include in global search (Default: On)

- [ ] Yes
- [ ] No

e. Check whether links exist (Default: On)

- [ ] Yes
- [ ] No

f. Number of links to check (Default: 10)

- [ ] Yes
- [ ] No

g. Show links in language (Default: Do not filter)

- [ ] Yes
- [ ] No

***
Test No. 3: Menu entries on the channel page

1. Autoplay configuration

- [ ] Yes
- [ ] No

2. Channel configuration

- [ ] Yes
- [ ] No

***
Test No. 4: Site vs. channel page comparison

What you need: the .py file; consult def mainlist()

Reminder:
the mainlist structure is:

( 'Menu entry 1', ['/url/', etc, etc])
( 'Menu entry 2', ['', etc, etc])

where url is an extra string appended to the site's main URL; if url is '', the entry corresponds to the site's main address.
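The reminder above can be sketched in a few lines of Python. The menu labels, actions and host below are made-up examples for illustration, not taken from any real channel file:

```python
# Hypothetical channel menu, shaped like the mainlist structure above.
host = 'https://example-site.tld'

film = [
    ('Menu entry 1', ['/url/', 'peliculas', 'pellicola']),
    ('Menu entry 2', ['', 'peliculas', 'pellicola']),  # '' -> main site address
]

def entry_url(entry):
    """Resolve a menu entry to the page it points at."""
    fragment = entry[1][0]
    # An empty fragment means the entry corresponds to the site's main URL.
    return host + fragment if fragment else host

print([entry_url(e) for e in film])
# -> ['https://example-site.tld/url/', 'https://example-site.tld']
```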
This test compares the titles you find by opening the channel's menu entries with what you see on the corresponding page of the site.

- [Menu entry with problems - type of problem] (copy for every entry that has no match)
Type of problem = titles are missing, titles are wrong, titles are paired with the wrong posters, or other


The following tests are split depending on whether the channel carries films, TV series or anime.
Delete the sections that do not apply to the channel. Check them against the "categories" key of the .json file.
**FILM Section

Tests to run while on the titles page. For each title, verify that the context menu contains the entries below.

1. Add film to library

- [ ] Yes
- [ ] No

Add 2-3 titles to the library. We will check the library later.
- [Added correctly]
- [List any problems] (copy-paste for every title you had the problem with)

2. Download film

- [ ] Yes
- [ ] No

3. Pagination (click "Next" and check the 2nd page the same way you checked the 1st)

- [Ok]
- [X - describe the problem]

4. Search or Search film...

Search for a random title in KOD and for the same title on the site. Compare the results.

- [Ok]
- [X - describe the problem]

5. Open the title's page and check that the last entry is "Add to library":

- [Yes, it appears]
- [It does not appear]

6. Any problems found
- [ write the problem(s) here ]
**TV Series Section

Tests to run while on the titles page. For each title, verify that the context menu contains the entries below.

1. Add series to library

- [ ] Yes
- [ ] No

2. Add 2-3 titles to the library. We will check the library later.
- [Added correctly]
- [List any problems] (copy-paste for every title you had the problem with)

3. Download series

- [ ] Yes
- [ ] No

4. Search or Search series...

Search for a random title in KOD and for the same title on the site. Compare the results.

- [Ok]
- [X - describe the problem]

5. Open the series page and check that the last entry is "Add to library":

- [It does not appear]
- [Yes, it appears]

6. Open the episode page: the "Add to library" entry must NOT appear:

- [It does not appear]
- [Yes, it appears]

7. Any problems found
- [ write the problem(s) here ]
**Anime Section

Tests to run while on the titles page. For each title, verify that the context menu contains the entries below.

1. Add series to library

- [ ] Yes
- [ ] No

2. Add 2-3 titles to the library. We will check the library later.
- [Added correctly]
- [List any problems] (copy-paste for every title you had the problem with)

3. Download series

- [ ] Yes
- [ ] No

4. Renumbering

- [ ] Yes
- [ ] No

5. Search or Search series...

Search for a random title in KOD and for the same title on the site. Compare the results.

- [Ok]
- [X - describe the problem]

6. Open the series page and check that the last entry is "Add to library":

- [Yes, it appears]
- [It does not appear]

7. Open the episode page: the "Add to library" entry must NOT appear:

- [It does not appear]
- [Yes, it appears]

8. Any problems found
- [ write the problem(s) here ]

**End of the tests for the channel taken on its own!!!
14 addon.xml
@@ -1,12 +1,12 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<addon id="plugin.video.kod" name="Kodi on Demand" version="0.4.1" provider-name="KOD Team">
<addon id="plugin.video.kod" name="Kodi on Demand" provider-name="KOD Team" version="0.5">
<requires>
<import addon="xbmc.python" version="2.1.0"/>
<import addon="script.module.libtorrent" optional="true"/>
<import addon="metadata.themoviedb.org"/>
<import addon="metadata.tvdb.com"/>

</requires>
<extension point="xbmc.python.pluginsource" library="default.py">
<extension library="default.py" point="xbmc.python.pluginsource">
<provides>video</provides>
</extension>
<extension point="xbmc.addon.metadata">
@@ -19,7 +19,9 @@
<screenshot>resources/media/themes/ss/2.png</screenshot>
<screenshot>resources/media/themes/ss/3.png</screenshot>
</assets>
<news>Benvenuto su KOD!</news>
<news>KoD 0.5
-riscritti molti canali per cambiamenti nella struttura stessa di kod
-altre robe carine</news>
<description lang="it">Naviga velocemente sul web e guarda i contenuti presenti</description>
<disclaimer>[COLOR red]The owners and submitters to this addon do not host or distribute any of the content displayed by these addons nor do they have any affiliation with the content providers.[/COLOR]
[COLOR yellow]Kodi © is a registered trademark of the XBMC Foundation. We are not connected to or in any other way affiliated with Kodi, Team Kodi, or the XBMC Foundation. Furthermore, any software, addons, or products offered by us will receive no support in official Kodi channels, including the Kodi forums and various social networks.[/COLOR]</disclaimer>
@@ -29,6 +31,6 @@
<forum>https://t.me/kodiondemand</forum>
<source>https://github.com/kodiondemand/addon</source>
</extension>
<extension point="xbmc.service" library="videolibrary_service.py" start="login|startup">
<extension library="videolibrary_service.py" point="xbmc.service" start="login|startup">
</extension>
</addon>
</addon>
@@ -1,13 +1,13 @@
{
"altadefinizione01_club": "https://www.altadefinizione01.cc",
"altadefinizione01_link": "http://altadefinizione1.link",
"altadefinizione01": "https://www.altadefinizione01.cc",
"altadefinizione01_club": "https://www.altadefinizione01.cc",
"altadefinizione01_link": "http://altadefinizione1.com",
"altadefinizioneclick": "https://altadefinizione.cloud",
"altadefinizionehd": "https://altadefinizionetv.best",
"animeforge": "https://ww1.animeforce.org",
"animeforce": "https://ww1.animeforce.org",
"animeleggendari": "https://animepertutti.com",
"animespace": "http://www.animespace.tv",
"animestream": "https://www.animeworld.it",
"animespace": "https://www.animespace.tv",
"animesubita": "http://www.animesubita.org",
"animetubeita": "http://www.animetubeita.com",
"animevision": "https://www.animevision.it",
@@ -17,37 +17,29 @@
"casacinemainfo": "https://www.casacinema.info",
"cb01anime": "https://www.cineblog01.ink",
"cinemalibero": "https://www.cinemalibero.best",
"cinemastreaming": "https://cinemastreaming.icu",
"documentaristreamingda": "https://documentari-streaming-da.com",
"dreamsub": "https://www.dreamsub.stream",
"eurostreaming": "https://eurostreaming.pink",
"eurostreaming_video": "https://www.eurostreaming.best",
"fastsubita": "http://fastsubita.com",
"ffilms":"https://ffilms.org",
"filmigratis": "https://filmigratis.net",
"filmgratis": "https://www.filmaltadefinizione.net",
"filmigratis": "https://filmigratis.org",
"filmontv": "https://www.comingsoon.it",
"filmpertutti": "https://www.filmpertutti.pub",
"filmsenzalimiti": "https://filmsenzalimiti.best",
"filmsenzalimiticc": "https://www.filmsenzalimiti.host",
"filmsenzalimiti_blue": "https://filmsenzalimiti.best",
"filmsenzalimiti_info": "https://www.filmsenzalimiti.host",
"filmstreaming01": "https://filmstreaming01.com",
"filmstreamingita": "http://filmstreamingita.live",
"guarda_serie": "https://guardaserie.site",
"guardafilm": "http://www.guardafilm.top",
"guardarefilm": "https://www.guardarefilm.red",
"guardaserie_stream": "https://guardaserie.co",
"guardaseriecc": "https://guardaserie.site",
"guardaserieclick": "https://www.guardaserie.media",
"guardaserie_stream": "https://guardaserie.co",
"guardaserieonline": "http://www.guardaserie.media",
"guardogratis": "http://guardogratis.net",
"ilgeniodellostreaming": "https://ilgeniodellostreaming.se",
"italiafilm": "https://www.italia-film.pw",
"italiafilmhd": "https://italiafilm.info",
"italiaserie": "https://italiaserie.org",
"itastreaming": "https://itastreaming.film",
"majintoon": "https://toonitalia.org",
"mondolunatico": "http://mondolunatico.org",
"mondolunatico2": "http://mondolunatico.org/stream/",
"mondoserietv": "https://mondoserietv.com",
@@ -56,7 +48,9 @@
"serietvonline": "https://serietvonline.tech",
"serietvsubita": "http://serietvsubita.xyz",
"serietvu": "https://www.serietvu.club",
"streamingaltadefinizione": "https://www.streamingaltadefinizione.best",
"streamingaltadefinizione": "https://www.streamingaltadefinizione.me",
"streamtime": "https://t.me/s/StreamTime",
"tantifilm": "https://www.tantifilm.eu",
"toonitalia": "https://toonitalia.org"
"toonitalia": "https://toonitalia.org",
"vedohd": "https://vedohd.icu/"
}
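The .json map above pairs each channel id with its current domain. A minimal sketch of the lookup a helper like `config.get_channel_url` could perform over such a map (a simplifying assumption for illustration; the real KOD helper also honours the per-channel `channel_host` setting):

```python
import json

# Trimmed copy of the channel -> host map shown above.
CHANNELS_JSON = '''{
  "toonitalia": "https://toonitalia.org",
  "vedohd": "https://vedohd.icu/"
}'''

channels = json.loads(CHANNELS_JSON)

def get_channel_url(channel):
    # Return the current domain for a channel id, or None if the id
    # is missing from the map.
    return channels.get(channel)

print(get_channel_url("vedohd"))
# -> https://vedohd.icu/
```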
@@ -1,16 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://www.likuoo.video'
host = 'https://www.likuoo.video'


def mainlist(item):
@@ -83,13 +81,20 @@ def lista(item):


def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.title = item.fulltitle
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videochannel=item.channel
data = re.sub(r"\n|\r|\t|amp;|\s{2}| ", "", data)
patron = 'url:\'([^\']+)\'.*?'
patron += 'data:\'([^\']+)\''
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl,post in matches:
post = post.replace("%3D", "=")
scrapedurl = host + scrapedurl
logger.debug( item.url +" , "+ scrapedurl +" , " +post )
datas = httptools.downloadpage(scrapedurl, post=post, headers={'Referer':item.url}).data
datas = datas.replace("\\", "")
url = scrapertools.find_single_match(datas, '<iframe src="([^"]+)"')
itemlist.append( Item(channel=item.channel, action="play", title = "%s", url=url ))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
return itemlist

@@ -1,14 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urllib
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://www.txxx.com'

@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://www.absoluporn.es'

@@ -90,7 +89,7 @@ def play(item):
matches = scrapertools.find_multiple_matches(data, patron)
for servervideo,path,filee in matches:
scrapedurl = servervideo + path + "56ea912c4df934c216c352fa8d623af3" + filee
itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
return itemlist

@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools
import base64

host = 'http://www.alsoporn.com'

@@ -66,8 +66,9 @@ def lista(item):
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle = scrapedtitle))
if not "0:00" in scrapedtime:
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle = scrapedtitle))

next_page = scrapertools.find_single_match(data,'<li><a href="([^"]+)" target="_self"><span class="alsoporn_page">NEXT</span></a>')
if next_page!="":
@@ -81,11 +82,18 @@ def play(item):
itemlist = []
data = httptools.downloadpage(item.url).data
scrapedurl = scrapertools.find_single_match(data,'<iframe frameborder=0 scrolling="no" src=\'([^\']+)\'')
data = httptools.downloadpage(item.url).data
data = httptools.downloadpage(scrapedurl).data
scrapedurl1 = scrapertools.find_single_match(data,'<iframe src="(.*?)"')
scrapedurl1 = scrapedurl1.replace("//www.playercdn.com/ec/i2.php?", "https://www.trinitytube.xyz/ec/i2.php?")
data = httptools.downloadpage(item.url).data
scrapedurl2 = scrapertools.find_single_match(data,'<source src="(.*?)"')
itemlist.append(item.clone(action="play", title=item.title, fulltitle = item.title, url=scrapedurl2))
scrapedurl1 = scrapedurl1.replace("//www.playercdn.com/ec/i2.php?url=", "")
scrapedurl1 = base64.b64decode(scrapedurl1 + "=")
logger.debug(scrapedurl1)
data = httptools.downloadpage(scrapedurl1).data
if "xvideos" in scrapedurl1:
scrapedurl2 = scrapertools.find_single_match(data, 'html5player.setVideoHLS\(\'([^\']+)\'\)')
if "xhamster" in scrapedurl1:
scrapedurl2 = scrapertools.find_single_match(data, '"[0-9]+p":"([^"]+)"').replace("\\", "")

logger.debug(scrapedurl2)
itemlist.append(item.clone(action="play", title=item.title, url=scrapedurl2))
return itemlist

@@ -3,59 +3,116 @@
# Channel for altadefinizione01
# ------------------------------------------------------------

from core import servertools, httptools, tmdb, scrapertoolsV2, support
from core.item import Item
from platformcode import logger, config
from specials import autoplay

# URL that always redirects to the current domain
#host = "https://altadefinizione01.to"
"""
TO BE FINISHED - TO CHECK

"""

##from specials import autoplay
from core import support #,servertools
from core.item import Item
from platformcode import config, logger

__channel__ = "altadefinizione01"

host = config.get_channel_url(__channel__)

IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['openload', 'streamango', 'rapidvideo', 'streamcherry', 'megadrive']
headers = [['User-Agent', 'Mozilla/50.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]

list_servers = ['verystream','openload','rapidvideo','streamango']
list_quality = ['default']

checklinks = config.get_setting('checklinks', 'altadefinizione01')
checklinks_number = config.get_setting('checklinks_number', 'altadefinizione01')

headers = [['Referer', host]]
blacklist_categorie = ['Altadefinizione01', 'Altadefinizione.to']


@support.menu
def mainlist(item):
support.log()

itemlist =[]

support.menu(itemlist, 'Al Cinema','peliculas',host+'/cinema/')
support.menu(itemlist, 'Ultimi Film Inseriti','peliculas',host)
support.menu(itemlist, 'Film Sub-ITA','peliculas',host+'/sub-ita/')
support.menu(itemlist, 'Film Ordine Alfabetico ','AZlist',host+'/catalog/')
support.menu(itemlist, 'Categorie Film','categories',host)
support.menu(itemlist, 'Cerca...','search')

autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)
film = [
('Al Cinema', ['/cinema/', 'peliculas', 'pellicola']),
('Generi', ['', 'categorie', 'genres']),
('Lettera', ['/catalog/a/', 'categorie', 'orderalf']),
('Anni', ['', 'categorie', 'years']),
('Sub-ITA', ['/sub-ita/', 'peliculas', 'pellicola'])
]

return itemlist
return locals()

@support.scrape
def peliculas(item):
support.log('peliculas',item)
## support.dbg()
action="findvideos"
if item.args == "search":
patronBlock = r'</script> <div class="boxgrid caption">(?P<block>.*)<div id="right_bar">'
else:
patronBlock = r'<div class="cover_kapsul ml-mask">(?P<block>.*)<div class="page_nav">'
patron = r'<div class="cover boxcaption"> <h2>.<a href="(?P<url>[^"]+)">.*?<.*?src="(?P<thumb>[^"]+)"'\
'.+?[^>]+>[^>]+<div class="trdublaj"> (?P<quality>[A-Z/]+)<[^>]+>(?:.[^>]+>(?P<lang>.*?)<[^>]+>).*?'\
'<p class="h4">(?P<title>.*?)</p>[^>]+> [^>]+> [^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+> [^>]+> '\
'[^>]+>[^>]+>(?P<year>\d{4})[^>]+>[^>]+> [^>]+>[^>]+>(?P<duration>\d+).+?>.*?<p>(?P<plot>[^<]+)<'

def categories(item):
support.log(item)
itemlist = support.scrape(item,'<li><a href="([^"]+)">(.*?)</a></li>',['url','title'],headers,'Altadefinizione01',patron_block='<ul class="kategori_list">(.*?)</ul>',action='peliculas')
return support.thumb(itemlist)
patronNext = '<span>\d</span> <a href="([^"]+)">'

## support.regexDbg(item, patron, headers)

return locals()

def AZlist(item):
support.log()
return support.scrape(item,r'<a title="([^"]+)" href="([^"]+)"',['title','url'],headers,patron_block=r'<div class="movies-letter">(.*?)<\/div>',action='peliculas_list')
@support.scrape
def categorie(item):
support.log('categorie',item)

if item.args != 'orderalf': action = "peliculas"
else: action = 'orderalf'

blacklist = ['Altadefinizione01']

if item.args == 'genres':
patronBlock = r'<ul class="kategori_list">(?P<block>.*)<div class="tab-pane fade" id="wtab2">'
patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
elif item.args == 'years':
patronBlock = r'<ul class="anno_list">(?P<block>.*)</a></li> </ul> </div>'
patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
elif item.args == 'orderalf':
patronBlock = r'<div class="movies-letter">(?P<block>.*)<div class="clearfix">'
patron = '<a title=.*?href="(?P<url>[^"]+)"><span>(?P<title>.*?)</span>'

#support.regexDbg(item, patronBlock, headers)

return locals()

@support.scrape
def orderalf(item):
support.log('orderalf',item)

action= 'findvideos'
patron = r'<td class="mlnh-thumb"><a href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"'\
'.+?[^>]+>[^>]+ [^>]+[^>]+ [^>]+>(?P<title>[^<]+).*?[^>]+>(?P<year>\d{4})<'\
'[^>]+>[^>]+>(?P<quality>[A-Z]+)[^>]+> <td class="mlnh-5">(?P<lang>.*?)</td>'
patronNext = r'<span>[^<]+</span>[^<]+<a href="(.*?)">'

return locals()

def findvideos(item):
support.log('findvideos', item)
return support.server(item, headers=headers)

def search(item, text):
logger.info("%s mainlist search log: %s %s" % (__channel__, item, text))
itemlist = []
text = text.replace(" ", "+")
item.url = host + "/index.php?do=search&story=%s&subaction=search" % (text)
item.args = "search"
try:
return peliculas(item)
# Catch the exception so the global search is not interrupted if the channel breaks!
except:
import sys
for line in sys.exc_info():
logger.error("search except %s: %s" % (__channel__, line))
return []

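The search() above deliberately catches every exception so that one broken channel cannot abort KOD's global search. Stripped of the scraping details, the pattern looks like this (the function bodies here are invented stand-ins, not KOD code):

```python
import sys

def fragile_search(text):
    # Stand-in for a channel's real search; it may raise whenever the
    # target site changes its layout.
    raise ValueError("site layout changed")

def safe_search(text):
    try:
        return fragile_search(text)
    except Exception:
        # Log every element of exc_info, then report "no results"
        # instead of propagating the error to the global search.
        for line in sys.exc_info():
            print("search except: %s" % (line,))
        return []

print(safe_search("matrix"))
# -> []
```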
def newest(categoria):
# import web_pdb; web_pdb.set_trace()
support.log(categoria)
itemlist = []
item = Item()
@@ -67,7 +124,7 @@ def newest(categoria):

if itemlist[-1].action == "peliculas":
itemlist.pop()
# Continue the search in case of error
# Continue the search in case of error
except:
import sys
for line in sys.exc_info():
@@ -75,76 +132,3 @@ def newest(categoria):
return []

return itemlist


def search(item, texto):
support.log(texto)
item.url = "%s/index.php?do=search&story=%s&subaction=search" % (
host, texto)
try:
return peliculas(item)
# Continue the search in case of error
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []


def peliculas(item):
support.log()
itemlist = []

data = httptools.downloadpage(item.url, headers=headers).data
patron = r'<div class="cover_kapsul ml-mask".*?<a href="(.*?)">(.*?)<\/a>.*?<img .*?src="(.*?)".*?<div class="trdublaj">(.*?)<\/div>.(<div class="sub_ita">(.*?)<\/div>|())'
matches = scrapertoolsV2.find_multiple_matches(data, patron)

for scrapedurl, scrapedtitle, scrapedthumbnail, scrapedquality, subDiv, subText, empty in matches:
info = scrapertoolsV2.find_multiple_matches(data, r'<span class="ml-label">([0-9]+)+<\/span>.*?<span class="ml-label">(.*?)<\/span>.*?<p class="ml-cat".*?<p>(.*?)<\/p>.*?<a href="(.*?)" class="ml-watch">')
infoLabels = {}
for infoLabels['year'], duration, scrapedplot, checkUrl in info:
if checkUrl == scrapedurl:
break

infoLabels['duration'] = int(duration.replace(' min', '')) * 60 # compute the duration in seconds
scrapedthumbnail = host + scrapedthumbnail
scrapedtitle = scrapertoolsV2.decodeHtmlentities(scrapedtitle)
fulltitle = scrapedtitle
if subDiv:
fulltitle += support.typo(subText + ' _ () color limegreen')
fulltitle += support.typo(scrapedquality.strip()+ ' _ [] color kod')

itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType=item.contenType,
contentTitle=scrapedtitle,
contentQuality=scrapedquality.strip(),
plot=scrapedplot,
title=fulltitle,
fulltitle=scrapedtitle,
show=scrapedtitle,
url=scrapedurl,
infoLabels=infoLabels,
thumbnail=scrapedthumbnail))

tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
support.nextPage(itemlist,item,data,'<span>[^<]+</span>[^<]+<a href="(.*?)">')

return itemlist

def peliculas_list(item):
support.log()
item.fulltitle = ''
block = r'<tbody>(.*)<\/tbody>'
patron = r'<a href="([^"]+)" title="([^"]+)".*?> <img.*?src="([^"]+)".*?<td class="mlnh-3">([0-9]{4}).*?mlnh-4">([A-Z]+)'
return support.scrape(item,patron, ['url', 'title', 'thumb', 'year', 'quality'], patron_block=block)



def findvideos(item):
support.log()

itemlist = support.server(item, headers=headers)

return itemlist

@@ -11,14 +11,6 @@
"movie"
],
"settings": [
{
"id": "channel_host",
"type": "text",
"label": "Host del canale",
"default": "https://altadefinizione01.estate/",
"enabled": true,
"visible": true
},
{
"id": "modo_grafico",
"type": "bool",

@@ -3,131 +3,94 @@
|
||||
# -*- Riscritto per KOD -*-
|
||||
# -*- By Greko -*-
|
||||
# -*- last change: 04/05/2019
|
||||
# -*- doppione di altadefinizione01
|
||||
|
||||
|
||||
from core import channeltools, servertools, support
|
||||
from specials import autoplay
|
||||
from core import servertools, support
|
||||
from core.item import Item
|
||||
from platformcode import config, logger
|
||||
from specials import autoplay
|
||||
|
||||
__channel__ = "altadefinizione01_club"
|
||||
host = config.get_channel_url(__channel__)
|
||||
|
||||
# ======== Funzionalità =============================
|
||||
|
||||
checklinks = config.get_setting('checklinks', __channel__)
|
||||
checklinks_number = config.get_setting('checklinks_number', __channel__)
|
||||
|
||||
headers = [['User-Agent', 'Mozilla/50.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
|
||||
['Referer', host]]
|
||||
parameters = channeltools.get_channel_parameters(__channel__)
fanart_host = parameters['fanart']
thumbnail_host = parameters['thumbnail']

IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['verystream', 'openload', 'supervideo', 'rapidvideo', 'streamango']  # for autoplay
list_servers = ['verystream', 'openload', 'rapidvideo', 'streamango']
list_quality = ['default']


# =========== home menu ===================

@support.menu
def mainlist(item):
    """
    Build the channel's main menu
    :param item:
    :return: itemlist []
    """
    logger.info("%s mainlist log: %s" % (__channel__, item))
    itemlist = []

    # Main menu
    support.menu(itemlist, 'Film Ultimi Arrivi bold', 'peliculas', host, args='pellicola')
    support.menu(itemlist, 'Genere', 'categorie', host, args='genres')
    support.menu(itemlist, 'Per anno submenu', 'categorie', host, args=['Film per Anno', 'years'])
    support.menu(itemlist, 'Per lettera', 'categorie', host + '/catalog/a/', args=['Film per Lettera', 'orderalf'])
    support.menu(itemlist, 'Al Cinema bold', 'peliculas', host + '/cinema/', args='pellicola')
    support.menu(itemlist, 'Sub-ITA bold', 'peliculas', host + '/sub-ita/', args='pellicola')
    support.menu(itemlist, 'Cerca film submenu', 'search', host, args='search')
    film = [
        ('Al Cinema', ['/cinema/', 'peliculas', 'pellicola']),
        ('Generi', ['', 'categorie', 'genres']),
        ('Lettera', ['/catalog/a/', 'categorie', 'orderalf']),
        ('Anni', ['', 'categorie', 'years']),
        ('Sub-ITA', ['/sub-ita/', 'peliculas', 'pellicola'])
    ]

    autoplay.init(item.channel, list_servers, list_quality)
    autoplay.show_option(item.channel, itemlist)

    support.channel_config(item, itemlist)

    return itemlist
    return locals()
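The new channel style above ends with `return locals()` and leaves the work to the `support.menu` decorator. A minimal sketch of how such a decorator can consume `locals()` (the `menu` helper below is hypothetical; KoD's real `support` module is richer):

```python
import functools

def menu(func):
    # Decorator sketch: the wrapped channel function just returns locals();
    # the wrapper turns the conventional 'film' list of
    # (label, [path, action, ...]) tuples into concrete menu entries.
    @functools.wraps(func)
    def wrapper(item):
        ctx = func(item)
        host = ctx.get('host', '')
        entries = []
        for label, spec in ctx.get('film', []):
            entries.append({'title': label, 'url': host + spec[0], 'action': spec[1]})
        return entries
    return wrapper

@menu
def mainlist(item):
    host = 'https://example.invalid'
    film = [('Al Cinema', ['/cinema/', 'peliculas']),
            ('Sub-ITA', ['/sub-ita/', 'peliculas'])]
    return locals()

print(mainlist(None))
```

The convention keeps each channel declarative: the function only names data, and one shared wrapper builds the actual `itemlist`.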
# ======== defs in menu order ===========================

# =========== def to list the films =============

@support.scrape
def peliculas(item):
    logger.info("%s mainlist peliculas log: %s" % (__channel__, item))
    itemlist = []
    ## import web_pdb; web_pdb.set_trace()
    support.log('peliculas', item)

    patron_block = r'<div id="dle-content">(.*?)<div class="page_nav">'
    action = "findvideos"
    if item.args == "search":
        patron_block = r'</table> </form>(.*?)<div class="search_bg">'
        patron = r'<h2>.<a href="(.*?)".*?src="(.*?)".*?(?:|<div class="sub_ita">(.*?)</div>)[ ]</div>.*?<p class="h4">(.*?)</p>'
        patronBlock = r'</script> <div class="boxgrid caption">(?P<block>.*)<div id="right_bar">'
    else:
        patronBlock = r'<div class="cover_kapsul ml-mask">(?P<block>.*)<div class="page_nav">'
        patron = r'<div class="cover boxcaption"> <h2>.<a href="(?P<url>[^"]+)">.*?<.*?src="(?P<thumb>[^"]+)"'\
                 '.+?[^>]+>[^>]+<div class="trdublaj"> (?P<quality>[A-Z]+)<[^>]+>(?:.[^>]+>(?P<lang>.*?)<[^>]+>).*?'\
                 '<p class="h4">(?P<title>.*?)</p>[^>]+> [^>]+> [^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+> [^>]+> '\
                 '[^>]+>[^>]+>(?P<year>\d{4})[^>]+>[^>]+> [^>]+>[^>]+>(?P<duration>\d+).+?>'

    listGroups = ['url', 'thumb', 'lang', 'title', 'year']
    patronNext = '<span>\d</span> <a href="([^"]+)">'

    patronNext = '<span>[^<]+</span>[^<]+<a href="(.*?)">'

    itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
                              headers=headers, patronNext=patronNext, patron_block=patron_block,
                              action='findvideos')

    return itemlist
    return locals()
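The `patronBlock`/`patron` pair is the core of the scraper: one regex isolates the interesting region of the page, a second one with named groups extracts the fields. A self-contained sketch of that two-step flow (the sample HTML is invented to match the structure these patterns target):

```python
import re

# Invented sample markup shaped like the pages the patterns above target.
html = ('<div class="cover_kapsul ml-mask">'
        '<a href="/film/alpha">Alpha</a> '
        '<a href="/film/beta">Beta</a>'
        '<div class="page_nav">')

# Step 1: patronBlock narrows the page to one region via the (?P<block>...) group.
patronBlock = r'<div class="cover_kapsul ml-mask">(?P<block>.*)<div class="page_nav">'
block = re.search(patronBlock, html, re.DOTALL).group('block')

# Step 2: patron runs inside the block; groupdict() yields the fields by name,
# which is why the new-style patterns prefer (?P<url>...) over bare groups.
patron = r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>'
items = [m.groupdict() for m in re.finditer(patron, block)]
print(items)
```

With named groups the `listGroups` list becomes redundant, since each match already knows which field is which.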
# =========== def category page ======================================

@support.scrape
def categorie(item):
    logger.info("%s mainlist categorie log: %s" % (__channel__, item))
    itemlist = []
    support.log('categorie', item)
    ## import web_pdb; web_pdb.set_trace()

    if item.args != 'orderalf': action = "peliculas"
    else: action = 'orderalf'
    blacklist = 'Altadefinizione01'

    # make the appropriate changes from here
    patron = r'<li><a href="(.*?)">(.*?)</a>'
    action = 'peliculas'
    if item.args == 'genres':
        bloque = r'<ul class="kategori_list">(.*?)</ul>'
    elif item.args[1] == 'years':
        bloque = r'<ul class="anno_list">(.*?)</ul>'
    elif item.args[1] == 'orderalf':
        bloque = r'<div class="movies-letter">(.*)<div class="clearfix">'
        patron = r'<a title=.*?href="(.*?)"><span>(.*?)</span>'
        action = 'orderalf'

    listGroups = ['url', 'title']
    patronNext = ''

    itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
                              headers=headers, patronNext=patronNext, patron_block=bloque,
                              action=action)

    return itemlist
        patronBlock = r'<ul class="kategori_list">(?P<block>.*)</ul>'
        patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
    elif item.args == 'years':
        patronBlock = r'<ul class="anno_list">(?P<block>.*)</ul>'
        patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
    elif item.args == 'orderalf':
        patronBlock = r'<div class="movies-letter">(.*)<div class="clearfix">'
        patron = '<a title=.*?href="(?P<url>[^"]+)"><span>(?P<title>.*?)</span>'

    return locals()
# =========== def alphabetical list page ===============================

@support.scrape
def orderalf(item):
    logger.info("%s mainlist orderalf log: %s" % (__channel__, item))
    itemlist = []
    support.log('orderalf', item)

    listGroups = ['url', 'title', 'thumb', 'year', 'lang']
    patron = r'<td class="mlnh-thumb"><a href="(.*?)".title="(.*?)".*?src="(.*?)".*?mlnh-3">(.*?)<.*?"mlnh-5">.<(.*?)<td'
    patronNext = r'<span>[^<]+</span>[^<]+<a href="(.*?)">'

    itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
                              headers=headers, patronNext=patronNext,
                              action='findvideos')

    return itemlist
    action = 'findvideos'
    patron = r'<td class="mlnh-thumb"><a href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"'\
             '.+?[^>]+>[^>]+ [^>]+[^>]+ [^>]+>(?P<title>[^<]+).*?[^>]+>(?P<year>\d{4})<'\
             '[^>]+>[^>]+>(?P<quality>[A-Z]+)[^>]+> <td class="mlnh-5">(?P<lang>.*?)</td>'
    patronNext = r'<span>[^<]+</span>[^<]+<a href="(.*?)">'

    return locals()
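`patronNext` drives pagination: the current page number sits in a `<span>`, and the first `<a href>` after it is the next page. A small sketch of that extraction, under the assumption that the pager markup looks like the pattern expects:

```python
import re

def next_page_url(html):
    # The current page is '<span>N</span>'; the following <a> is the
    # next-page link. Returns None on the last page (no match).
    patronNext = r'<span>[^<]+</span>[^<]+<a href="(.*?)">'
    m = re.search(patronNext, html)
    return m.group(1) if m else None

print(next_page_url('<span>2</span> <a href="/catalog/a/page/3/">3</a>'))
print(next_page_url('<span>9</span> fine'))
```

When the match fails, `support.scrape` simply emits no "next page" item, which is how the listing terminates.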
# =========== def film page with the servers to watch it =============

def findvideos(item):
    logger.info("%s mainlist findvideos_film log: %s" % (__channel__, item))
    itemlist = []
    support.log('findvideos', item)
    return support.server(item, headers=headers)
# =========== def to search films/TV series =============

@@ -137,7 +100,7 @@ def search(item, text):
    itemlist = []
    text = text.replace(" ", "+")
    item.url = host + "/index.php?do=search&story=%s&subaction=search" % (text)
    #item.extra = "search"
    item.args = "search"
    try:
        return peliculas(item)
    # Catch the exception so a broken channel does not interrupt the global search!
@@ -150,16 +113,17 @@ def search(item, text):

# =========== def for the news in the main menu =============

def newest(categoria):
    logger.info("%s mainlist newest log: %s" % (__channel__, categoria))
    support.log(categoria)
    itemlist = []
    item = Item()
    try:
        item.url = host
        item.action = "peliculas"
        itemlist = peliculas(item)
        if itemlist[-1].action == "peliculas":
            itemlist.pop()
        if categoria == "peliculas":
            item.url = host
            item.action = "peliculas"
            itemlist = peliculas(item)

            if itemlist[-1].action == "peliculas":
                itemlist.pop()
    # Keep searching even on error
    except:
        import sys
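Both `search()` and `newest()` wrap their scraping in a bare `try/except` so that one broken channel cannot abort the global search over all channels. A sketch of that pattern in isolation (function names here are illustrative, not KoD API):

```python
import sys

def safe_search(scraper, query):
    # Same idea as the channels' search()/newest(): trap any exception,
    # log what happened, and return an empty list so the caller's loop
    # over the remaining channels keeps going.
    try:
        return scraper(query)
    except Exception:
        for line in sys.exc_info():
            print("error: {0}".format(line))
        return []

def broken_channel(_query):
    raise ValueError("site layout changed")

print(safe_search(broken_channel, "avatar"))
print(safe_search(lambda q: [q + " (2009)"], "avatar"))
```

The trade-off is that parser regressions surface only in the log, so the per-channel `logger.info` calls above are the main debugging aid.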
@@ -4,24 +4,11 @@
    "active": true,
    "adult": false,
    "language": ["ita"],
    "fanart": "https://altadefinizione01.estate/templates/Dark/img/nlogo.png",
    "thumbnail": "https://altadefinizione01.estate/templates/Dark/img/nlogo.png",
    "banner": "https://altadefinizione01.estate/templates/Dark/img/nlogo.png",
    "fix": "reimpostato url e modificato file per KOD",
    "change_date": "2019-04-30",
    "categories": [
        "movie",
        "vosi"
    ],
    "bannermenu": "altadefinizione01link.png",
    "banner": "altadefinizione01link.png",
    "fanart": "altadefinizione01link.png",
    "categories": ["movie", "vosi"],
    "settings": [
        {
            "id": "channel_host",
            "type": "text",
            "label": "Host del canale",
            "default": "https://altadefinizione01.estate/",
            "enabled": true,
            "visible": true
        },
        {
            "id": "modo_grafico",
            "type": "bool",
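Each entry in the channel's `settings` list needs at least `id`, `type`, `label`, and `default`, because the Python side reads them back by id. A quick sanity check one can run over a channel JSON (the fragment below is hand-copied from the settings above):

```python
import json

# Hand-copied fragment of the channel JSON; the keys checked here are the
# ones the channel code reads back via its settings id.
fragment = '''
{
  "settings": [
    {"id": "channel_host", "type": "text", "label": "Host del canale",
     "default": "https://altadefinizione01.estate/",
     "enabled": true, "visible": true}
  ]
}
'''

data = json.loads(fragment)
required = {'id', 'type', 'label', 'default'}
for setting in data['settings']:
    missing = required - set(setting)
    assert not missing, 'setting %s is missing %s' % (setting.get('id'), missing)
print([s['id'] for s in data['settings']])
```

Running `json.loads` on the whole file also catches the trailing-comma and duplicate-key slips that are easy to introduce when editing these files by hand.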
@@ -2,97 +2,78 @@
# -*- Channel Altadefinizione01L Film - Serie -*-
# -*- By Greko -*-

import channelselector
from specials import autoplay
from core import servertools, support, jsontools
from core import servertools, support  #, jsontools
from core.item import Item
from platformcode import config, logger

__channel__ = "altadefinizione01_link"

# ======== utility defs START ============================
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]

list_servers = ['supervideo', 'streamcherry', 'rapidvideo', 'streamango', 'openload']
list_quality = ['default']

host = config.get_setting("channel_host", __channel__)

headers = [['Referer', host]]
# =========== home menu ===================

@support.menu
def mainlist(item):
    """
    Build the channel's main menu
    :param item:
    :return: itemlist []
    """
    support.log()
    itemlist = []
    ## support.dbg()

    # Main menu
    support.menu(itemlist, 'Novità bold', 'peliculas', host)
    support.menu(itemlist, 'Film per Genere', 'genres', host, args='genres')
    support.menu(itemlist, 'Film per Anno submenu', 'genres', host, args='years')
    support.menu(itemlist, 'Film per Qualità submenu', 'genres', host, args='quality')
    support.menu(itemlist, 'Al Cinema bold', 'peliculas', host + '/film-del-cinema')
    support.menu(itemlist, 'Popolari bold', 'peliculas', host + '/piu-visti.html')
    support.menu(itemlist, 'Mi sento fortunato bold', 'genres', host, args='lucky')
    support.menu(itemlist, 'Sub-ITA bold', 'peliculas', host + '/film-sub-ita/')
    support.menu(itemlist, 'Cerca film submenu', 'search', host)

    # for autoplay
    autoplay.init(item.channel, list_servers, list_quality)
    autoplay.show_option(item.channel, itemlist)

    support.channel_config(item, itemlist)
    film = [
        ('Al Cinema', ['/film-del-cinema', 'peliculas', '']),
        ('Generi', ['', 'genres', 'genres']),
        ('Anni', ['', 'genres', 'years']),
        ('Qualità', ['/piu-visti.html', 'genres', 'quality']),
        ('Mi sento fortunato', ['/piu-visti.html', 'genres', 'lucky']),
        ('Popolari', ['/piu-visti.html', 'peliculas', '']),
        ('Sub-ITA', ['/film-sub-ita/', 'peliculas', ''])
    ]

    return itemlist
    ## search = ''

    return locals()
# ======== defs in action order from the menu ===========================

@support.scrape
def peliculas(item):
    support.log()
    itemlist = []
    ## support.dbg()
    support.log('peliculas', item)

    patron = r'class="innerImage">.*?href="([^"]+)".*?src="([^"]+)"'\
             '.*?class="ml-item-title">([^<]+)</.*?class="ml-item-label"> (\d{4}) <'\
             '.*?class="ml-item-label">.*?class="ml-item-label ml-item-label-.+?"> '\
             '(.+?) </div>.*?class="ml-item-label"> (.+?) </'
    listGroups = ['url', 'thumb', 'title', 'year', 'quality', 'lang']
    ## patron = r'class="innerImage">.*?href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"'\
    ##          '.*?class="ml-item-title">(?P<title>[^<]+)</.*?class="ml-item-label"> '\
    ##          '(?P<year>\d{4}) <.*?class="ml-item-label"> (?P<duration>\d+) .*?'\
    ##          'class="ml-item-label ml-item-label-.+?"> (?P<quality>.+?) <.*?'\
    ##          'class="ml-itelangm-label"> (?P<lang>.+?) </'

    patron = r'class="innerImage">.*?href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+> (?P<year>\d{4})[^>]+>[^>]+> (?P<duration>\d+)[^>]+>[^>]+> (?P<quality>[a-zA-Z\\]+)[^>]+>[^>]+> (?P<lang>.*?) [^>]+>'
    patronNext = '<span>\d</span> <a href="([^"]+)">'

    itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
                              headers=headers, patronNext=patronNext,
                              action='findvideos')

    return itemlist
    ## debug = True

    return locals()
# =========== def category page ======================================

@support.scrape
def genres(item):
    support.log()
    itemlist = []
    #data = httptools.downloadpage(item.url, headers=headers).data
    support.log('genres', item)

    action = 'peliculas'
    ## item.contentType = 'movie'
    if item.args == 'genres':
        bloque = r'<ul class="listSubCat" id="Film">(.*?)</ul>'
        patronBlock = r'<ul class="listSubCat" id="Film">(?P<block>.*)</ul>'
    elif item.args == 'years':
        bloque = r'<ul class="listSubCat" id="Anno">(.*?)</ul>'
        patronBlock = r'<ul class="listSubCat" id="Anno">(?P<block>.*)</ul>'
    elif item.args == 'quality':
        bloque = r'<ul class="listSubCat" id="Qualita">(.*?)</ul>'
        patronBlock = r'<ul class="listSubCat" id="Qualita">(?P<block>.*)</ul>'
    elif item.args == 'lucky':  # random titles on the page, they change once a day
        bloque = r'FILM RANDOM.*?class="listSubCat">(.*?)</ul>'
        patronBlock = r'FILM RANDOM.*?class="listSubCat">(?P<block>.*)</ul>'
        action = 'findvideos'

    patron = r'<li><a href="([^"]+)">(.*?)<'

    listGroups = ['url', 'title']
    itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
                              headers=headers, patron_block=bloque,
                              action=action)

    return itemlist
    ## item.args = ''
    patron = r'<li><a href="(?P<url>[^"]+)">(?P<title>[^<]+)<'

    return locals()
# =========== def to search films/TV series =============
#host+/index.php?do=search&story=avatar&subaction=search
@@ -103,7 +84,7 @@ def search(item, text):
    item.url = host + "/index.php?do=search&story=%s&subaction=search" % (text)
    try:
        return peliculas(item)
    # The exception is caught so a failing channel does not interrupt the global search
    except:
        import sys
        for line in sys.exc_info():
@@ -134,14 +115,4 @@ def newest(categoria):
    return itemlist

def findvideos(item):
    support.log()

    itemlist = support.server(item, headers=headers)

    # Required for FilterTools
    # itemlist = filtertools.get_links(itemlist, item, list_language)

    # Required for AutoPlay
    autoplay.start(itemlist, item)

    return itemlist
    return support.server(item, headers=headers)
@@ -3,41 +3,60 @@
# Channel for altadefinizioneclick
# ----------------------------------------------------------

import re

from specials import autoplay
from core import servertools, support
from core.item import Item
from platformcode import logger, config
from specials import autoplay
from platformcode import config, logger

#host = config.get_setting("channel_host", 'altadefinizioneclick')
__channel__ = 'altadefinizioneclick'
host = config.get_channel_url(__channel__)

IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
list_servers = ['verystream', 'openload', 'streamango', "vidoza", "thevideo", "okru", 'youtube']
list_quality = ['1080p']

checklinks = config.get_setting('checklinks', 'altadefinizioneclick')
checklinks_number = config.get_setting('checklinks_number', 'altadefinizioneclick')

headers = [['Referer', host]]

@support.menu
def mainlist(item):
    support.log()
    itemlist = []
    support.log()

    support.menu(itemlist, 'Film', 'peliculas', host + "/nuove-uscite/")
    support.menu(itemlist, 'Per Genere submenu', 'menu', host, args='Film')
    support.menu(itemlist, 'Per Anno submenu', 'menu', host, args='Anno')
    support.menu(itemlist, 'Sub-ITA', 'peliculas', host + "/sub-ita/")
    support.menu(itemlist, 'Cerca...', 'search', host, 'movie')
    support.aplay(item, itemlist, list_servers, list_quality)
    support.channel_config(item, itemlist)
    film = ''  #'/nuove-uscite/'
    filmSub = [
        ('Novità', ['/nuove-uscite/', 'peliculas']),
        ('Al Cinema', ['/film-del-cinema', 'peliculas']),
        ('Generi', ['', 'menu', 'Film']),
        ('Anni', ['', 'menu', 'Anno']),
        ('Qualità', ['', 'menu', 'Qualita']),
        ('Sub-ITA', ['/sub-ita/', 'peliculas'])
    ]

    return itemlist
    return locals()

@support.scrape
def menu(item):
    support.log()

    action = 'peliculas'
    patron = r'<li><a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a></li>'
    patronBlock = r'<ul class="listSubCat" id="' + str(item.args) + '">(?P<block>.*)</ul>'

    return locals()
@support.scrape
def peliculas(item):
    support.log()
    if item.extra == 'search':
        patron = r'<a href="(?P<url>[^"]+)">\s*<div class="wrapperImage">(?:<span class="hd">(?P<quality>[^<]+)'\
                 '<\/span>)?<img[^s]+src="(?P<thumb>[^"]+)"[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)<[^<]+>'\
                 '(?:.*?IMDB:\s(?P<rating>[^<]+)<\/div>)?'
    else:
        patron = r'<img width[^s]+src="(?P<thumb>[^"]+)[^>]+><\/a>.*?<a href="(?P<url>[^"]+)">(?P<title>[^(?:\]|<)]+)'\
                 '(?:\[(?P<lang>[^\]]+)\])?<\/a>[^>]+>[^>]+>[^>]+>(?:\sIMDB\:\s(?P<rating>[^<]+)<)?'\
                 '(?:.*?<span class="hd">(?P<quality>[^<]+)<\/span>)?\s*<a'

    # with CERCA the search input mask opens
    patronNext = r'<a class="next page-numbers" href="([^"]+)">'

    return locals()

def search(item, texto):
    support.log("search ", texto)
@@ -77,36 +96,6 @@ def newest(categoria):

    return itemlist


def menu(item):
    support.log()
    itemlist = support.scrape(item, '<li><a href="([^"]+)">([^<]+)</a></li>', ['url', 'title'], headers, patron_block='<ul class="listSubCat" id="' + str(item.args) + '">(.*?)</ul>', action='peliculas')
    return support.thumb(itemlist)

def peliculas(item):
    support.log()
    if item.extra == 'search':
        patron = r'<a href="([^"]+)">\s*<div class="wrapperImage">(?:<span class="hd">([^<]+)<\/span>)?<img[^s]+src="([^"]+)"[^>]+>[^>]+>[^>]+>([^<]+)<[^<]+>(?:.*?IMDB:\s([^<]+)<\/div>)?'
        elements = ['url', 'quality', 'thumb', 'title', 'rating']

    else:
        patron = r'<img width[^s]+src="([^"]+)[^>]+><\/a>.*?<a href="([^"]+)">([^(?:\]|<)]+)(?:\[([^\]]+)\])?<\/a>[^>]+>[^>]+>[^>]+>(?:\sIMDB\:\s([^<]+)<)?(?:.*?<span class="hd">([^<]+)<\/span>)?\s*<a'
        elements = ['thumb', 'url', 'title', 'lang', 'rating', 'quality']
    itemlist = support.scrape(item, patron, elements, headers, patronNext='<a class="next page-numbers" href="([^"]+)">')
    return itemlist


def findvideos(item):
    support.log()

    itemlist = support.hdpass_get_servers(item)

    if checklinks:
        itemlist = servertools.check_list_links(itemlist, checklinks_number)

    # itemlist = filtertools.get_links(itemlist, item, list_language)

    autoplay.start(itemlist, item)
    support.videolibrary(itemlist, item, 'color kod bold')

    return itemlist
    support.log('findvideos', item)
    return support.hdpass_get_servers(item)
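The `checklinks`/`checklinks_number` pair bounds how many links get probed before the list is shown, trading completeness for speed. A sketch of that idea with a pluggable probe (`probe` stands in for a real HTTP check; this is not `servertools.check_list_links` itself):

```python
def filter_working(links, probe, limit):
    # Probe at most `limit` links and keep only the ones that respond;
    # links beyond the limit are passed through unverified, mirroring the
    # "check only the first N" setting.
    kept = []
    for link in links[:limit]:
        if probe(link):
            kept.append(link)
    return kept + links[limit:]

links = ['http://a/v1', 'http://b/v2', 'http://c/v3']
alive = {'http://a/v1', 'http://c/v3'}
print(filter_working(links, lambda u: u in alive, limit=2))
```

Capping the probe count matters because each check is a network round-trip and runs before the user sees anything.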
@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse, urllib2, urllib, re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools


host = 'https://www.analdin.com/es'

@@ -108,6 +108,6 @@ def play(item):
    matches = re.compile(patron, re.DOTALL).findall(data)
    for scrapedurl in matches:
        url = scrapedurl
        itemlist.append(item.clone(action="play", title=url, fulltitle=item.title, url=url))
        itemlist.append(item.clone(action="play", title=url, url=url))
    return itemlist
@@ -4,17 +4,17 @@
    "language": ["ita"],
    "active": true,
    "adult": false,
    "thumbnail": "http://www.animeforce.org/wp-content/uploads/2013/05/logo-animeforce.png",
    "banner": "http://www.animeforce.org/wp-content/uploads/2013/05/logo-animeforce.png",
    "thumbnail": "animeforce.png",
    "banner": "animeforce.png",
    "categories": ["anime"],
    "settings": [
        {
            "id": "include_in_global_search",
            "type": "bool",
            "label": "Incluir en busqueda global",
            "label": "Includi in Ricerca Globale",
            "default": false,
            "enabled": false,
            "visible": false
            "visible": true
        },
        {
            "id": "include_in_newest_anime",
@@ -31,6 +31,39 @@
            "default": true,
            "enabled": true,
            "visible": true
        }
        },
        {
            "id": "checklinks",
            "type": "bool",
            "label": "Verifica se i link esistono",
            "default": false,
            "enabled": true,
            "visible": true
        },
        {
            "id": "checklinks_number",
            "type": "list",
            "label": "Numero di link da verificare",
            "default": 1,
            "enabled": true,
            "visible": "eq(-1,true)",
            "lvalues": ["1", "3", "5", "10"]
        },
        {
            "id": "autorenumber",
            "type": "bool",
            "label": "@70712",
            "default": false,
            "enabled": true,
            "visible": true
        },
        {
            "id": "autorenumber_mode",
            "type": "bool",
            "label": "@70688",
            "default": false,
            "enabled": true,
            "visible": "eq(-1,true)"
        }
    ]
}
@@ -1,505 +1,132 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Thanks to the Icarus crew
# Channel for http://animeinstreaming.net/
# Channel for AnimeForce
# ------------------------------------------------------------
import re
import urllib
import urlparse

from core import httptools, scrapertools, servertools, tmdb
from core.item import Item
from platformcode import config, logger
from servers.decrypters import adfly
from core import support

__channel__ = "animeforge"
host = config.get_channel_url(__channel__)
__channel__ = "animeforce"
host = support.config.get_channel_url(__channel__)

IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['directo', 'openload']
list_quality = ['default']

checklinks = support.config.get_setting('checklinks', __channel__)
checklinks_number = support.config.get_setting('checklinks_number', __channel__)

headers = [['Referer', host]]

PERPAGE = 20


# -----------------------------------------------------------------
@support.menu
def mainlist(item):
    log("mainlist", "mainlist", item.channel)
    itemlist = [Item(channel=item.channel,
                     action="lista_anime",
                     title="[COLOR azure]Anime [/COLOR]- [COLOR lightsalmon]Lista Completa[/COLOR]",
                     url=host + "/lista-anime/",
                     thumbnail=CategoriaThumbnail,
                     fanart=CategoriaFanart),
                Item(channel=item.channel,
                     action="animeaggiornati",
                     title="[COLOR azure]Anime Aggiornati[/COLOR]",
                     url=host,
                     thumbnail=CategoriaThumbnail,
                     fanart=CategoriaFanart),
                Item(channel=item.channel,
                     action="ultimiep",
                     title="[COLOR azure]Ultimi Episodi[/COLOR]",
                     url=host,
                     thumbnail=CategoriaThumbnail,
                     fanart=CategoriaFanart),
                Item(channel=item.channel,
                     action="search",
                     title="[COLOR yellow]Cerca ...[/COLOR]",
                     thumbnail="http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search")]
    anime = ['/lista-anime/',
             ('In Corso', ['/lista-anime-in-corso/']),
             ('Ultimi Episodi', ['', 'peliculas', 'update']),
             ('Ultime Serie', ['/category/anime/articoli-principali/', 'peliculas', 'last'])
             ]
    return locals()

    return itemlist
# =================================================================

# -----------------------------------------------------------------

def newest(categoria):
    log("newest", "newest" + categoria)
    support.log(categoria)
    itemlist = []
    item = Item()
    item = support.Item()
    try:
        if categoria == "anime":
            item.url = host
            item.action = "ultimiep"
            itemlist = ultimiep(item)
            item.args = 'update'
            itemlist = peliculas(item)

            if itemlist[-1].action == "ultimiep":
            if itemlist[-1].action == "peliculas":
                itemlist.pop()
    # Keep searching even on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("{0}".format(line))
            support.logger.error("{0}".format(line))
        return []

    return itemlist


# =================================================================

# -----------------------------------------------------------------
@support.scrape
def search(item, texto):
    log("search", "search", item.channel)
    item.url = host + "/?s=" + texto
    try:
        return search_anime(item)
    # Keep searching even on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []
    search = texto
    item.contentType = 'tvshow'
    patron = '<strong><a href="(?P<url>[^"]+)">(?P<title>.*?) [Ss][Uu][Bb]'
    action = 'episodios'
    return locals()
# =================================================================
@support.scrape
def peliculas(item):
    anime = True
    if item.args == 'update':
        patron = r'src="(?P<thumb>[^"]+)" class="attachment-grid-post[^"]+" alt="[^"]*" title="(?P<title>[^"]+").*?<h2><a href="(?P<url>[^"]+)"'
        def itemHook(item):
            delete = support.scrapertoolsV2.find_single_match(item.fulltitle, r'( Episodio.*)')
            number = support.scrapertoolsV2.find_single_match(item.title, r'Episodio (\d+)')
            item.title = support.typo(number + ' - ', 'bold') + item.title.replace(delete, '')
            item.fulltitle = item.show = item.fulltitle.replace(delete, '')
            item.url = item.url.replace('-episodio-' + number, '')
            item.number = number
            return item
        action = 'findvideos'

# -----------------------------------------------------------------
def search_anime(item):
    log("search_anime", "search_anime", item.channel)
    itemlist = []
    elif item.args == 'last':
        patron = r'src="(?P<thumb>[^"]+)" class="attachment-grid-post[^"]+" alt="[^"]*" title="(?P<title>.*?)(?: Sub| sub| SUB|").*?<h2><a href="(?P<url>[^"]+)"'
        action = 'episodios'

    data = httptools.downloadpage(item.url).data
    else:
        pagination = ''
        patron = '<strong><a href="(?P<url>[^"]+)">(?P<title>.*?) [Ss][Uu][Bb]'
        action = 'episodios'

    patron = r'<a href="([^"]+)"><img.*?src="([^"]+)".*?title="([^"]+)".*?/>'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
        if "Sub Ita Download & Streaming" in scrapedtitle or "Sub Ita Streaming" in scrapedtitle:
            if 'episodio' in scrapedtitle.lower():
                itemlist.append(episode_item(item, scrapedtitle, scrapedurl, scrapedthumbnail))
            else:
                scrapedtitle, eptype = clean_title(scrapedtitle, simpleClean=True)
                cleantitle, eptype = clean_title(scrapedtitle)

                scrapedurl, total_eps = create_url(scrapedurl, cleantitle)

                itemlist.append(
                    Item(channel=item.channel,
                         action="episodios",
                         text_color="azure",
                         contentType="tvshow",
                         title=scrapedtitle,
                         url=scrapedurl,
                         fulltitle=cleantitle,
                         show=cleantitle,
                         thumbnail=scrapedthumbnail))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    # Next Page
    next_page = scrapertools.find_single_match(data, r'<link rel="next" href="([^"]+)"[^/]+/>')
    if next_page != "":
        itemlist.append(
            Item(channel=item.channel,
                 action="search_anime",
                 text_bold=True,
                 title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
                 url=next_page,
                 thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"))

    return itemlist
    return locals()
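The `itemHook` above splits an episode title into show name plus episode number so the series URL can be rebuilt without the `-episodio-N` suffix. The same extraction in isolation (plain `re` instead of KoD's `scrapertoolsV2` helpers; the sample title is invented):

```python
import re

def split_episode(title):
    # Pull the episode number out of titles like
    # 'One Piece Episodio 12 Sub Ita Streaming' and strip everything from
    # ' Episodio' onward to recover the bare show name.
    m = re.search(r'Episodio (\d+)', title)
    number = m.group(1) if m else ''
    show = re.sub(r' Episodio.*$', '', title)
    return show, number

print(split_episode('One Piece Episodio 12 Sub Ita Streaming'))
```

Titles without an `Episodio N` part come back unchanged with an empty number, which is the safe default for one-off entries.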
# =================================================================

# -----------------------------------------------------------------
def animeaggiornati(item):
    log("animeaggiornati", "animeaggiornati", item.channel)
    itemlist = []

    data = httptools.downloadpage(item.url, headers=headers).data

    patron = r'<img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+><a href="([^"]+)">([^<]+)</a>'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
        if 'Streaming' in scrapedtitle:
            cleantitle, eptype = clean_title(scrapedtitle)

            # Build the URL
            scrapedurl, total_eps = create_url(scrapedurl, scrapedtitle)

            itemlist.append(
                Item(channel=item.channel,
                     action="episodios",
                     text_color="azure",
                     contentType="tvshow",
                     title=cleantitle,
                     url=scrapedurl,
                     fulltitle=cleantitle,
                     show=cleantitle,
                     thumbnail=scrapedthumbnail))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    return itemlist


# =================================================================

# -----------------------------------------------------------------
def ultimiep(item):
    log("ultimiep", "ultimiep", item.channel)
    itemlist = []

    data = httptools.downloadpage(item.url, headers=headers).data

    patron = r'<img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+><a href="([^"]+)">([^<]+)</a>'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
        if 'Streaming' in scrapedtitle:
            itemlist.append(episode_item(item, scrapedtitle, scrapedurl, scrapedthumbnail))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    return itemlist
# =================================================================
|
||||
|
||||
# -----------------------------------------------------------------
|
||||
def lista_anime(item):
    log("lista_anime", "lista_anime", item.channel)

    itemlist = []

    p = 1
    if '{}' in item.url:
        item.url, p = item.url.split('{}')
        p = int(p)

    # Load the page
    data = httptools.downloadpage(item.url).data

    # Extract the contents
    patron = r'<li>\s*<strong>\s*<a\s*href="([^"]+?)">([^<]+?)</a>\s*</strong>\s*</li>'
    matches = re.compile(patron, re.DOTALL).findall(data)

    scrapedplot = ""
    scrapedthumbnail = ""
    for i, (scrapedurl, scrapedtitle) in enumerate(matches):
        if (p - 1) * PERPAGE > i: continue
        if i >= p * PERPAGE: break

        # Clean up the title
        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle).strip()
        cleantitle, eptype = clean_title(scrapedtitle, simpleClean=True)

        itemlist.append(
            Item(channel=item.channel,
                 extra=item.extra,
                 action="episodios",
                 text_color="azure",
                 contentType="tvshow",
                 title=cleantitle,
                 url=scrapedurl,
                 thumbnail=scrapedthumbnail,
                 fulltitle=cleantitle,
                 show=cleantitle,
                 plot=scrapedplot,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    if len(matches) >= p * PERPAGE:
        scrapedurl = item.url + '{}' + str(p + 1)
        itemlist.append(
            Item(channel=item.channel,
                 extra=item.extra,
                 action="lista_anime",
                 title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
                 url=scrapedurl,
                 thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png",
                 folder=True))

    return itemlist
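The page-token pagination used by `lista_anime` (a `{}` marker splits the base URL from the page number, and `PERPAGE` items are emitted per page) can be sketched in isolation. `paginate` is a hypothetical helper written for illustration, and `PERPAGE = 15` is an assumed value, not necessarily the module's constant:

```python
PERPAGE = 15  # assumed page size; illustrative only

def paginate(url, matches):
    # Split the '{}' page token, slice the current page, and build
    # the next-page URL the same way lista_anime does.
    p = 1
    if '{}' in url:
        url, p = url.split('{}')
        p = int(p)
    page = [m for i, m in enumerate(matches) if (p - 1) * PERPAGE <= i < p * PERPAGE]
    next_url = url + '{}' + str(p + 1) if len(matches) >= p * PERPAGE else None
    return page, next_url

page, next_url = paginate('/animelist{}2', list(range(40)))
print(page[0], page[-1], next_url)  # → 15 29 /animelist{}3
```

Encoding the page number inside the URL keeps the whole state in `item.url`, so the "next page" entry is just another `Item` pointing back at the same action.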


# =================================================================

# -----------------------------------------------------------------
@support.scrape
def episodios(item):
    itemlist = []

    data = httptools.downloadpage(item.url).data

    patron = r'<td style="[^"]*?">\s*.*?<strong>(.*?)</strong>.*?\s*</td>\s*<td style="[^"]*?">\s*<a href="([^"]+?)"[^>]+>\s*<img.*?src="([^"]+?)".*?/>\s*</a>\s*</td>'
    matches = re.compile(patron, re.DOTALL).findall(data)

    vvvvid_videos = False
    for scrapedtitle, scrapedurl, scrapedimg in matches:
        if 'nodownload' in scrapedimg or 'nostreaming' in scrapedimg:
            continue
        if 'vvvvid' in scrapedurl.lower():
            if not vvvvid_videos: vvvvid_videos = True
            itemlist.append(Item(title='I Video VVVVID Non sono supportati', text_color="red"))
            continue

        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
        scrapedtitle = re.sub(r'<[^>]*?>', '', scrapedtitle)
        scrapedtitle = '[COLOR azure][B]' + scrapedtitle + '[/B][/COLOR]'
        itemlist.append(
            Item(channel=item.channel,
                 action="findvideos",
                 contentType="episode",
                 title=scrapedtitle,
                 url=urlparse.urljoin(host, scrapedurl),
                 fulltitle=scrapedtitle,
                 show=scrapedtitle,
                 plot=item.plot,
                 fanart=item.fanart,
                 thumbnail=item.thumbnail))

    # Service commands
    if config.get_videolibrary_support() and len(itemlist) != 0 and not vvvvid_videos:
        itemlist.append(
            Item(channel=item.channel,
                 title=config.get_localized_string(30161),
                 text_color="yellow",
                 text_bold=True,
                 url=item.url,
                 action="add_serie_to_library",
                 extra="episodios",
                 show=item.show))

    return itemlist
    anime = True
    patron = r'<td style[^>]+>\s*.*?(?:<span[^>]+)?<strong>(?P<title>[^<]+)<\/strong>.*?<td style[^>]+>\s*<a href="(?P<url>[^"]+)"[^>]+>'
    def itemHook(item):
        item.url = item.url.replace(host, '')
        return item
    action = 'findvideos'
    return locals()
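The declarative style the commit moves toward (the decorated function only declares settings such as `patron` and `action`, then returns `locals()`, and the decorator performs the scraping) can be illustrated with a toy decorator. `scrape` below is a simplified stand-in written for this sketch, not the real `support.scrape` implementation:

```python
# Toy stand-in for support.scrape: read the settings the decorated
# function declares via locals() and act on them (here, just return them).
def scrape(func):
    def wrapper(item):
        params = func(item)
        return params.get('action'), params.get('patron')
    return wrapper

@scrape
def episodios(item):
    patron = r'<a href="(?P<url>[^"]+)"'
    action = 'findvideos'
    return locals()

print(episodios(None))  # → ('findvideos', '<a href="(?P<url>[^"]+)"')
```

The payoff is that each channel shrinks to a declaration of regexes and actions, while the download/match/Item-building loop lives once inside the decorator.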


# ==================================================================

# -----------------------------------------------------------------
def findvideos(item):
    logger.info("kod.animeforce findvideos")
    support.log(item)

    itemlist = []

    if item.number:
        item.url = support.match(item, r'<a href="([^"]+)"[^>]*>', patronBlock=r'Episodio %s(.*?)</tr>' % item.number)[0][0]

    if 'http' not in item.url:
        if '//' in item.url[:2]:
            item.url = 'http:' + item.url
        elif host not in item.url:
            item.url = host + item.url

    if 'adf.ly' in item.url:
        item.url = adfly.get_long_url(item.url)
    elif 'bit.ly' in item.url:
        item.url = support.httptools.downloadpage(item.url, only_headers=True, follow_redirects=False).headers.get("location")

    if item.extra:
        data = httptools.downloadpage(item.url, headers=headers).data
        matches = support.match(item, r'button"><a href="([^"]+)"')[0]

        blocco = scrapertools.find_single_match(data, r'%s(.*?)</tr>' % item.extra)
        url = scrapertools.find_single_match(blocco, r'<a href="([^"]+)"[^>]*>')
        if 'vvvvid' in url.lower():
            itemlist = [Item(title='I Video VVVVID Non sono supportati', text_color="red")]
            return itemlist
        if 'http' not in url: url = "".join(['https:', url])
    else:
        url = item.url
    for video in matches:
        itemlist.append(
            support.Item(channel=item.channel,
                         action="play",
                         title='diretto',
                         url=video,
                         server='directo'))

    if 'adf.ly' in url:
        url = adfly.get_long_url(url)
    elif 'bit.ly' in url:
        url = httptools.downloadpage(url, only_headers=True, follow_redirects=False).headers.get("location")
    support.server(item, itemlist=itemlist)

    if 'animeforce' in url:
        headers.append(['Referer', item.url])
        data = httptools.downloadpage(url, headers=headers).data
        itemlist.extend(servertools.find_video_items(data=data))

        for videoitem in itemlist:
            videoitem.title = item.title + videoitem.title
            videoitem.fulltitle = item.fulltitle
            videoitem.show = item.show
            videoitem.thumbnail = item.thumbnail
            videoitem.channel = item.channel
            videoitem.contentType = item.contentType

        url = url.split('&')[0]
        data = httptools.downloadpage(url, headers=headers).data
        patron = r"""<source\s*src=(?:"|')([^"']+?)(?:"|')\s*type=(?:"|')video/mp4(?:"|')>"""
        matches = re.compile(patron, re.DOTALL).findall(data)
        headers.append(['Referer', url])
        for video in matches:
            itemlist.append(Item(channel=item.channel, action="play", title=item.title,
                                 url=video + '|' + urllib.urlencode(dict(headers)), folder=False))
    else:
        itemlist.extend(servertools.find_video_items(data=url))

        for videoitem in itemlist:
            videoitem.title = item.title + videoitem.title
            videoitem.fulltitle = item.fulltitle
            videoitem.show = item.show
            videoitem.thumbnail = item.thumbnail
            videoitem.channel = item.channel
            videoitem.contentType = item.contentType

    return itemlist
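The URL normalization at the top of `findvideos` (prefix protocol-relative links with `http:`, prepend the host to site-relative paths) reduces to a few lines; `host` below is a placeholder, not the channel's real base URL:

```python
host = "https://www.example.org"  # placeholder base URL

def normalize(url):
    # Mirror findvideos' fix-ups for incomplete scraped URLs
    if 'http' not in url:
        if '//' in url[:2]:      # protocol-relative: //cdn/...
            url = 'http:' + url
        elif host not in url:    # site-relative: /path
            url = host + url
    return url

print(normalize('//cdn.example.org/ep1'))  # → http://cdn.example.org/ep1
print(normalize('/ep1'))                   # → https://www.example.org/ep1
```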


# ==================================================================

# =================================================================
# Service functions
# -----------------------------------------------------------------
def scrapedAll(url="", patron=""):
    data = httptools.downloadpage(url).data
    MyPatron = patron
    matches = re.compile(MyPatron, re.DOTALL).findall(data)
    scrapertools.printMatches(matches)

    return matches


# =================================================================

# -----------------------------------------------------------------
def create_url(url, title, eptype=""):
    logger.info()

    if 'download' not in url:
        url = url.replace('-streaming', '-download-streaming')

    total_eps = ""
    if not eptype:
        url = re.sub(r'episodio?-?\d+-?(?:\d+-|)[oav]*', '', url)
    else:  # Only runs if it is an episode
        total_eps = scrapertools.find_single_match(title.lower(), r'\((\d+)-(?:episodio|sub-ita)\)')  # This number will be removed from the url
        if total_eps: url = url.replace('%s-' % total_eps, '')
        url = re.sub(r'%s-?\d*-' % eptype.lower(), '', url)
    url = url.replace('-fine', '')

    return url, total_eps
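A standalone sketch of `create_url`'s slug rewriting (the no-`eptype` branch only); the input slug is made up for illustration, and the real function also consults the title for the episode count:

```python
import re

def create_url_sketch(url):
    # '-streaming' becomes '-download-streaming', then the episode
    # marker is stripped from the slug (same regexes as create_url)
    if 'download' not in url:
        url = url.replace('-streaming', '-download-streaming')
    url = re.sub(r'episodio?-?\d+-?(?:\d+-|)[oav]*', '', url)
    return url.replace('-fine', '')

print(create_url_sketch('/naruto-episodio-12-streaming'))  # → /naruto-download-streaming
```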


# =================================================================

# -----------------------------------------------------------------
def clean_title(title, simpleClean=False):
    logger.info()

    title = title.replace("Streaming", "").replace("&", "")
    title = title.replace("Download", "")
    title = title.replace("Sub Ita", "")
    cleantitle = title.replace("#038;", "").replace("amp;", "").strip()

    if '(Fine)' in title:
        cleantitle = cleantitle.replace('(Fine)', '').strip() + " (Fine)"
    eptype = ""
    if not simpleClean:
        if "episodio" in title.lower():
            eptype = scrapertools.find_single_match(title, "((?:Episodio?|OAV))")
            cleantitle = re.sub(r'%s\s*\d*\s*(?:\(\d+\)|)' % eptype, '', title).strip()

        if 'episodio' not in eptype.lower():
            cleantitle = re.sub(r'Episodio?\s*\d+\s*(?:\(\d+\)|)\s*[\(OAV\)]*', '', cleantitle).strip()

        if '(Fine)' in title:
            cleantitle = cleantitle.replace('(Fine)', '')

    return cleantitle, eptype
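The `simpleClean` path of `clean_title` just strips site boilerplate words; a minimal sketch with a made-up input:

```python
def simple_clean(title):
    # Same word-stripping sequence as clean_title's simpleClean path
    for word in ("Streaming", "&", "Download", "Sub Ita"):
        title = title.replace(word, "")
    return title.replace("#038;", "").replace("amp;", "").strip()

print(simple_clean("Naruto Sub Ita Streaming"))  # → Naruto
```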


# =================================================================

# -----------------------------------------------------------------
def episode_item(item, scrapedtitle, scrapedurl, scrapedthumbnail):
    scrapedtitle, eptype = clean_title(scrapedtitle, simpleClean=True)
    cleantitle, eptype = clean_title(scrapedtitle)

    # Build the URL
    scrapedurl, total_eps = create_url(scrapedurl, scrapedtitle, eptype)

    epnumber = ""
    if 'episodio' in eptype.lower():
        epnumber = scrapertools.find_single_match(scrapedtitle.lower(), r'episodio?\s*(\d+)')
        eptype += ":? %s%s" % (epnumber, (r" \(%s\):?" % total_eps) if total_eps else "")

    extra = r"<tr>\s*<td[^>]+><strong>(?:[^>]+>|)%s(?:[^>]+>[^>]+>|[^<]*|[^>]+>)</strong>" % eptype
    item = Item(channel=item.channel,
                action="findvideos",
                contentType="tvshow",
                title=scrapedtitle,
                text_color="azure",
                url=scrapedurl,
                fulltitle=cleantitle,
                extra=extra,
                show=cleantitle,
                thumbnail=scrapedthumbnail)
    return item


# =================================================================

# -----------------------------------------------------------------
def scrapedSingle(url="", single="", patron=""):
    data = httptools.downloadpage(url).data
    paginazione = scrapertools.find_single_match(data, single)
    matches = re.compile(patron, re.DOTALL).findall(paginazione)
    scrapertools.printMatches(matches)

    return matches


# =================================================================

# -----------------------------------------------------------------
def Crea_Url(pagina="1", azione="ricerca", categoria="", nome=""):
    # example:
    # chiamate.php?azione=ricerca&cat=&nome=&pag=
    Stringa = host + "/chiamate.php?azione=" + azione + "&cat=" + categoria + "&nome=" + nome + "&pag=" + pagina
    log("crea_Url", Stringa)
    return Stringa
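`Crea_Url` builds the query string by plain concatenation; the same result can be had with `urlencode`, which also escapes values. A sketch with a placeholder `host` (the channel itself targets Python 2, this sketch uses Python 3's `urllib.parse`):

```python
from urllib.parse import urlencode

host = "https://example.com"  # placeholder

def crea_url(pagina="1", azione="ricerca", categoria="", nome=""):
    # Parameter order matches the hand-built string: azione, cat, nome, pag
    qs = urlencode([("azione", azione), ("cat", categoria), ("nome", nome), ("pag", pagina)])
    return host + "/chiamate.php?" + qs

print(crea_url(pagina="2", nome="naruto"))
# → https://example.com/chiamate.php?azione=ricerca&cat=&nome=naruto&pag=2
```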


# =================================================================

# -----------------------------------------------------------------
def log(funzione="", stringa="", canale=""):
    logger.debug("[" + canale + "].[" + funzione + "] " + stringa)


# =================================================================

# =================================================================
# service references
# -----------------------------------------------------------------
AnimeThumbnail = "http://img15.deviantart.net/f81c/i/2011/173/7/6/cursed_candies_anime_poster_by_careko-d3jnzg9.jpg"
AnimeFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
CategoriaThumbnail = "http://static.europosters.cz/image/750/poster/street-fighter-anime-i4817.jpg"
CategoriaFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
CercaThumbnail = "http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search"
CercaFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
AvantiTxt = config.get_localized_string(30992)
AvantiImg = "http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"
return itemlist
@@ -7,72 +7,5 @@
    "thumbnail": "animepertutti.png",
    "bannermenu": "animepertutti.png",
    "categories": ["anime"],
    "settings": [
        {
            "id": "include_in_global_search",
            "type": "bool",
            "label": "Includi ricerca globale",
            "default": false,
            "enabled": false,
            "visible": false
        },
        {
            "id": "include_in_newest_anime",
            "type": "bool",
            "label": "Includi in Novità - Anime",
            "default": true,
            "enabled": true,
            "visible": true
        },
        {
            "id": "include_in_newest_italiano",
            "type": "bool",
            "label": "Includi in Novità - Italiano",
            "default": true,
            "enabled": true,
            "visible": true
        },
        {
            "id": "checklinks",
            "type": "bool",
            "label": "Verifica se i link esistono",
            "default": false,
            "enabled": true,
            "visible": true
        },
        {
            "id": "checklinks_number",
            "type": "list",
            "label": "Numero de link da verificare",
            "default": 1,
            "enabled": true,
            "visible": "eq(-1,true)",
            "lvalues": [ "1", "3", "5", "10" ]
        },
        {
            "id": "filter_languages",
            "type": "list",
            "label": "Mostra link in lingua...",
            "default": 0,
            "enabled": true,
            "visible": true,
            "lvalues": ["Non filtrare", "IT"]
        },
        {
            "id": "autorenumber",
            "type": "bool",
            "label": "@70712",
            "default": false,
            "enabled": true,
            "visible": true
        },
        {
            "id": "autorenumber_mode",
            "type": "bool",
            "label": "@70688",
            "default": false,
            "enabled": true,
            "visible": "eq(-1,true)"
        }
    ]
    "settings": []
}
@@ -3,184 +3,99 @@
# Channel for animeleggendari
# ------------------------------------------------------------

import re

from core import servertools, httptools, scrapertoolsV2, tmdb, support
from core.item import Item
from core.support import log, menu
from core import support
from lib.js2py.host import jsfunctions
from platformcode import logger, config
from specials import autoplay, autorenumber

__channel__ = "animeleggendari"
host = config.get_channel_url(__channel__)
host = support.config.get_channel_url(__channel__)

# Required for Autoplay
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['verystream', 'openload', 'streamango']
headers = [['User-Agent', 'Mozilla/50.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
           ['Referer', host]]

list_servers = ['verystream', 'openload', 'rapidvideo', 'streamango']
list_quality = ['default']

checklinks = config.get_setting('checklinks', 'animeleggendari')
checklinks_number = config.get_setting('checklinks_number', 'animeleggendari')


@support.menu
def mainlist(item):
    log()

    itemlist = []
    menu(itemlist, 'Anime Leggendari', 'peliculas', host + '/category/anime-leggendari/')
    menu(itemlist, 'Anime ITA', 'peliculas', host + '/category/anime-ita/')
    menu(itemlist, 'Anime SUB-ITA', 'peliculas', host + '/category/anime-sub-ita/')
    menu(itemlist, 'Anime Conclusi', 'peliculas', host + '/category/serie-anime-concluse/')
    menu(itemlist, 'Anime in Corso', 'peliculas', host + '/category/anime-in-corso/')
    menu(itemlist, 'Genere', 'genres', host)
    menu(itemlist, 'Cerca...', 'search')

    autoplay.init(item.channel, list_servers, list_quality)
    autoplay.show_option(item.channel, itemlist)
    support.channel_config(item, itemlist)
    anime = [
        ('Leggendari', ['/category/anime-leggendari/', 'peliculas']),
        ('ITA', ['/category/anime-ita/', 'peliculas']),
        ('SUB-ITA', ['/category/anime-sub-ita/', 'peliculas']),
        ('Conclusi', ['/category/serie-anime-concluse/', 'peliculas']),
        ('in Corso', ['/category/serie-anime-in-corso/', 'last_ep']),
        ('Genere', ['', 'genres'])
    ]

    return locals()

    return itemlist

def search(item, texto):
    log(texto)
    support.log(texto)

    item.url = host + "/?s=" + texto
    try:
        return peliculas(item)

    # Continue the search on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
            support.logger.error("%s" % line)
        return []
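The try/except shape used by `search` (log `sys.exc_info()` and fall back to an empty list so the global search keeps going) in isolation; `safe_call` is a made-up wrapper and `print` stands in for `logger.error`:

```python
import sys

def safe_call(fn):
    # Log the exception info and return an empty result list
    # instead of letting the error escape
    try:
        return fn()
    except Exception:
        for line in sys.exc_info():
            print("%s" % line)
        return []

print(safe_call(lambda: 1 // 0) == [])  # → True
```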


def last_ep(item):
    log('ANIME PER TUTTI')
    return support.scrape(item, '<a href="([^"]+)">([^<]+)<', ['url', 'title'], patron_block=r'<ul class="mh-tab-content-posts">(.*?)<\/ul>', action='findvideos')


def newest(categoria):
    log('ANIME PER TUTTI')
    log(categoria)
    itemlist = []
    item = Item()
    try:
        if categoria == "anime":
            item.url = host
            item.action = "last_ep"
            itemlist = last_ep(item)

            if itemlist[-1].action == "last_ep":
                itemlist.pop()
    # Continue the search on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("{0}".format(line))
        return []

    return itemlist


@support.scrape
def genres(item):
    itemlist = support.scrape(item, '<a href="([^"]+)">([^<]+)<', ['url', 'title'], action='peliculas', patron_block=r'Generi.*?<ul.*?>(.*?)<\/ul>', blacklist=['Contattaci', 'Privacy Policy', 'DMCA'])
    return support.thumb(itemlist)
    blacklist = ['Contattaci', 'Privacy Policy', 'DMCA']
    patronMenu = r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)<'
    patronBlock = r'Generi</a>\s*<ul[^>]+>(?P<block>.*?)<\/ul>'
    action = 'peliculas'
    return locals()


def peliculas(item):
    log()
    itemlist = []


@support.scrape
def peliculas(item):
    anime = True
    blacklist = ['top 10 anime da vedere']
    matches, data = support.match(item, r'<a class="[^"]+" href="([^"]+)" title="([^"]+)"><img[^s]+src="([^"]+)"[^>]+')
    if item.url != host: patronBlock = r'<div id="main-content(?P<block>.*?)<aside'
    patron = r'<figure class="(?:mh-carousel-thumb|mh-posts-grid-thumb)"> <a class="[^"]+" href="(?P<url>[^"]+)" title="(?P<title>.*?)(?: \((?P<year>\d+)\))? (?:(?P<lang>SUB ITA|ITA))(?: (?P<title2>[Mm][Oo][Vv][Ii][Ee]))?[^"]*"><img[^s]+src="(?P<thumb>[^"]+)"[^>]+'
    def itemHook(item):
        if 'movie' in item.title.lower():
            item.title = support.re.sub(' - [Mm][Oo][Vv][Ii][Ee]|[Mm][Oo][Vv][Ii][Ee]', '', item.title)
            item.title += support.typo('Movie', '_ () bold')
            item.contentType = 'movie'
            item.action = 'findvideos'
        return item
    patronNext = r'<a class="next page-numbers" href="([^"]+)">'
    action = 'episodios'
    return locals()

    for url, title, thumb in matches:
        title = scrapertoolsV2.decodeHtmlentities(title.strip()).replace("streaming", "")
        lang = scrapertoolsV2.find_single_match(title, r"((?:SUB ITA|ITA))")
        videoType = ''
        if 'movie' in title.lower():
            videoType = ' - (MOVIE)'
        if 'ova' in title.lower():
            videoType = ' - (OAV)'

        cleantitle = title.replace(lang, "").replace('(Streaming & Download)', '').replace('( Streaming & Download )', '').replace('OAV', '').replace('OVA', '').replace('MOVIE', '').strip()

        if not videoType:
            contentType = "tvshow"
            action = "episodios"
        else:
            contentType = "movie"
            action = "findvideos"

        if not title.lower() in blacklist:
            itemlist.append(
                Item(channel=item.channel,
                     action=action,
                     contentType=contentType,
                     title=support.typo(cleantitle + videoType, 'bold') + support.typo(lang, '_ [] color kod'),
                     fulltitle=cleantitle,
                     show=cleantitle,
                     url=url,
                     thumbnail=thumb))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    autorenumber.renumber(itemlist)
    support.nextPage(itemlist, item, data, r'<a class="next page-numbers" href="([^"]+)">')

    return itemlist


@support.scrape
def episodios(item):
    log()
    itemlist = []
    url = item.url
    anime = True
    patronBlock = r'(?:<p style="text-align: left;">|<div class="pagination clearfix">\s*)(?P<block>.*?)</span></a></div>'
    patron = r'(?:<a href="(?P<url>[^"]+)"[^>]+>)?<span class="pagelink">(?P<episode>\d+)</span>'
    def itemHook(item):
        if not item.url:
            item.url = url
        item.title = support.typo('Episodio ', 'bold') + item.title
        return item
    return locals()

    data = httptools.downloadpage(item.url).data
    block = scrapertoolsV2.find_single_match(data, r'(?:<p style="text-align: left;">|<div class="pagination clearfix">\s*)(.*?)</span></a></div>')

    itemlist.append(
        Item(channel=item.channel,
             action='findvideos',
             contentType='episode',
             title=support.typo('Episodio 1', 'bold'),
             fulltitle=item.title,
             url=item.url,
             thumbnail=item.thumbnail))

    if block:
        matches = re.compile(r'<a href="([^"]+)".*?><span class="pagelink">(\d+)</span></a>', re.DOTALL).findall(data)
        for url, number in matches:
            itemlist.append(
                Item(channel=item.channel,
                     action='findvideos',
                     contentType='episode',
                     title=support.typo('Episodio ' + number, 'bold'),
                     fulltitle=item.title,
                     url=url,
                     thumbnail=item.thumbnail))

    autorenumber.renumber(itemlist, item)
    support.videolibrary(itemlist, item)
    return itemlist
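The `pagelink` pagination regex above can be exercised against a minimal HTML fragment (the fragment is made up for illustration):

```python
import re

html = '<a href="/ep2" class="x"><span class="pagelink">2</span></a>'
matches = re.compile(r'<a href="([^"]+)".*?><span class="pagelink">(\d+)</span></a>', re.DOTALL).findall(html)
print(matches)  # → [('/ep2', '2')]
```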


def findvideos(item):
    log()
    support.log()
    data = ''
    matches = support.match(item, 'str="([^"]+)"')[0]
    if matches:
        for match in matches:
            data += str(jsfunctions.unescape(re.sub('@|g', '%', match)))
            data += str(jsfunctions.unescape(support.re.sub('@|g', '%', match)))
            data += str(match)
        log('DATA', data)
        if 'animepertutti' in data:
            log('ANIMEPERTUTTI!')

    else:
        data = ''

    itemlist = support.server(item, data)

    if checklinks:
        itemlist = servertools.check_list_links(itemlist, checklinks_number)

    # itemlist = filtertools.get_links(itemlist, item, list_language)
    autoplay.start(itemlist, item)

    return itemlist
    return support.server(item, data)
@@ -8,6 +8,14 @@
    "banner": "animesaturn.png",
    "categories": ["anime"],
    "settings": [
        {
            "id": "modo_grafico",
            "type": "bool",
            "label": "Cerca informazioni extra",
            "default": true,
            "enabled": true,
            "visible": true
        },
        {
            "id": "channel_host",
            "type": "text",
@@ -57,15 +65,6 @@
            "visible": "eq(-1,true)",
            "lvalues": [ "1", "3", "5", "10" ]
        },
        {
            "id": "filter_languages",
            "type": "list",
            "label": "Mostra link in lingua...",
            "default": 0,
            "enabled": true,
            "visible": true,
            "lvalues": ["Non filtrare", "IT"]
        },
        {
            "id": "autorenumber",
            "type": "bool",
@@ -3,377 +3,117 @@
# Channel for AnimeSaturn
# Thanks to 4l3x87
# ----------------------------------------------------------
import re

import urlparse

import channelselector
from core import httptools, tmdb, support, scrapertools, jsontools
from core.item import Item
from core.support import log
from platformcode import logger, config
from specials import autoplay, autorenumber
from core import support

__channel__ = "animesaturn"
host = config.get_setting("channel_host", __channel__)
headers = [['Referer', host]]
host = support.config.get_setting("channel_host", __channel__)
headers = {'X-Requested-With': 'XMLHttpRequest'}

IDIOMAS = {'Italiano': 'IT'}
IDIOMAS = {'Italiano': 'ITA'}
list_language = IDIOMAS.values()
list_servers = ['openload', 'fembed', 'animeworld']
list_quality = ['default', '480p', '720p', '1080p']


@support.menu
def mainlist(item):
    log()
    itemlist = []
    support.menu(itemlist, 'Novità bold', 'ultimiep', "%s/fetch_pages.php?request=episodes" % host, 'tvshow')
    support.menu(itemlist, 'Anime bold', 'lista_anime', "%s/animelist?load_all=1" % host)
    support.menu(itemlist, 'Archivio A-Z submenu', 'list_az', '%s/animelist?load_all=1' % host, args=['tvshow', 'alfabetico'])
    support.menu(itemlist, 'Cerca', 'search', host)
    support.aplay(item, itemlist, list_servers, list_quality)
    support.channel_config(item, itemlist)

    anime = ['/animelist?load_all=1',
             ('Più Votati', ['/toplist', 'menu', 'top']),
             ('In Corso', ['/animeincorso', 'peliculas', 'incorso']),
             ('Ultimi Episodi', ['/fetch_pages.php?request=episodes', 'peliculas', 'updated'])]

    return itemlist
    return locals()


# ----------------------------------------------------------------------------------------------------------------
def cleantitle(scrapedtitle):
    scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.strip())
    scrapedtitle = scrapedtitle.replace('[HD]', '').replace('’', '\'').replace('×', 'x').replace('"', "'")
    year = scrapertools.find_single_match(scrapedtitle, r'\((\d{4})\)')
    if year:
        scrapedtitle = scrapedtitle.replace('(' + year + ')', '')

    return scrapedtitle.strip()
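`cleantitle`'s year handling in isolation (the sample title is made up; `strip_year` is a sketch, not the channel function):

```python
import re

def strip_year(scrapedtitle):
    # Find a (YYYY) marker and remove it, as cleantitle does
    scrapedtitle = scrapedtitle.replace('[HD]', '')
    year = re.search(r'\((\d{4})\)', scrapedtitle)
    if year:
        scrapedtitle = scrapedtitle.replace('(' + year.group(1) + ')', '')
    return scrapedtitle.strip()

print(strip_year('One Piece [HD] (1999)'))  # → One Piece
```

Stripping the year before the TMDB lookup avoids polluting the search string while the captured year can still be passed along as metadata.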


@support.scrape
def search(item, texto):
    search = texto
    item.contentType = 'tvshow'
    patron = r'href="(?P<url>[^"]+)"[^>]+>[^>]+>(?P<title>[^<|(]+)(?:(?P<lang>\(([^\)]+)\)))?<|\)'
    action = 'check'
    return locals()


# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def lista_anime(item):
    log()
    itemlist = []

    PERPAGE = 15

    p = 1
    if '{}' in item.url:
        item.url, p = item.url.split('{}')
        p = int(p)

    if '||' in item.url:
        series = item.url.split('\n\n')
        matches = []
        for i, serie in enumerate(series):
            matches.append(serie.split('||'))
    else:
        # Extract the contents
        patron = r'<a href="([^"]+)"[^>]*?>[^>]*?>(.+?)<'
        matches = support.match(item, patron, headers=headers)[0]

    scrapedplot = ""
    scrapedthumbnail = ""
    for i, (scrapedurl, scrapedtitle) in enumerate(matches):
        if (p - 1) * PERPAGE > i: continue
        if i >= p * PERPAGE: break
        title = cleantitle(scrapedtitle).replace('(ita)', '(ITA)')
        movie = False
        showtitle = title
        if '(ITA)' in title:
            title = title.replace('(ITA)', '').strip()
            showtitle = title
        else:
            title += ' ' + support.typo('Sub-ITA', '_ [] color kod')

        infoLabels = {}
        if 'Akira' in title:
            movie = True
            infoLabels['year'] = 1988

        if 'Dragon Ball Super Movie' in title:
            movie = True
            infoLabels['year'] = 2019

        itemlist.append(
            Item(channel=item.channel,
                 extra=item.extra,
                 action="episodios" if movie == False else 'findvideos',
                 title=title,
                 url=scrapedurl,
                 thumbnail=scrapedthumbnail,
                 fulltitle=showtitle,
                 show=showtitle,
                 contentTitle=showtitle,
                 plot=scrapedplot,
                 contentType='episode' if movie == False else 'movie',
                 originalUrl=scrapedurl,
                 infoLabels=infoLabels,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    autorenumber.renumber(itemlist)

    # Pagination
    if len(matches) >= p * PERPAGE:
        support.nextPage(itemlist, item, next_page=(item.url + '{}' + str(p + 1)))

    return itemlist


# ================================================================================================================


# ----------------------------------------------------------------------------------------------------------------
def episodios(item):
    log()
    itemlist = []

    data = httptools.downloadpage(item.url, headers=headers, ignore_response_code=True).data
    anime_id = scrapertools.find_single_match(data, r'\?anime_id=(\d+)')
    # movie or series
    movie = scrapertools.find_single_match(data, r'Episodi:</b>\s(\d*)\sMovie')

    data = httptools.downloadpage(
        host + "/loading_anime?anime_id=" + anime_id,
        headers={
            'X-Requested-With': 'XMLHttpRequest'
        }).data

    patron = r'<td style="[^"]+"><b><strong" style="[^"]+">(.+?)</b></strong></td>\s*'
    patron += r'<td style="[^"]+"><a href="([^"]+)"'
    matches = scrapertools.find_multiple_matches(data, patron)

    for scrapedtitle, scrapedurl in matches:
        scrapedtitle = cleantitle(scrapedtitle)
        scrapedtitle = re.sub(r'<[^>]*?>', '', scrapedtitle)
        scrapedtitle = '[B]' + scrapedtitle + '[/B]'

        itemlist.append(
            Item(
                channel=item.channel,
                action="findvideos",
                contentType="episode",
                title=scrapedtitle,
                url=urlparse.urljoin(host, scrapedurl),
                fulltitle=scrapedtitle,
                show=scrapedtitle,
                plot=item.plot,
                fanart=item.thumbnail,
                thumbnail=item.thumbnail))

    if ((len(itemlist) == 1 and 'Movie' in itemlist[0].title) or movie) and item.contentType != 'movie':
        item.url = itemlist[0].url
        item.contentType = 'movie'
        return findvideos(item)

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    autorenumber.renumber(itemlist, item)
    support.videolibrary(itemlist, item, 'bold color kod')

    return itemlist


# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def findvideos(item):
    log()
    originalItem = item

    if item.contentType == 'movie':
        episodes = episodios(item)
        if len(episodes) > 0:
            item.url = episodes[0].url

    itemlist = []
    data = httptools.downloadpage(item.url, headers=headers, ignore_response_code=True).data
    data = re.sub(r'\n|\t|\s+', ' ', data)
    patron = r'<a href="([^"]+)"><div class="downloadestreaming">'
    url = scrapertools.find_single_match(data, patron)
    data = httptools.downloadpage(url, headers=headers, ignore_response_code=True).data
    data = re.sub(r'\n|\t|\s+', ' ', data)
    itemlist = support.server(item, data=data)

    return itemlist
|
||||
|
||||
|
||||
# ================================================================================================================
|
||||
|
||||
|
||||
# ----------------------------------------------------------------------------------------------------------------
|
||||
|
||||
def ultimiep(item):
    log()
    itemlist = []

    p = 1
    if '{}' in item.url:
        item.url, p = item.url.split('{}')
        p = int(p)

    post = "page=%s" % p if p > 1 else None

    data = httptools.downloadpage(
        item.url, post=post, headers={
            'X-Requested-With': 'XMLHttpRequest'
        }).data

    patron = r"""<a href='[^']+'><div class="locandina"><img alt="[^"]+" src="([^"]+)" title="[^"]+" class="grandezza"></div></a>\s*"""
    patron += r"""<a href='([^']+)'><div class="testo">(.+?)</div></a>\s*"""
    patron += r"""<a href='[^']+'><div class="testo2">(.+?)</div></a>"""
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedthumbnail, scrapedurl, scrapedtitle1, scrapedtitle2 in matches:
        scrapedtitle1 = cleantitle(scrapedtitle1)
        scrapedtitle2 = cleantitle(scrapedtitle2)
        scrapedtitle = scrapedtitle1 + ' - ' + scrapedtitle2

        title = scrapedtitle
        showtitle = scrapedtitle
        if '(ITA)' in title:
            title = title.replace('(ITA)', '').strip()
            showtitle = title
        else:
            title += ' ' + support.typo('Sub-ITA', '_ [] color kod')

        itemlist.append(
            Item(channel=item.channel,
                 contentType="episode",
                 action="findvideos",
                 title=title,
                 url=scrapedurl,
                 fulltitle=scrapedtitle1,
                 show=showtitle,
                 thumbnail=scrapedthumbnail))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    # Pagination
    patronvideos = r'data-page="(\d+)" title="Next">Pagina Successiva'
    next_page = scrapertools.find_single_match(data, patronvideos)
    if next_page:
        support.nextPage(itemlist, item, next_page=(item.url + '{}' + next_page))

    return itemlist


# ================================================================================================================


# ----------------------------------------------------------------------------------------------------------------
def newest(categoria):
    log(categoria)
    support.log()
    itemlist = []
    item = Item()
    item.url = host
    item.extra = ''
    item = support.Item()
    try:
        if categoria == "anime":
            item.url = "%s/fetch_pages?request=episodios" % host
            item.action = "ultimiep"
            itemlist = ultimiep(item)

            if itemlist[-1].action == "ultimiep":
                itemlist.pop()

            item.url = host + '/fetch_pages.php?request=episodes'
            item.args = "updated"
            return peliculas(item)
    # Continue the search on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("{0}".format(line))
            support.logger.error("{0}".format(line))
        return []

    return itemlist


@support.scrape
def menu(item):
    patronMenu = r'u>(?P<title>[^<]+)<u>(?P<url>.*?)</div> </div>'
    action = 'peliculas'
    return locals()


# ================================================================================================================


# ----------------------------------------------------------------------------------------------------------------
def search_anime(item, texto):
    log(texto)


@support.scrape
def peliculas(item):
    anime = True
    if item.args == 'updated':
        post = "page=" + str(item.page if item.page else 1) if item.page > 1 else None
        page, data = support.match(item, r'data-page="(\d+)" title="Next">', post=post, headers=headers)
        patron = r'<img alt="[^"]+" src="(?P<thumb>[^"]+)" [^>]+></div></a>\s*<a href="(?P<url>[^"]+)"><div class="testo">(?P<title>[^\(<]+)(?:(?P<lang>\(([^\)]+)\)))?</div></a>\s*<a href="[^"]+"><div class="testo2">[^\d]+(?P<episode>\d+)</div></a>'
        if page: nextpage = page
        action = 'findvideos'
    elif item.args == 'top':
        data = item.url
        patron = r'<a href="(?P<url>[^"]+)">[^>]+>(?P<title>[^<\(]+)(?:\((?P<year>[^\)]+)\))?</div></a><div class="numero">(?P<title2>[^<]+)</div>.*?src="(?P<thumb>[^"]+)"'
        action = 'check'
    else:
        pagination = ''
        if item.args == 'incorso': patron = r'"slider_title" href="(?P<url>[^"]+)"><img src="(?P<thumb>[^"]+)"[^>]+>(?P<title>[^\(<]+)(?:\((?P<year>\d+)\))?</a>'
        else: patron = r'href="(?P<url>[^"]+)"[^>]+>[^>]+>(?P<title>[^<|(]+)(?:(?P<lang>\(([^\)]+)\)))?<|\)'
        action = 'check'
    return locals()


def check(item):
    movie, data = support.match(item, r'Episodi:</b> (\d*) Movie')
    anime_id = support.match(data, r'anime_id=(\d+)')[0][0]
    item.url = host + "/loading_anime?anime_id=" + anime_id
    support.log('MOVIE= ', movie)
    if movie:
        item.contentType = 'movie'
        episodes = episodios(item)
        if len(episodes) > 0: item.url = episodes[0].url
        return findvideos(item)
    else:
        return episodios(item)


@support.scrape
def episodios(item):
    if item.contentType != 'movie': anime = True
    patron = r'<strong" style="[^"]+">(?P<title>[^<]+)</b></strong></td>\s*<td style="[^"]+"><a href="(?P<url>[^"]+)"'
    return locals()
def findvideos(item):
    support.log(item)
    itemlist = []

    data = httptools.downloadpage(host + "/index.php?search=1&key=%s" % texto).data
    jsondata = jsontools.load(data)

    for title in jsondata:
        data = str(httptools.downloadpage("%s/templates/header?check=1" % host, post="typeahead=%s" % title).data)

        if 'Anime non esistente' in data:
            continue
        else:
            title = title.replace('(ita)', '(ITA)')
            showtitle = title
            if '(ITA)' in title:
                title = title.replace('(ITA)', '').strip()
                showtitle = title
            else:
                title += ' ' + support.typo('Sub-ITA', '_ [] color kod')

            url = "%s/anime/%s" % (host, data)

            itemlist.append(
                Item(
                    channel=item.channel,
                    contentType="episode",
                    action="episodios",
                    title=title,
                    url=url,
                    fulltitle=title,
                    show=showtitle,
                    thumbnail=""))

    return itemlist
    url = support.match(item, r'<a href="([^"]+)"><div class="downloadestreaming">', headers=headers)[0]
    if url: item.url = url[0]
    return support.server(item)


# ================================================================================================================


# ----------------------------------------------------------------------------------------------------------------
def search(item, texto):
    log(texto)
    itemlist = []

    try:
        return search_anime(item, texto)

    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


# ================================================================================================================


# ----------------------------------------------------------------------------------------------------------------


def list_az(item):
    log()
    itemlist = []

    alphabet = dict()

    # Entries
    patron = r'<a href="([^"]+)"[^>]*?>[^>]*?>(.+?)<'
    matches = support.match(item, patron, headers=headers)[0]

    for i, (scrapedurl, scrapedtitle) in enumerate(matches):
        letter = scrapedtitle[0].upper()
        if letter not in alphabet:
            alphabet[letter] = []
        alphabet[letter].append(scrapedurl + '||' + scrapedtitle)

    for letter in sorted(alphabet):
        itemlist.append(
            Item(channel=item.channel,
                 action="lista_anime",
                 url='\n\n'.join(alphabet[letter]),
                 title=letter,
                 fulltitle=letter))

    return itemlist


# ================================================================================================================

@@ -31,6 +31,38 @@
        "default": true,
        "enabled": true,
        "visible": true
    }, {
        "id": "checklinks",
        "type": "bool",
        "label": "Verifica se i link esistono",
        "default": false,
        "enabled": true,
        "visible": true
    },
    {
        "id": "checklinks_number",
        "type": "list",
        "label": "Numero di link da verificare",
        "default": 1,
        "enabled": true,
        "visible": "eq(-1,true)",
        "lvalues": [ "1", "3", "5", "10" ]
    },
    {
        "id": "autorenumber",
        "type": "bool",
        "label": "@70712",
        "default": false,
        "enabled": true,
        "visible": true
    },
    {
        "id": "autorenumber_mode",
        "type": "bool",
        "label": "@70688",
        "default": false,
        "enabled": true,
        "visible": "eq(-1,true)"
    }
]
}

@@ -1,65 +1,37 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Thanks to the Icarus crew
# Channel for AnimeSubIta
# ------------------------------------------------------------

import re
import urllib
import urlparse

from core import httptools, scrapertools, tmdb, support
from core.item import Item
from platformcode import logger, config
from core import support

__channel__ = "animesubita"
host = config.get_channel_url(__channel__)
PERPAGE = 20
host = support.config.get_channel_url(__channel__)
headers = {'Upgrade-Insecure-Requests': '1', 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0'}

# ----------------------------------------------------------------------------------------------------------------
list_servers = ['directo']
list_quality = ['default']


@support.menu
def mainlist(item):
    logger.info()
    itemlist = [Item(channel=item.channel,
                     action="lista_anime_completa",
                     title=support.color("Lista Anime", "azure"),
                     url="%s/lista-anime/" % host,
                     thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
                Item(channel=item.channel,
                     action="ultimiep",
                     title=support.color("Ultimi Episodi", "azure"),
                     url="%s/category/ultimi-episodi/" % host,
                     thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
                Item(channel=item.channel,
                     action="lista_anime",
                     title=support.color("Anime in corso", "azure"),
                     url="%s/category/anime-in-corso/" % host,
                     thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
                Item(channel=item.channel,
                     action="categorie",
                     title=support.color("Categorie", "azure"),
                     url="%s/generi/" % host,
                     thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
                Item(channel=item.channel,
                     action="search",
                     title=support.color("Cerca anime ...", "yellow"),
                     extra="anime",
                     thumbnail="http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search")
                ]
    anime = ['/lista-anime/',
             ('Ultimi Episodi', ['/category/ultimi-episodi/', 'peliculas', 'updated']),
             ('in Corso', ['/category/anime-in-corso/', 'peliculas', 'alt']),
             ('Generi', ['/generi/', 'genres', 'alt'])]
    return locals()

    return itemlist

# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def newest(categoria):
    logger.info()
    support.log(categoria)
    itemlist = []
    item = Item()
    item = support.Item()
    try:
        if categoria == "anime":
            item.url = host
            item.action = "ultimiep"
            itemlist = ultimiep(item)
            item.args = "updated"
            itemlist = peliculas(item)

            if itemlist[-1].action == "ultimiep":
                itemlist.pop()
@@ -67,277 +39,92 @@ def newest(categoria):
    except:
        import sys
        for line in sys.exc_info():
            logger.error("{0}".format(line))
            support.logger.error("{0}".format(line))
        return []

    return itemlist

# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------

def search(item, texto):
    logger.info()
    support.log(texto)
    item.url = host + "/?s=" + texto
    item.args = 'alt'
    try:
        return lista_anime(item)
        return peliculas(item)
    # Continue the search on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
            support.logger.error("%s" % line)
        return []

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def categorie(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data
    patron = r'<li><a title="[^"]+" href="([^"]+)">([^<]+)</a>'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedurl, scrapedtitle in matches:
        itemlist.append(
            Item(channel=item.channel,
                 action="lista_anime",
                 title=scrapedtitle.replace('Anime', '').strip(),
                 text_color="azure",
                 url=scrapedurl,
                 thumbnail=item.thumbnail,
                 folder=True))

    return itemlist

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def ultimiep(item):
    logger.info("ultimiep")
    itemlist = lista_anime(item, False, False)

    for itm in itemlist:
        title = scrapertools.decodeHtmlentities(itm.title)
        # Clean up the title
        title = title.replace("Streaming", "").replace("&", "")
        title = title.replace("Download", "")
        title = title.replace("Sub Ita", "").strip()
        eptype = scrapertools.find_single_match(title, "((?:Episodio?|OAV))")
        cleantitle = re.sub(r'%s\s*\d*\s*(?:\(\d+\)|)' % eptype, '', title).strip()
        # Build the URL
        url = re.sub(r'%s-?\d*-' % eptype.lower(), '', itm.url)
        if "-streaming" not in url:
            url = url.replace("sub-ita", "sub-ita-streaming")

        epnumber = ""
        if 'episodio' in eptype.lower():
            epnumber = scrapertools.find_single_match(title.lower(), r'episodio?\s*(\d+)')
            eptype += ":? " + epnumber

        extra = "<tr>\s*<td[^>]+><strong>(?:[^>]+>|)%s(?:[^>]+>[^>]+>|[^<]*|[^>]+>)</strong>" % eptype
        itm.title = support.color(title, 'azure').strip()
        itm.action = "findvideos"
        itm.url = url
        itm.fulltitle = cleantitle
        itm.extra = extra
        itm.show = re.sub(r'Episodio\s*', '', title)
        itm.thumbnail = item.thumbnail

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    return itemlist

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def lista_anime(item, nextpage=True, show_lang=True):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data
    blocco = scrapertools.find_single_match(data, r'<div class="post-list group">(.*?)</nav><!--/.pagination-->')
    # patron = r'<a href="([^"]+)" title="([^"]+)">\s*<img[^s]+src="([^"]+)"[^>]+>'  # Pattern with thumbnail; Kodi does not download images from the site
    patron = r'<a href="([^"]+)" title="([^"]+)">'
    matches = re.compile(patron, re.DOTALL).findall(blocco)

    for scrapedurl, scrapedtitle in matches:
        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
        scrapedtitle = re.sub(r'\s+', ' ', scrapedtitle)
        # Clean up the title
        scrapedtitle = scrapedtitle.replace("Streaming", "").replace("&", "")
        scrapedtitle = scrapedtitle.replace("Download", "")
        lang = scrapertools.find_single_match(scrapedtitle, r"([Ss][Uu][Bb]\s*[Ii][Tt][Aa])")
        scrapedtitle = scrapedtitle.replace("Sub Ita", "").strip()
        eptype = scrapertools.find_single_match(scrapedtitle, "((?:Episodio?|OAV))")
        cleantitle = re.sub(r'%s\s*\d*\s*(?:\(\d+\)|)' % eptype, '', scrapedtitle)

@support.scrape
def genres(item):
    blacklist = ['Anime In Corso', 'Ultimi Episodi']
    patronMenu = r'<li><a title="[^"]+" href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>'
    action = 'peliculas'
    return locals()


        cleantitle = cleantitle.replace(lang, "").strip()

@support.scrape
def peliculas(item):
    anime = True
    if item.args == 'updated':
        patron = r'<div class="post-thumbnail">\s*<a href="(?P<url>[^"]+)" title="(?P<title>.*?)\s*(?P<episode>Episodio \d+)[^"]+"[^>]*>\s*<img[^src]+src="(?P<thumb>[^"]+)"'
        patronNext = r'<link rel="next" href="([^"]+)"\s*/>'
        action = 'findvideos'
    elif item.args == 'alt':
        # debug = True
        patron = r'<div class="post-thumbnail">\s*<a href="(?P<url>[^"]+)" title="(?P<title>.*?)(?: [Oo][Aa][Vv])?(?:\s*(?P<lang>[Ss][Uu][Bb].[Ii][Tt][Aa]))[^"]+">\s*<img[^src]+src="(?P<thumb>[^"]+)"'
        patronNext = r'<link rel="next" href="([^"]+)"\s*/>'
        action = 'episodios'
    else:
        pagination = ''
        patronBlock = r'<ul class="lcp_catlist"[^>]+>(?P<block>.*?)</ul>'
        patron = r'<a href="(?P<url>[^"]+)"[^>]+>(?P<title>.*?)(?: [Oo][Aa][Vv])?(?:\s*(?P<lang>[Ss][Uu][Bb].[Ii][Tt][Aa])[^<]+)?</a>'
        action = 'episodios'
    return locals()

        itemlist.append(
            Item(channel=item.channel,
                 action="episodi",
                 contentType="tvshow" if 'oav' not in scrapedtitle.lower() else "movie",
                 title=scrapedtitle.replace(lang, "(%s)" % support.color(lang, "red") if show_lang else "").strip(),
                 fulltitle=cleantitle,
                 url=scrapedurl,
                 show=cleantitle,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

@support.scrape
def episodios(item):
    anime = True
    patron = r'<td style="[^"]*?">\s*.*?<strong>(?P<episode>[^<]+)</strong>\s*</td>\s*<td[^>]+>\s*<a href="(?P<url>[^"]+)"[^>]+>\s*<img src="(?P<thumb>[^"]+?)"[^>]+>'
    return locals()

    if nextpage:
        patronvideos = r'<link rel="next" href="([^"]+)"\s*/>'
        matches = re.compile(patronvideos, re.DOTALL).findall(data)

        if len(matches) > 0:
            scrapedurl = matches[0]
            itemlist.append(
                Item(channel=item.channel,
                     action="lista_anime",
                     title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
                     url=scrapedurl,
                     thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png",
                     folder=True))

    return itemlist

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def lista_anime_completa(item):
    logger.info()
    itemlist = []

    p = 1
    if '{}' in item.url:
        item.url, p = item.url.split('{}')
        p = int(p)

    data = httptools.downloadpage(item.url).data
    blocco = scrapertools.find_single_match(data, r'<ul class="lcp_catlist"[^>]+>(.*?)</ul>')
    patron = r'<a href="([^"]+)"[^>]+>([^<]+)</a>'
    matches = re.compile(patron, re.DOTALL).findall(blocco)

    for i, (scrapedurl, scrapedtitle) in enumerate(matches):
        if (p - 1) * PERPAGE > i: continue
        if i >= p * PERPAGE: break

        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.strip())
        cleantitle = scrapedtitle.replace("Sub Ita Streaming", "").replace("Ita Streaming", "")

        itemlist.append(
            Item(channel=item.channel,
                 action="episodi",
                 contentType="tvshow" if 'oav' not in scrapedtitle.lower() else "movie",
                 title=support.color(scrapedtitle, 'azure'),
                 fulltitle=cleantitle,
                 show=cleantitle,
                 url=scrapedurl,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    if len(matches) >= p * PERPAGE:
        scrapedurl = item.url + '{}' + str(p + 1)
        itemlist.append(
            Item(channel=item.channel,
                 extra=item.extra,
                 action="lista_anime_completa",
                 title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
                 url=scrapedurl,
                 thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png",
                 folder=True))

    return itemlist

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def episodi(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data

    patron = r'<td style="[^"]*?">\s*.*?<strong>(.*?)</strong>.*?\s*</td>\s*<td style="[^"]*?">\s*<a href="([^"]+?)"[^>]+>\s*<img.*?src="([^"]+?)".*?/>\s*</a>\s*</td>'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for scrapedtitle, scrapedurl, scrapedimg in matches:
        if 'nodownload' in scrapedimg or 'nostreaming' in scrapedimg:
            continue
        if 'vvvvid' in scrapedurl.lower():
            itemlist.append(Item(title='I Video VVVVID Non sono supportati'))
            continue

        scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
        scrapedtitle = re.sub(r'<[^>]*?>', '', scrapedtitle)
        scrapedtitle = '[COLOR azure][B]' + scrapedtitle + '[/B][/COLOR]'
        itemlist.append(
            Item(channel=item.channel,
                 action="findvideos",
                 contentType="episode",
                 title=scrapedtitle,
                 url=urlparse.urljoin(host, scrapedurl),
                 fulltitle=item.title,
                 show=scrapedtitle,
                 plot=item.plot,
                 fanart=item.thumbnail,
                 thumbnail=item.thumbnail))

    # Service entries
    if config.get_videolibrary_support() and len(itemlist) != 0:
        itemlist.append(
            Item(channel=item.channel,
                 title="[COLOR lightblue]%s[/COLOR]" % config.get_localized_string(30161),
                 url=item.url,
                 action="add_serie_to_library",
                 extra="episodios",
                 show=item.show))

    return itemlist

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def findvideos(item):
    logger.info()
    support.log(item)
    itemlist = []

    headers = {'Upgrade-Insecure-Requests': '1',
               'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0'}
    if item.args == 'updated':
        ep = support.match(item.fulltitle, r'(Episodio\s*\d+)')[0][0]
        item.url = support.re.sub(r'episodio-\d+-|oav-\d+-', '', item.url)
        if 'streaming' not in item.url: item.url = item.url.replace('sub-ita', 'sub-ita-streaming')
        item.url = support.match(item, r'<a href="([^"]+)"[^>]+>', ep + '(.*?)</tr>', )[0][0]

    if item.extra:
        data = httptools.downloadpage(item.url, headers=headers).data
        blocco = scrapertools.find_single_match(data, r'%s(.*?)</tr>' % item.extra)
        item.url = scrapertools.find_single_match(blocco, r'<a href="([^"]+)"[^>]+>')

    patron = r'http:\/\/link[^a]+animesubita[^o]+org\/[^\/]+\/.*?(episodio\d*)[^p]+php(\?.*)'
    for phpfile, scrapedurl in re.findall(patron, item.url, re.DOTALL):
        url = "%s/%s.php%s" % (host, phpfile, scrapedurl)
    urls = support.match(item.url, r'(episodio\d*.php.*)')[0]
    for url in urls:
        url = host + '/' + url
        headers['Referer'] = url
        data = httptools.downloadpage(url, headers=headers).data
        # ------------------------------------------------
        data = support.match(item, headers=headers, url=url)[1]
        cookies = ""
        matches = re.compile('(.%s.*?)\n' % host.replace("http://", "").replace("www.", ""), re.DOTALL).findall(config.get_cookie_data())
        matches = support.re.compile('(.%s.*?)\n' % host.replace("http://", "").replace("www.", ""), support.re.DOTALL).findall(support.config.get_cookie_data())
        for cookie in matches:
            name = cookie.split('\t')[5]
            value = cookie.split('\t')[6]
            cookies += name + "=" + value + ";"
        headers['Cookie'] = cookies[:-1]
        # ------------------------------------------------
        scrapedurl = scrapertools.find_single_match(data, r'<source src="([^"]+)"[^>]+>')
        url = scrapedurl + '|' + urllib.urlencode(headers)
        itemlist.append(
            Item(channel=item.channel,
                 action="play",
                 text_color="azure",
                 title="[%s] %s" % (support.color("Diretto", "orange"), item.title),
                 fulltitle=item.fulltitle,
                 url=url,
                 thumbnail=item.thumbnail,
                 fanart=item.thumbnail,
                 plot=item.plot))
            cookies += cookie.split('\t')[5] + "=" + cookie.split('\t')[6] + ";"

    return itemlist
        headers['Cookie'] = cookies[:-1]

        url = support.match(data, r'<source src="([^"]+)"[^>]+>')[0][0] + '|' + support.urllib.urlencode(headers)
        itemlist.append(
            support.Item(channel=item.channel,
                         action="play",
                         title='diretto',
                         quality='',
                         url=url,
                         server='directo',
                         fulltitle=item.fulltitle,
                         show=item.show))

    return support.server(item, url, itemlist)

@@ -23,57 +23,42 @@ list_servers = ['animeworld', 'verystream', 'streamango', 'openload', 'directo']
|
||||
list_quality = ['default', '480p', '720p', '1080p']
|
||||
|
||||
|
||||
|
||||
@support.menu
|
||||
def mainlist(item):
|
||||
log()
|
||||
|
||||
itemlist =[]
|
||||
|
||||
support.menu(itemlist, 'ITA submenu bold', 'build_menu', host + '/filter?', args=["anime", 'language[]=1'])
|
||||
support.menu(itemlist, 'Sub-ITA submenu bold', 'build_menu', host + '/filter?', args=["anime", 'language[]=0'])
|
||||
support.menu(itemlist, 'Archivio A-Z submenu', 'alfabetico', host+'/az-list', args=["tvshow","a-z"])
|
||||
support.menu(itemlist, 'In corso submenu', 'video', host+'/', args=["in sala"])
|
||||
support.menu(itemlist, 'Generi submenu', 'generi', host+'/')
|
||||
support.menu(itemlist, 'Ultimi Aggiunti bold', 'video', host+'/newest', args=["anime"])
|
||||
support.menu(itemlist, 'Ultimi Episodi bold', 'video', host+'/updated', args=["novita'"])
|
||||
support.menu(itemlist, 'Cerca...', 'search')
|
||||
support.aplay(item, itemlist, list_servers, list_quality)
|
||||
support.channel_config(item, itemlist)
|
||||
return itemlist
|
||||
anime=['/filter',
|
||||
('ITA',['/filter?language%5B%5D=1&sort=2', 'build_menu', 'language[]=1']),
|
||||
('SUB-ITA',['/filter?language%5B%5D=1&sort=2', 'build_menu', 'language[]=0']),
|
||||
('In Corso', ['/ongoing', 'peliculas']),
|
||||
('Ultimi Episodi', ['/updated', 'peliculas', 'updated']),
|
||||
('Nuove Aggiunte',['/newest', 'peliculas' ]),
|
||||
('Generi',['','genres', '</i> Generi</a>'])]
|
||||
return locals()
|
||||
|
||||
# Crea menu dei generi =================================================
|
||||
# Crea menu ===================================================
|
||||
|
||||
def generi(item):
|
||||
log()
|
||||
patron_block = r'</i>\sGeneri</a>\s*<ul class="sub">(.*?)</ul>'
|
||||
patron = r'<a href="([^"]+)"\stitle="([^"]+)">'
|
||||
|
||||
return support.scrape(item, patron, ['url','title'], patron_block=patron_block, action='video')
|
||||
|
||||
|
||||
# Crea Menu Filtro ======================================================
|
||||
@support.scrape
|
||||
def genres(item):
|
||||
patronBlock = r'</i> Generi</a>(?P<block>.*?)</ul>'
|
||||
patronMenu = r'<a href="(?P<url>[^"]+)"\stitle="(?P<title>[^"]+)">'
|
||||
action = 'peliculas'
|
||||
return locals()
|
||||
|
||||
def build_menu(item):
|
||||
log()
|
||||
itemlist = []
|
||||
support.menu(itemlist, 'Tutti bold submenu', 'video', item.url+item.args[1])
|
||||
matches, data = support.match(item,r'<button class="btn btn-sm btn-default dropdown-toggle" data-toggle="dropdown"> (.*?) <span.*?>(.*?)<\/ul>',r'<form class="filters.*?>(.*?)<\/form>')
|
||||
log('ANIME DATA =' ,data)
|
||||
support.menuItem(itemlist, __channel__, 'Tutti bold', 'peliculas', item.url, 'tvshow' , args=item.args)
|
||||
matches = support.match(item,r'<button class="btn btn-sm btn-default dropdown-toggle" data-toggle="dropdown"> (.*?) <span.[^>]+>(.*?)</ul>',r'<form class="filters.*?>(.*?)</form>')[0]
|
||||
for title, html in matches:
|
||||
if title not in 'Lingua Ordine':
|
||||
support.menu(itemlist, title + ' submenu bold', 'build_sub_menu', html, args=item.args)
|
||||
log('ARGS= ', item.args[0])
|
||||
log('ARGS= ', html)
|
||||
support.menuItem(itemlist, __channel__, title + ' submenu bold', 'build_sub_menu', html, 'tvshow', args=item.args)
|
||||
return itemlist
|
||||
|
||||
# Crea SottoMenu Filtro ======================================================
|
||||
|
||||
def build_sub_menu(item):
|
||||
log()
|
||||
itemlist = []
|
||||
matches = re.compile(r'<input.*?name="([^"]+)" value="([^"]+)"\s*>[^>]+>([^<]+)<\/label>', re.DOTALL).findall(item.url)
|
||||
matches = support.re.compile(r'<input.*?name="([^"]+)" value="([^"]+)"\s*>[^>]+>([^<]+)<\/label>', re.DOTALL).findall(item.url)
|
||||
for name, value, title in matches:
|
||||
support.menu(itemlist, support.typo(title, 'bold'), 'video', host + '/filter?' + '&' + name + '=' + value + '&' + item.args[1])
|
||||
support.menuItem(itemlist, __channel__, support.typo(title, 'bold'), 'peliculas', host + '/filter?&' + name + '=' + value + '&' + item.args + '&sort=2', 'tvshow', args='sub')
|
||||
return itemlist
|
||||
|
||||
# New releases ======================================================

@@ -84,12 +69,9 @@ def newest(categoria):
    item = Item()
    try:
        if categoria == "anime":
            item.url = host + '/newest'
            item.action = "video"
            itemlist = video(item)

            if itemlist[-1].action == "video":
                itemlist.pop()
            item.url = host + '/updated'
            item.args = "updated"
            return peliculas(item)
    # Continue the search if an error occurs
    except:
        import sys

@@ -106,7 +88,7 @@ def search(item, texto):
    log(texto)
    item.url = host + '/search?keyword=' + texto
    try:
        return video(item)
        return peliculas(item)
    # Continue the search if an error occurs
    except:
        import sys

@@ -114,188 +96,50 @@ def search(item, texto):
            logger.error("%s" % line)
        return []

# Scrapers ========================================================


# A-Z list ====================================================

def alfabetico(item):
    return support.scrape(item, '<a href="([^"]+)" title="([^"]+)">', ['url', 'title'], patron_block=r'<span>.*?A alla Z.<\/span>.*?<ul>(.*?)<\/ul>', action='lista_anime')

@support.scrape
def peliculas(item):
    anime = True
    if item.args == 'updated':
        patron = r'<div class="inner">\s*<a href="(?P<url>[^"]+)" class[^>]+>\s*<img src="(?P<thumb>[^"]+)" alt?="(?P<title>[^\("]+)(?:\((?P<lang>[^\)]+)\))?"[^>]+>[^>]+>\s*(?:<div class="[^"]+">(?P<type>[^<]+)</div>)?[^>]+>[^>]+>\s*<div class="ep">[^\d]+(?P<episode>\d+)[^<]*</div>'
        action = 'findvideos'
    else:
        patron = r'<div class="inner">\s*<a href="(?P<url>[^"]+)" class[^>]+>\s*<img src="(?P<thumb>[^"]+)" alt?="(?P<title>[^\("]+)(?:\((?P<lang>[^\)]+)\))?"[^>]+>[^>]+>[^>]+>[^>]+>\s*(?:<div class="[^"]+">(?P<type>[^<]+)</div>)?'
        action = 'episodios'


def lista_anime(item):
    log()
    itemlist = []
    matches, data = support.match(item, r'<div class="item"><a href="([^"]+)".*?src="([^"]+)".*?data-jtitle="([^"]+)".*?>([^<]+)<\/a><p>(.*?)<\/p>')
    for scrapedurl, scrapedthumb, scrapedoriginal, scrapedtitle, scrapedplot in matches:

        if scrapedoriginal == scrapedtitle:
            scrapedoriginal = ''
        else:
            scrapedoriginal = support.typo(scrapedoriginal, ' -- []')

        year = ''
        lang = ''
        infoLabels = {}
        if '(' in scrapedtitle:
            year = scrapertoolsV2.find_single_match(scrapedtitle, r'(\([0-9]+\))')
            lang = scrapertoolsV2.find_single_match(scrapedtitle, r'(\([a-zA-Z]+\))')

        infoLabels['year'] = year
        title = scrapedtitle.replace(year, '').replace(lang, '').strip()
        original = scrapedoriginal.replace(year, '').replace(lang, '').strip()
        if lang: lang = support.typo(lang, '_ color kod')
        longtitle = '[B]' + title + '[/B]' + lang + original

        itemlist.append(
            Item(channel=item.channel,
                 extra=item.extra,
                 contentType="episode",
                 action="episodios",
                 title=longtitle,
                 url=scrapedurl,
                 thumbnail=scrapedthumb,
                 fulltitle=title,
                 show=title,
                 infoLabels=infoLabels,
                 plot=scrapedplot,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    autorenumber.renumber(itemlist)

    # Next page
    support.nextPage(itemlist, item, data, r'<a class="page-link" href="([^"]+)" rel="next"')

    return itemlist
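The title-cleanup step in `lista_anime` pulls an optional year and language tag out of the scraped title with two small regexes before stripping them. A standalone sketch of that logic, with `find_single_match` re-implemented as a hypothetical stand-in for `scrapertoolsV2.find_single_match`:

```python
import re

def find_single_match(text, pattern):
    # hypothetical stand-in for scrapertoolsV2.find_single_match:
    # first captured group of the first match, or '' when nothing matches
    m = re.search(pattern, text)
    return m.group(1) if m else ''

def split_scraped_title(scrapedtitle):
    # same patterns as lista_anime(): "(2002)" -> year, "(ITA)" -> lang
    year = lang = ''
    if '(' in scrapedtitle:
        year = find_single_match(scrapedtitle, r'(\([0-9]+\))')
        lang = find_single_match(scrapedtitle, r'(\([a-zA-Z]+\))')
    title = scrapedtitle.replace(year, '').replace(lang, '').strip()
    return title, year, lang
```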


def video(item):
    log()
    itemlist = []

    matches, data = support.match(item, r'<a href="([^"]+)" class[^>]+><img src="([^"]+)"(.*?)data-jtitle="([^"]+)" .*?>(.*?)<\/a>', '<div class="widget-body">(.*?)<div id="sidebar"', headers=headers)

    for scrapedurl, scrapedthumb, scrapedinfo, scrapedoriginal, scrapedtitle in matches:
        # Look for info such as year or language in the title
        year = ''
        lang = ''
        if '(' in scrapedtitle:
            year = scrapertoolsV2.find_single_match(scrapedtitle, r'( \([0-9]+\))')
            lang = scrapertoolsV2.find_single_match(scrapedtitle, r'( \([a-zA-Z]+\))')

        # Remove year and language from the title
        title = scrapedtitle.replace(year, '').replace(lang, '').strip()
        original = scrapedoriginal.replace(year, '').replace(lang, '').strip()

        # Compare the title with the original one
        if original == title:
            original = ''
        else:
            original = support.typo(scrapedoriginal, '-- []')

        # Look for extra info
        ep = ''
        ep = scrapertoolsV2.find_single_match(scrapedinfo, '<div class="ep">(.*?)<')
        if ep != '':
            ep = ' - ' + ep

        ova = ''
        ova = scrapertoolsV2.find_single_match(scrapedinfo, '<div class="ova">(.*?)<')
        if ova != '':
            ova = ' - (' + ova + ')'

        ona = ''
        ona = scrapertoolsV2.find_single_match(scrapedinfo, '<div class="ona">(.*?)<')
        if ona != '':
            ona = ' - (' + ona + ')'

        movie = ''
        movie = scrapertoolsV2.find_single_match(scrapedinfo, '<div class="movie">(.*?)<')
        if movie != '':
            movie = ' - (' + movie + ')'

        special = ''
        special = scrapertoolsV2.find_single_match(scrapedinfo, '<div class="special">(.*?)<')
        if special != '':
            special = ' - (' + special + ')'

        # Concatenate the info
        lang = support.typo('Sub-ITA', '_ [] color kod') if '(ita)' not in lang.lower() else ''

        info = ep + lang + year + ova + ona + movie + special

        # Build the title to display
        long_title = '[B]' + title + '[/B]' + info + original

        # Check whether it is episodes or a movie
        if movie == '':
            contentType = 'tvshow'
            action = 'episodios'
        else:
            contentType = 'movie'
            action = 'findvideos'

        itemlist.append(
            Item(channel=item.channel,
                 contentType=contentType,
                 action=action,
                 title=long_title,
                 url=scrapedurl,
                 fulltitle=title,
                 show=title,
                 thumbnail=scrapedthumb,
                 context=autoplay.context,
                 number='1'))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    autorenumber.renumber(itemlist)

    # Next page
    support.nextPage(itemlist, item, data, r'href="([^"]+)" rel="next"', resub=['&amp;', '&'])
    return itemlist
    patronNext = r'href="([^"]+)" rel="next"'
    type_content_dict = {'movie': ['movie']}
    type_action_dict = {'findvideos': ['movie']}
    return locals()


@support.scrape
def episodios(item):
    log()
    itemlist = []
    patron_block = r'server active(.*?)server hidden '
    patron = r'<li><a [^=]+="[^"]+"[^=]+="[^"]+"[^=]+="[^"]+"[^=]+="[^"]+"[^=]+="[^"]+" href="([^"]+)"[^>]+>([^<]+)<'
    matches = support.match(item, patron, patron_block)[0]

    for scrapedurl, scrapedtitle in matches:
        itemlist.append(
            Item(
                channel=item.channel,
                action="findvideos",
                contentType="episode",
                title='[B] Episodio ' + scrapedtitle + '[/B]',
                url=urlparse.urljoin(host, scrapedurl),
                fulltitle=scrapedtitle,
                show=scrapedtitle,
                plot=item.plot,
                fanart=item.thumbnail,
                thumbnail=item.thumbnail,
                number=scrapedtitle))

    autorenumber.renumber(itemlist, item, 'bold')
    support.videolibrary(itemlist, item)
    return itemlist
    anime = True
    patronBlock = r'server active(?P<block>.*?)server hidden '
    patron = r'<li><a [^=]+="[^"]+"[^=]+="[^"]+"[^=]+="[^"]+"[^=]+="[^"]+"[^=]+="[^"]+" href="(?P<url>[^"]+)"[^>]+>(?P<episode>[^<]+)<'
    def itemHook(item):
        log('FULLTITLE= ', item)
        item.title += support.typo(item.fulltitle, '-- bold')
        return item
    action = 'findvideos'
    return locals()
def findvideos(item):
    log()
    itemlist = []

    matches, data = support.match(item, r'class="tab.*?data-name="([0-9]+)">([^<]+)</span', headers=headers)
    log(item)
    itemlist = []
    matches, data = support.match(item, r'class="tab.*?data-name="([0-9]+)">', headers=headers)
    videoData = ''

    for serverid, servername in matches:
        block = scrapertoolsV2.find_multiple_matches(data, 'data-id="' + serverid + '">(.*?)<div class="server')
        log('ITEM= ', item)
        id = scrapertoolsV2.find_single_match(str(block), r'<a data-id="([^"]+)" data-base="' + item.number + '"')
    for serverid in matches:
        number = scrapertoolsV2.find_single_match(item.title, r'(\d+) -')
        block = scrapertoolsV2.find_multiple_matches(data, 'data-id="' + serverid + '">(.*?)<div class="server')
        ID = scrapertoolsV2.find_single_match(str(block), r'<a data-id="([^"]+)" data-base="' + (number if number else '1') + '"')
        log('ID= ', ID)
        if id:
            dataJson = httptools.downloadpage('%s/ajax/episode/info?id=%s&server=%s&ts=%s' % (host, id, serverid, int(time.time())), headers=[['x-requested-with', 'XMLHttpRequest']]).data
            dataJson = httptools.downloadpage('%s/ajax/episode/info?id=%s&server=%s&ts=%s' % (host, ID, serverid, int(time.time())), headers=[['x-requested-with', 'XMLHttpRequest']]).data
            json = jsontools.load(dataJson)
            videoData += '\n' + json['grabber']

@@ -308,6 +152,7 @@ def findvideos(item):
                    quality='',
                    url=json['grabber'],
                    server='directo',
                    fulltitle=item.fulltitle,
                    show=item.show,
                    contentType=item.contentType,
                    folder=False))
14  channels/beeg.json  Executable file
@@ -0,0 +1,14 @@
{
    "id": "beeg",
    "name": "Beeg",
    "active": true,
    "adult": true,
    "language": ["*"],
    "thumbnail": "beeg.png",
    "banner": "beeg.png",
    "categories": [
        "adult"
    ],
    "settings": [
    ]
}

122  channels/beeg.py  Executable file
@@ -0,0 +1,122 @@
# -*- coding: utf-8 -*-

import re
import urllib

from core import jsontools as json
from core import scrapertools
from core.item import Item
from platformcode import logger
from core import httptools


url_api = ""
Host = "https://beeg.com"


def get_api_url():
    global url_api
    data = httptools.downloadpage(Host).data
    version = re.compile('var beeg_version = ([\d]+)').findall(data)[0]
    url_api = Host + "/api/v6/" + version


get_api_url()


def mainlist(item):
    logger.info()
    get_api_url()
    itemlist = []
    itemlist.append(Item(channel=item.channel, action="videos", title="Últimos videos", url=url_api + "/index/main/0/pc",
                         viewmode="movie"))
    itemlist.append(Item(channel=item.channel, action="canal", title="Canal",
                         url=url_api + "/channels"))
    itemlist.append(Item(channel=item.channel, action="listcategorias", title="Categorias",
                         url=url_api + "/index/main/0/pc", extra="nonpopular"))
    itemlist.append(
        Item(channel=item.channel, action="search", title="Buscar"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = url_api + "/index/tag/0/pc?tag=%s" % (texto)

    try:
        return videos(item)
    # Catch the exception so that one failing channel does not break the global search
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def videos(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    JSONData = json.load(data)
    for Video in JSONData["videos"]:
        thumbnail = "http://img.beeg.com/236x177/" + str(Video["id"]) + ".jpg"
        url = '%s/video/%s?v=2&s=%s&e=%s' % (url_api, Video['svid'], Video['start'], Video['end'])
        title = Video["title"]
        itemlist.append(
            Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, plot="", show="",
                 folder=True, contentType="movie"))
    # Pager
    Actual = int(scrapertools.find_single_match(item.url, url_api + '/index/[^/]+/([0-9]+)/pc'))
    if JSONData["pages"] - 1 > Actual:
        scrapedurl = item.url.replace("/" + str(Actual) + "/", "/" + str(Actual + 1) + "/")
        itemlist.append(
            Item(channel=item.channel, action="videos", title="Página Siguiente", url=scrapedurl, thumbnail="",
                 viewmode="movie"))
    return itemlist


def listcategorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    JSONData = json.load(data)
    for Tag in JSONData["tags"]:
        url = url_api + "/index/tag/0/pc?tag=" + Tag["tag"]
        url = url.replace("%20", "-")
        title = '%s (%s)' % (str(Tag["tag"]), str(Tag["videos"]))
        itemlist.append(
            Item(channel=item.channel, action="videos", title=title, url=url, viewmode="movie", type="item"))
    return itemlist


def canal(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    JSONData = json.load(data)
    for Tag in JSONData["channels"]:
        url = url_api + "/index/channel/0/pc?channel=" + Tag["channel"]
        url = url.replace("%20", "-")
        title = '%s (%s)' % (str(Tag["ps_name"]), str(Tag["videos"]))
        itemlist.append(
            Item(channel=item.channel, action="videos", title=title, url=url, viewmode="movie", type="item"))
    return itemlist


def play(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    JSONData = json.load(data)
    for key in JSONData:
        videourl = re.compile("([0-9]+p)", re.DOTALL).findall(key)
        if videourl:
            videourl = videourl[0]
            if not JSONData[videourl] == None:
                url = JSONData[videourl]
                url = url.replace("{DATA_MARKERS}", "data=pc.ES")
                if not url.startswith("https:"): url = "https:" + url
                title = videourl
                itemlist.append(["%s %s [directo]" % (title, url[-4:]), url])
    itemlist.sort(key=lambda item: item[0])
    return itemlist
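The pager in `videos()` only bumps the `/<page>/` path segment of the current API url when the JSON reports more pages. The same arithmetic as a pure helper (a sketch; `next_page_url` is a hypothetical name, not part of the channel):

```python
def next_page_url(url, current_page, total_pages):
    # mirrors the pager in videos(): advance the "/<n>/" segment while pages remain
    if total_pages - 1 > current_page:
        return url.replace("/" + str(current_page) + "/", "/" + str(current_page + 1) + "/")
    return None
```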
@@ -1,14 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------

import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://www.bravoporn.com'


@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'https://www.camwhoresbay.com'

@@ -66,7 +65,7 @@ def lista(item):
    for scrapedurl, scrapedtitle, scrapedthumbnail, scrapedtime in matches:
        url = urlparse.urljoin(item.url, scrapedurl)
        title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
        thumbnail = scrapedthumbnail
        thumbnail = "http:" + scrapedthumbnail + "|Referer=%s" % item.url
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, plot=plot,
                              contentTitle = scrapedtitle, fanart=thumbnail))

@@ -108,7 +107,7 @@ def play(item):
    if scrapedurl == "" :
        scrapedurl = scrapertools.find_single_match(data, 'video_url: \'([^\']+)\'')

    itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, fulltitle=item.title, url=scrapedurl,
    itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, url=scrapedurl,
                         thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo"))
    return itemlist
@@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
import urlparse,re

from core import httptools
from core import scrapertools
from platformcode import logger
from platformcode import config

host = "http://www.canalporno.com"

@@ -11,20 +11,21 @@ host = "http://www.canalporno.com"
def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append(item.clone(action="findvideos", title="Últimos videos", url=host))
    itemlist.append(item.clone(action="categorias", title="Listado Categorias",
    itemlist.append(item.clone(action="lista", title="Últimos videos", url=host + "/ajax/homepage/?page=1"))
    itemlist.append(item.clone(action="categorias", title="Canal", url=host + "/ajax/list_producers/?page=1"))
    itemlist.append(item.clone(action="categorias", title="PornStar", url=host + "/ajax/list_pornstars/?page=1"))
    itemlist.append(item.clone(action="categorias", title="Categorias",
                               url=host + "/categorias"))
    itemlist.append(item.clone(action="search", title="Buscar", url=host + "/search/?q=%s"))
    itemlist.append(item.clone(action="search", title="Buscar"))
    return itemlist


def search(item, texto):
    logger.info()

    texto = texto.replace(" ", "+")
    item.url = host + "/ajax/new_search/?q=%s&page=1" % texto
    try:
        item.url = item.url % texto
        itemlist = findvideos(item)
        return sorted(itemlist, key=lambda it: it.title)
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():

@@ -32,57 +33,60 @@ def search(item, texto):
        return []


def findvideos(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data

    patron = '<img src="([^"]+)".*?alt="([^"]+)".*?<h2><a href="([^"]+)">.*?' \
             '<div class="duracion"><span class="ico-duracion sprite"></span> ([^"]+) min</div>'
    matches = scrapertools.find_multiple_matches(data, patron)
    for thumbnail, title, url, time in matches:
        scrapedtitle = time + " - " + title
        scrapedurl = host + url
        scrapedthumbnail = thumbnail
        itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl,
                                   thumbnail=scrapedthumbnail))

    patron = '<div class="paginacion">.*?<span class="selected">.*?<a href="([^"]+)">([^"]+)</a>'
    matches = scrapertools.find_multiple_matches(data, patron)
    for url, title in matches:
        url = host + url
        title = "Página %s" % title
        itemlist.append(item.clone(action="findvideos", title=title, url=url))

    return itemlist


def categorias(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data
    bloque = scrapertools.find_single_match(data, '<ul class="ordenar-por ordenar-por-categoria">'
                                                  '(.*?)<\/ul>')
    if "pornstars" in item.url:
        patron = '<div class="muestra.*?href="([^"]+)".*?src=\'([^\']+)\'.*?alt="([^"]+)".*?'
    else:
        patron = '<div class="muestra.*?href="([^"]+)".*?src="([^"]+)".*?alt="([^"]+)".*?'
    if "Categorias" in item.title:
        patron += '<div class="numero">([^<]+)</div>'
    else:
        patron += '</span> (\d+) vídeos</div>'
    matches = scrapertools.find_multiple_matches(data, patron)
    for url, scrapedthumbnail, scrapedtitle, cantidad in matches:
        title = "%s [COLOR yellow] %s [/COLOR]" % (scrapedtitle, cantidad)
        url = url.replace("/videos-porno/", "/ajax/show_category/").replace("/sitio/", "/ajax/show_producer/").replace("/pornstar/", "/ajax/show_pornstar/")
        url = host + url + "?page=1"
        itemlist.append(item.clone(action="lista", title=title, url=url, thumbnail=scrapedthumbnail))
    if "/?page=" in item.url:
        next_page = item.url
        num = int(scrapertools.find_single_match(item.url, ".*?/?page=(\d+)"))
        num += 1
        next_page = "?page=" + str(num)
        if next_page != "":
            next_page = urlparse.urljoin(item.url, next_page)
            itemlist.append(item.clone(action="categorias", title="Página Siguiente >>", text_color="blue", url=next_page))
    return itemlist

    #patron = '<div class="muestra-categorias">.*?<a class="thumb" href="([^"]+)".*?<img class="categorias" src="([^"]+)".*?<div class="nombre">([^"]+)</div>'
    patron = "<li><a href='([^']+)'\s?title='([^']+)'>.*?<\/a><\/li>"
    matches = scrapertools.find_multiple_matches(bloque, patron)
    for url, title in matches:
        url = host + url
        #thumbnail = "http:" + thumbnail
        itemlist.append(item.clone(action="findvideos", title=title, url=url))


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    patron = 'data-src="([^"]+)" alt="([^"]+)".*?<h2><a href="([^"]+)">.*?' \
             '<div class="duracion"><span class="ico-duracion sprite"></span> ([^"]+) min</div>'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedthumbnail, scrapedtitle, scrapedurl, duration in matches:
        title = "[COLOR yellow] %s [/COLOR] %s" % (duration, scrapedtitle)
        url = host + scrapedurl
        itemlist.append(item.clone(action="play", title=title, url=url, thumbnail=scrapedthumbnail))
    last = scrapertools.find_single_match(item.url, '(.*?)page=\d+')
    num = int(scrapertools.find_single_match(item.url, ".*?/?page=(\d+)"))
    num += 1
    next_page = "page=" + str(num)
    if next_page != "":
        next_page = last + next_page
        itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page))
    return itemlist


def play(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data
    url = scrapertools.find_single_match(data, '<source src="([^"]+)"')
    itemlist.append(item.clone(url=url, server="directo"))

    return itemlist
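`lista()` computes its next page by stripping the trailing `page=N` from the current ajax url and re-appending `N+1`. As an isolated sketch (hypothetical helper name; same two patterns the channel feeds to `find_single_match`):

```python
import re

def bump_page(url):
    # same regexes as lista(): prefix up to "page=", then the page number itself
    last = re.search(r'(.*?)page=\d+', url).group(1)
    num = int(re.search(r'.*?/?page=(\d+)', url).group(1)) + 1
    return last + "page=" + str(num)
```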
@@ -9,6 +9,6 @@
    "banner": "https://i.imgur.com/bXUyk6m.png",
    "categories": [
        "movie",
        "vo"
        "vos"
    ]
}
}

@@ -5,7 +5,8 @@
import re

import urllib
import urlparse
from channelselector import get_thumb
from core import httptools
from core import scrapertools

@@ -83,7 +84,7 @@ def search(item, texto):
    logger.info()
    if texto != "":
        texto = texto.replace(" ", "+")
        item.url = host + "/search?q=" + texto
        item.url = host + "search?q=" + texto
        item.extra = "busqueda"
        try:
            return list_all(item)
@@ -9,9 +9,8 @@ from core import scrapertoolsV2, httptools, servertools, tmdb, support
from core.item import Item
from lib import unshortenit
from platformcode import logger, config
from specials import autoplay

# set dynamically by getUrl()
# set dynamically by findhost()
host = ""
headers = ""

@@ -36,65 +35,54 @@ blacklist = ['BENVENUTI', 'Richieste Serie TV', 'CB01.UNO ▶ TROVA L’
             'Openload: la situazione. Benvenuto Verystream', 'Openload: lo volete ancora?']


@support.menu
def mainlist(item):
    findhost()
    film = [
        ('HD', ['', 'menu', 'Film HD Streaming']),
        ('Generi', ['', 'menu', 'Film per Genere']),
        ('Anni', ['', 'menu', 'Film per Anno'])
    ]
    tvshow = ['/serietv/',
        ('Per Lettera', ['/serietv/', 'menu', 'Serie-Tv per Lettera']),
        ('Per Genere', ['/serietv/', 'menu', 'Serie-Tv per Genere']),
        ('Per anno', ['/serietv/', 'menu', 'Serie-Tv per Anno'])
    ]

    autoplay.init(item.channel, list_servers, list_quality)

    # Main options
    itemlist = []
    support.menu(itemlist, 'Ultimi 100 Film Aggiornati bold', 'last', host + '/lista-film-ultimi-100-film-aggiornati/')

    support.menu(itemlist, 'Film bold', 'peliculas', host)
    support.menu(itemlist, 'HD submenu', 'menu', host, args="Film HD Streaming")
    support.menu(itemlist, 'Per genere submenu', 'menu', host, args="Film per Genere")
    support.menu(itemlist, 'Per anno submenu', 'menu', host, args="Film per Anno")
    support.menu(itemlist, 'Cerca film... submenu', 'search', host, args='film')

    support.menu(itemlist, 'Serie TV bold', 'peliculas', host + '/serietv/', contentType='tvshow')
    support.menu(itemlist, 'Aggiornamenti serie tv', 'last', host + '/serietv/aggiornamento-quotidiano-serie-tv/', contentType='tvshow')
    support.menu(itemlist, 'Per Lettera submenu', 'menu', host + '/serietv/', contentType='tvshow', args="Serie-Tv per Lettera")
    support.menu(itemlist, 'Per Genere submenu', 'menu', host + '/serietv/', contentType='tvshow', args="Serie-Tv per Genere")
    support.menu(itemlist, 'Per anno submenu', 'menu', host + '/serietv/', contentType='tvshow', args="Serie-Tv per Anno")
    support.menu(itemlist, 'Cerca serie... submenu', 'search', host + '/serietv/', contentType='tvshow', args='serie')

    autoplay.show_option(item.channel, itemlist)

    return itemlist
    return locals()


@support.scrape
def menu(item):
    findhost()
    itemlist = []
    data = httptools.downloadpage(item.url, headers=headers).data
    data = re.sub('\n|\t', '', data)
    block = scrapertoolsV2.find_single_match(data, item.args + r'<span.*?><\/span>.*?<ul.*?>(.*?)<\/ul>')
    support.log('MENU BLOCK= ', block)
    patron = r'href="?([^">]+)"?>(.*?)<\/a>'
    matches = re.compile(patron, re.DOTALL).findall(block)
    for scrapedurl, scrapedtitle in matches:
        itemlist.append(
            Item(
                channel=item.channel,
                title=scrapedtitle,
                contentType=item.contentType,
                action='peliculas',
                url=host + scrapedurl
            )
        )

    return support.thumb(itemlist)
    patronBlock = item.args + r'<span.*?><\/span>.*?<ul.*?>(?P<block>.*?)<\/ul>'
    patronMenu = r'href="?(?P<url>[^">]+)"?>(?P<title>.*?)<\/a>'
    action = 'peliculas'

    return locals()


@support.scrape
def newest(categoria):
    findhost()
    debug = True
    item = Item()
    item.contentType = 'movie'
    item.url = host + '/lista-film-ultimi-100-film-aggiunti/'
    patron = "<a href=(?P<url>[^>]+)>(?P<title>[^<([]+)(?:\[(?P<quality>[A-Z]+)\])?\s\((?P<year>[0-9]{4})\)<\/a>"
    patronBlock = r'Ultimi 100 film aggiunti:.*?<\/td>'

    return locals()


def search(item, text):
    support.log(item.url, "search", text)

    try:
        item.url = item.url + "/?s=" + text.replace(' ', '+')
        return peliculas(item)

    # Continue the search if an error occurs
    except:
        import sys
        for line in sys.exc_info():

@@ -102,129 +90,28 @@ def search(item, text):
        return []


def newest(categoria):
    findhost()
    itemlist = []
    item = Item()
    item.contentType = 'movie'
    item.url = host + '/lista-film-ultimi-100-film-aggiunti/'
    return support.scrape(item, r'<a href=([^>]+)>([^<([]+)(?:\[([A-Z]+)\])?\s\(([0-9]{4})\)<\/a>',
                          ['url', 'title', 'quality', 'year'],
                          patron_block=r'Ultimi 100 film aggiunti:.*?<\/td>')


def last(item):
    support.log()

    itemlist = []
    infoLabels = {}
    quality = ''
    PERPAGE = 30
    page = 1
    count = 0

    if item.page:
        page = item.page

    if item.contentType == 'tvshow':
        matches = support.match(item, r'<a href="([^">]+)".*?>([^(:(|[)]+)([^<]+)<\/a>', '<article class="sequex-post-content.*?</article>', headers)[0]
    else:
        matches = support.match(item, r'<a href=([^>]+)>([^(:(|[)]+)([^<]+)<\/a>', r'<strong>Ultimi 100 film Aggiornati:<\/a><\/strong>(.*?)<td>', headers)[0]

    for i, (url, title, info) in enumerate(matches):
        if (page - 1) * PERPAGE > i - count: continue
        if i - count >= page * PERPAGE: break
        add = True
        title = title.rstrip()
        if item.contentType == 'tvshow':
            for i in itemlist:
                if i.url == url:  # drop duplicates
                    count = count + 1
                    add = False
        else:
            infoLabels['year'] = scrapertoolsV2.find_single_match(info, r'\(([0-9]+)\)')
            quality = scrapertoolsV2.find_single_match(info, r'\[([A-Z]+)\]')

        if quality:
            longtitle = title + support.typo(quality, '_ [] color kod')
        else:
            longtitle = title

        if add:
            itemlist.append(
                Item(channel=item.channel,
                     action='findvideos' if item.contentType == 'movie' else 'episodios',
                     contentType=item.contentType,
                     title=longtitle,
                     fulltitle=title,
                     show=title,
                     quality=quality,
                     url=url,
                     infoLabels=infoLabels
                     )
            )
    support.pagination(itemlist, item, page, PERPAGE)

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    return itemlist
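`last()` pages through the full match list by hand with `PERPAGE`, `page`, and a `count` offset for skipped duplicates. Ignoring the duplicate counter, the windowing reduces to a plain slice (a sketch with a hypothetical helper name):

```python
PERPAGE = 30

def page_window(matches, page):
    # the window last() walks with its continue/break checks, minus the dedup offset
    return [m for i, m in enumerate(matches) if (page - 1) * PERPAGE <= i < page * PERPAGE]
```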
@support.scrape
def peliculas(item):
    support.log()
    if item.contentType == 'movie' or '/serietv/' not in item.url:
        patron = r'<div class="?card-image"?>.*?<img src="?([^" ]+)"? alt.*?<a href="?([^" >]+)(?:\/|")>([^<[(]+)(?:\[([A-Za-z0-9/-]+)])? (?:\(([0-9]{4})\))?.*?<strong>([^<>&]+).*?DURATA ([0-9]+).*?<br(?: /)?>([^<>]+)'
        listGroups = ['thumb', 'url', 'title', 'quality', 'year', 'genre', 'duration', 'plot']
    if '/serietv/' not in item.url:
        patron = r'<div class="?card-image"?>.*?<img src="?(?P<thumb>[^" ]+)"? alt.*?<a href="?(?P<url>[^" >]+)(?:\/|")>(?P<title>[^<[(]+)(?:\[(?P<quality>[A-Za-z0-9/-]+)])? (?:\((?P<year>[0-9]{4})\))?.*?<strong>(?P<genre>[^<>&]+).*?DURATA (?P<duration>[0-9]+).*?<br(?: /)?>(?P<plot>[^<>]+)'
        action = 'findvideos'
    else:
        patron = r'div class="card-image">.*?<img src="([^ ]+)" alt.*?<a href="([^ >]+)">([^<[(]+)<\/a>.*?<strong><span style="[^"]+">([^<>0-9(]+)\(([0-9]{4}).*?</(?:p|div)>(.*?)</div'
        listGroups = ['thumb', 'url', 'title', 'genre', 'year', 'plot']
        patron = r'div class="card-image">.*?<img src="(?P<thumb>[^ ]+)" alt.*?<a href="(?P<url>[^ >]+)">(?P<title>[^<[(]+)<\/a>.*?<strong><span style="[^"]+">(?P<genre>[^<>0-9(]+)\((?P<year>[0-9]{4}).*?</(?:p|div)>(?P<plot>.*?)</div'
        action = 'episodios'

    return support.scrape(item, patron_block=[r'<div class="?sequex-page-left"?>(.*?)<aside class="?sequex-page-right"?>',
                          '<div class="?card-image"?>.*?(?=<div class="?card-image"?>|<div class="?rating"?>)'],
                          patron=patron, listGroups=listGroups,
                          patronNext='<a class="?page-link"? href="?([^>]+)"?><i class="fa fa-angle-right">', blacklist=blacklist, action=action)
    # patronBlock=[r'<div class="?sequex-page-left"?>(?P<block>.*?)<aside class="?sequex-page-right"?>',
    #              '<div class="?card-image"?>.*?(?=<div class="?card-image"?>|<div class="?rating"?>)']
    patronNext = '<a class="?page-link"? href="?([^>]+)"?><i class="fa fa-angle-right">'

    return locals()


@support.scrape
def episodios(item):
    itemlist = []
    patronBlock = r'(?P<block><div class="sp-head[a-z ]*?" title="Espandi">\s*STAGIONE [0-9]+ - (?P<lang>[^\s]+)(?: - (?P<quality>[^-<]+))?.*?[^<>]*?</div>.*?)<div class="spdiv">\[riduci\]</div>'
    patron = '(?:<p>)(?P<episode>[0-9]+(?:×|×)[0-9]+)(?P<url>.*?)(?:</p>|<br)'

    data = httptools.downloadpage(item.url).data
    matches = scrapertoolsV2.find_multiple_matches(data,
        r'(<div class="sp-head[a-z ]*?" title="Espandi">[^<>]*?</div>.*?)<div class="spdiv">\[riduci\]</div>')

    for match in matches:
        support.log(match)
        blocks = scrapertoolsV2.find_multiple_matches(match, '(?:<p>)(.*?)(?:</p>|<br)')
        season = scrapertoolsV2.find_single_match(match, r'title="Espandi">.*?STAGIONE\s+\d+([^<>]+)').strip()

        for block in blocks:
            episode = scrapertoolsV2.find_single_match(block, r'([0-9]+(?:×|×)[0-9]+)').strip()
            seasons_n = scrapertoolsV2.find_single_match(block, r'<strong>STAGIONE\s+\d+([^<>]+)').strip()

            if seasons_n:
                season = seasons_n

            if not episode: continue

            season = re.sub(r'–|–', "-", season)
            itemlist.append(
                Item(channel=item.channel,
                     action="findvideos",
                     contentType='episode',
                     title="[B]" + episode + "[/B] " + season,
                     fulltitle=episode + " " + season,
                     show=episode + " " + season,
                     url=block,
                     extra=item.extra,
                     thumbnail=item.thumbnail,
                     infoLabels=item.infoLabels
                     ))

    support.videolibrary(itemlist, item)

    return itemlist
    return locals()


def findvideos(item):

@@ -28,46 +28,25 @@ host = config.get_channel_url(__channel__)

headers = [['Referer', host]]

@support.menu
def mainlist(item):
    logger.info('[cinemalibero.py] mainlist')

    autoplay.init(item.channel, list_servers, list_quality)  # Required for Autoplay

    # Main menu
    itemlist = []
    support.menu(itemlist, 'Film bold', 'video', host+'/category/film/')
    support.menu(itemlist, 'Generi submenu', 'genres', host)
    support.menu(itemlist, 'Cerca film submenu', 'search', host)
    support.menu(itemlist, 'Serie TV bold', 'video', host+'/category/serie-tv/', contentType='episode')
    support.menu(itemlist, 'Anime submenu', 'video', host+'/category/anime-giapponesi/', contentType='episode')
    support.menu(itemlist, 'Cerca serie submenu', 'search', host, contentType='episode')
    support.menu(itemlist, 'Sport bold', 'video', host+'/category/sport/')

    autoplay.show_option(item.channel, itemlist)  # Required for Autoplay (configuration menu)

    support.channel_config(item, itemlist)

    return itemlist


def search(item, texto):
    logger.info("[cinemalibero.py] " + item.url + " search " + texto)
    item.url = host + "/?s=" + texto
    try:
        return video(item)
    # Keep searching on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []

film = '/category/film/'
filmSub = [
    ('Generi', ['', 'genres']),
    ('Sport', ['/category/sport/', 'peliculas']),
]
tvshow = '/category/serie-tv/'
tvshowSub = [
    ('Anime ', ['/category/anime-giapponesi/', 'video'])
]

return locals()

def genres(item):
    return support.scrape(item, patron_block=r'<div id="bordobar" class="dropdown-menu(.*?)</li>', patron=r'<a class="dropdown-item" href="([^"]+)" title="([A-z]+)"', listGroups=['url', 'title'], action='video')
    return support.scrape2(item, patronBlock=r'<div id="bordobar" class="dropdown-menu(?P<block>.*)</li>', patron=r'<a class="dropdown-item" href="([^"]+)" title="([A-z]+)"', listGroups=['url', 'title'], action='video')


def video(item):
def peliculas(item):
    logger.info('[cinemalibero.py] video')
    itemlist = []

@@ -3,8 +3,9 @@
import os
import re

from core import httptools
from core import scrapertools
from core import servertools
from core import httptools
from core.item import Item
from platformcode import config, logger


@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools


host = 'https://www.cliphunter.com'

@@ -84,9 +84,8 @@ def lista(item):
        title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
        thumbnail = scrapedthumbnail
        plot = ""
        year = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, plot=plot,
                              fanart=thumbnail, contentTitle = title, infoLabels={'year':year} ))
                              fanart=thumbnail, contentTitle = title ))
    next_page = scrapertools.find_single_match(data,'<li class="arrow"><a rel="next" href="([^"]+)">»</a>')
    if next_page!="":
        next_page = urlparse.urljoin(item.url,next_page)
@@ -103,7 +102,7 @@ def play(item):
    for scrapedurl in matches:
        scrapedurl = scrapedurl.replace("\/", "/")
        title = scrapedurl
        itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
        itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
                             thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo"))
    return itemlist


@@ -1,16 +1,15 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host ='http://www.coomelonitas.com'


def mainlist(item):
    logger.info()
    itemlist = []
@@ -57,7 +56,7 @@ def lista(item):
        url = scrapertools.find_single_match(match,'<a href="([^"]+)"')
        plot = scrapertools.find_single_match(match,'<p class="summary">(.*?)</p>')
        thumbnail = scrapertools.find_single_match(match,'<img src="([^"]+)"')
        itemlist.append( Item(channel=item.channel, action="findvideos", title=title, fulltitle=title, url=url,
        itemlist.append( Item(channel=item.channel, action="findvideos", title=title, url=url,
                              fanart=thumbnail, thumbnail=thumbnail, plot=plot, viewmode="movie") )
    next_page = scrapertools.find_single_match(data,'<a href="([^"]+)" class="siguiente">')
    if next_page!="":

@@ -2,7 +2,6 @@
import re
import urllib

import urlparse

from core import httptools
@@ -11,23 +10,22 @@ from core.item import Item
from platformcode import config, logger


host = 'https://www.cumlouder.com'

def mainlist(item):
    logger.info()
    itemlist = []

    config.set_setting("url_error", False, "cumlouder")
    itemlist.append(item.clone(title="Últimos videos", action="videos", url="https://www.cumlouder.com/"))
    itemlist.append(item.clone(title="Categorias", action="categorias", url="https://www.cumlouder.com/categories/"))
    itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url="https://www.cumlouder.com/girls/"))
    itemlist.append(item.clone(title="Listas", action="series", url="https://www.cumlouder.com/series/"))
    itemlist.append(item.clone(title="Buscar", action="search", url="https://www.cumlouder.com/search?q=%s"))

    itemlist.append(item.clone(title="Últimos videos", action="videos", url= host + "/porn/"))
    itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url=host + "/girls/"))
    itemlist.append(item.clone(title="Listas", action="series", url= host + "/series/"))
    itemlist.append(item.clone(title="Categorias", action="categorias", url= host + "/categories/"))
    itemlist.append(item.clone(title="Buscar", action="search", url= host + "/search?q=%s"))
    return itemlist


def search(item, texto):
    logger.info()

    item.url = item.url % texto
    item.action = "videos"
    try:
@@ -41,21 +39,20 @@ def search(item, texto):
def pornstars_list(item):
    logger.info()
    itemlist = []
    itemlist.append(item.clone(title="Mas Populares", action="pornstars", url=host + "/girls/1/"))
    for letra in "abcdefghijklmnopqrstuvwxyz":
        itemlist.append(item.clone(title=letra.upper(), url=urlparse.urljoin(item.url, letra), action="pornstars"))

    return itemlist


def pornstars(item):
    logger.info()
    itemlist = []

    data = get_data(item.url)
    patron = '<a girl-url="[^"]+" class="[^"]+" href="([^"]+)" title="([^"]+)">[^<]+'
    patron += '<img class="thumb" src="([^"]+)" [^<]+<h2[^<]+<span[^<]+</span[^<]+</h2[^<]+'
    patron += '<span[^<]+<span[^<]+<span[^<]+</span>([^<]+)</span>'

    data = httptools.downloadpage(item.url).data
    patron = '<a girl-url=.*?'
    patron += 'href="([^"]+)" title="([^"]+)">.*?'
    patron += 'data-lazy="([^"]+)".*?'
    patron += '<span class="ico-videos sprite"></span>([^<]+)</span>'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, title, thumbnail, count in matches:
        if "go.php?" in url:
@@ -65,8 +62,7 @@ def pornstars(item):
            url = urlparse.urljoin(item.url, url)
        if not thumbnail.startswith("https"):
            thumbnail = "https:%s" % thumbnail
        itemlist.append(item.clone(title="%s (%s)" % (title, count), url=url, action="videos", thumbnail=thumbnail))

        itemlist.append(item.clone(title="%s (%s)" % (title, count), url=url, action="videos", fanart=thumbnail, thumbnail=thumbnail))
    # Pager
    matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
    if matches:
@@ -74,18 +70,19 @@ def pornstars(item):
            url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
        else:
            url = urlparse.urljoin(item.url, matches[0])
        itemlist.append(item.clone(title="Pagina Siguiente", url=url))

        itemlist.append(item.clone(title="Página Siguiente >>", url=url))
    return itemlist


def categorias(item):
    logger.info()
    itemlist = []

    data = get_data(item.url)
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
    patron = '<a tag-url=.*?href="([^"]+)" title="([^"]+)".*?<img class="thumb" src="([^"]+)".*?<span class="cantidad">([^<]+)</span>'
    patron = '<a tag-url=.*?'
    patron += 'href="([^"]+)" title="([^"]+)".*?'
    patron += 'data-lazy="([^"]+)".*?'
    patron += '<span class="cantidad">([^<]+)</span>'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, title, thumbnail, count in matches:
        if "go.php?" in url:
@@ -96,8 +93,7 @@ def categorias(item):
        if not thumbnail.startswith("https"):
            thumbnail = "https:%s" % thumbnail
        itemlist.append(
            item.clone(title="%s (%s videos)" % (title, count), url=url, action="videos", thumbnail=thumbnail))

            item.clone(title="%s (%s videos)" % (title, count), url=url, action="videos", fanart=thumbnail, thumbnail=thumbnail))
    # Pager
    matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
    if matches:
@@ -105,22 +101,20 @@ def categorias(item):
            url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
        else:
            url = urlparse.urljoin(item.url, matches[0])
        itemlist.append(item.clone(title="Pagina Siguiente", url=url))

        itemlist.append(item.clone(title="Página Siguiente >>", url=url))
    return itemlist


def series(item):
    logger.info()
    itemlist = []

    data = get_data(item.url)
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
    patron = '<a onclick=.*?href="([^"]+)".*?\<img src="([^"]+)".*?h2 itemprop="name">([^<]+).*?p>([^<]+)</p>'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, thumbnail, title, count in matches:
        itemlist.append(
            item.clone(title="%s (%s) " % (title, count), url=urlparse.urljoin(item.url, url), action="videos", thumbnail=thumbnail))

            item.clone(title="%s (%s) " % (title, count), url=urlparse.urljoin(item.url, url), action="videos", fanart=thumbnail, thumbnail=thumbnail))
    # Pager
    matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
    if matches:
@@ -128,29 +122,33 @@ def series(item):
            url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
        else:
            url = urlparse.urljoin(item.url, matches[0])
        itemlist.append(item.clone(title="Pagina Siguiente", url=url))

        itemlist.append(item.clone(title="Página Siguiente >>", url=url))
    return itemlist


def videos(item):
    logger.info()
    itemlist = []

    data = get_data(item.url)
    patron = '<a class="muestra-escena" href="([^"]+)" title="([^"]+)"[^<]+<img class="thumb" src="([^"]+)".*?<span class="minutos"> <span class="ico-minutos sprite"></span> ([^<]+)</span>'
    data = httptools.downloadpage(item.url).data
    patron = '<a class="muestra-escena" href="([^"]+)" title="([^"]+)".*?'
    patron += 'data-lazy="([^"]+)".*?'
    patron += '<span class="ico-minutos sprite"></span>([^<]+)</span>(.*?)</a>'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, title, thumbnail, duration in matches:
    for url, title, thumbnail, duration, calidad in matches:
        if "hd sprite" in calidad:
            title = "[COLOR yellow] %s [/COLOR][COLOR red] HD [/COLOR] %s" % (duration, title)
        else:
            title = "[COLOR yellow] %s [/COLOR] %s" % (duration, title)
        if "go.php?" in url:
            url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
            thumbnail = urllib.unquote(thumbnail.split("/go.php?u=")[1].split("&")[0])
        else:
            url = urlparse.urljoin("https://www.cumlouder.com", url)
            url = urlparse.urljoin(host, url)
        if not thumbnail.startswith("https"):
            thumbnail = "https:%s" % thumbnail
        itemlist.append(item.clone(title="%s (%s)" % (title, duration), url=urlparse.urljoin(item.url, url),
        itemlist.append(item.clone(title=title, url=url,
                                   action="play", thumbnail=thumbnail, contentThumbnail=thumbnail,
                                   contentType="movie", contentTitle=title))

                                   fanart=thumbnail, contentType="movie", contentTitle=title))
    # Pager
    nextpage = scrapertools.find_single_match(data, '<ul class="paginador"(.*?)</ul>')
    matches = re.compile('<a href="([^"]+)" rel="nofollow">Next »</a>', re.DOTALL).findall(nextpage)
@@ -161,51 +159,22 @@ def videos(item):
            url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
        else:
            url = urlparse.urljoin(item.url, matches[0])

        itemlist.append(item.clone(title="Pagina Siguiente", url=url))

        itemlist.append(item.clone(title="Página Siguiente >>", url=url))
    return itemlist


def play(item):
    logger.info()
    itemlist = []

    data = get_data(item.url)
    patron = '<source src="([^"]+)" type=\'video/([^\']+)\' label=\'[^\']+\' res=\'([^\']+)\' />'
    data = httptools.downloadpage(item.url).data
    patron = '<source src="([^"]+)" type=\'video/([^\']+)\' label=\'[^\']+\' res=\'([^\']+)\''
    url, type, res = re.compile(patron, re.DOTALL).findall(data)[0]
    if "go.php?" in url:
        url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
    elif not url.startswith("http"):
        url = "http:" + url.replace("&amp;", "&")
        url = "https:" + url.replace("&amp;", "&")
    itemlist.append(
        Item(channel='cumlouder', action="play", title='Video' + res, fulltitle=type.upper() + ' ' + res, url=url,
        Item(channel='cumlouder', action="play", title='Video' + res, contentTitle=type.upper() + ' ' + res, url=url,
             server="directo", folder=False))

    return itemlist


def get_data(url_orig):
    try:
        if config.get_setting("url_error", "cumlouder"):
            raise Exception
        response = httptools.downloadpage(url_orig)
        if not response.data or "urlopen error [Errno 1]" in str(response.code):
            raise Exception
    except:
        config.set_setting("url_error", True, "cumlouder")
        import random
        server_random = ['nl', 'de', 'us']
        server = server_random[random.randint(0, 2)]
        url = "https://%s.hideproxy.me/includes/process.php?action=update" % server
        post = "u=%s&proxy_formdata_server=%s&allowCookies=1&encodeURL=0&encodePage=0&stripObjects=0&stripJS=0&go=" \
               % (urllib.quote(url_orig), server)
        while True:
            response = httptools.downloadpage(url, post, follow_redirects=False)
            if response.headers.get("location"):
                url = response.headers["location"]
                post = ""
            else:
                break

    return response.data

@@ -1,7 +1,7 @@
{
    "id": "czechvideo",
    "name": "Czechvideo",
    "active": true,
    "active": false,
    "adult": true,
    "language": ["*"],
    "thumbnail": "http://czechvideo.org/templates/Default/images/black75.png",

@@ -1,15 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------

import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://czechvideo.org'

@@ -82,7 +80,7 @@ def play(item):
    itemlist = servertools.find_video_items(data=data)
    for videoitem in itemlist:
        videoitem.title = item.title
        videoitem.fulltitle = item.fulltitle
        videoitem.contentTitle = item.contentTitle
        videoitem.thumbnail = item.thumbnail
        videoitem.channel = item.channel
    return itemlist

@@ -4,8 +4,7 @@ import re

from core import httptools
from core import scrapertools
from platformcode import logger
from platformcode import config
from platformcode import config, logger


def mainlist(item):
@@ -26,49 +25,39 @@ def search(item, texto):
def lista(item):
    logger.info()
    itemlist = []

    # Download the page
    data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url).data)

    # Extract the entries
    patron = '<div class="videobox">\s*<a href="([^"]+)".*?url\(\'([^\']+)\'.*?<span>(.*?)<\/span><\/div><\/a>.*?class="title">(.*?)<\/a><span class="views">.*?<\/a><\/span><\/div> '
    patron = '<div class="videobox">\s*<a href="([^"]+)".*?'
    patron += 'url\(\'([^\']+)\'.*?'
    patron += '<span>(.*?)<\/span>.*?'
    patron += 'class="title">(.*?)<\/a>'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl, scrapedthumbnail, duration, scrapedtitle in matches:
        if "/embed-" not in scrapedurl:
            #scrapedurl = scrapedurl.replace("dato.porn/", "dato.porn/embed-") + ".html"
            scrapedurl = scrapedurl.replace("datoporn.co/", "datoporn.co/embed-") + ".html"
        if duration:
            scrapedtitle = "%s - %s" % (duration, scrapedtitle)
            scrapedtitle += ' gb'
            scrapedtitle = scrapedtitle.replace(":", "'")

        #logger.debug(scrapedurl + ' / ' + scrapedthumbnail + ' / ' + duration + ' / ' + scrapedtitle)
        itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
                                   server="datoporn", fanart=scrapedthumbnail.replace("_t.jpg", ".jpg")))

    # Extract the next-page marker
    #next_page = scrapertools.find_single_match(data, '<a href=["|\']([^["|\']+)["|\']>Next')
        if not config.get_setting('unify'):
            scrapedtitle = '[COLOR yellow] %s [/COLOR] %s' % (duration, scrapedtitle)
        else:
            scrapedtitle += ' gb'
        scrapedtitle = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
        scrapedtitle = scrapedtitle.replace(":", "'")
        # logger.debug(scrapedurl + ' / ' + scrapedthumbnail + ' / ' + duration + ' / ' + scrapedtitle)
        itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail, server="datoporn",
                                   fanart=scrapedthumbnail.replace("_t.jpg", ".jpg"), plot=""))
    next_page = scrapertools.find_single_match(data, '<a class=["|\']page-link["|\'] href=["|\']([^["|\']+)["|\']>Next')
    if next_page and itemlist:
        itemlist.append(item.clone(action="lista", title=">> Página Siguiente", url=next_page))

    return itemlist


def categorias(item):
    logger.info()
    itemlist = []

    # Download the page
    data = httptools.downloadpage(item.url).data

    # Extract the entries (folders)
    patron = '<div class="vid_block">\s*<a href="([^"]+)".*?url\((.*?)\).*?<span>(.*?)</span>.*?<b>(.*?)</b>'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl, scrapedthumbnail, numero, scrapedtitle in matches:
        if numero:
            scrapedtitle = "%s (%s)" % (scrapedtitle, numero)

        itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail))

    return itemlist


@@ -1,38 +1,37 @@
# -*- coding: utf-8 -*-

import re

import urlparse

from core import httptools
from core import jsontools
from core import scrapertools
from platformcode import logger
from platformcode import config

host = 'http://www.eporner.com'

def mainlist(item):
    logger.info()
    itemlist = []

    itemlist.append(item.clone(title="Últimos videos", action="videos", url="http://www.eporner.com/0/"))
    itemlist.append(item.clone(title="Categorias", action="categorias", url="http://www.eporner.com/categories/"))
    itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url="http://www.eporner.com/pornstars/"))
    itemlist.append(item.clone(title="Buscar", action="search", url="http://www.eporner.com/search/%s/"))

    itemlist.append(item.clone(title="Últimos videos", action="videos", url=host + "/0/"))
    itemlist.append(item.clone(title="Más visto", action="videos", url=host + "/most-viewed/"))
    itemlist.append(item.clone(title="Mejor valorado", action="videos", url=host + "/top-rated/"))
    itemlist.append(item.clone(title="Categorias", action="categorias", url=host + "/categories/"))
    itemlist.append(item.clone(title="Pornstars", action="pornstars", url=host + "/pornstars/"))
    itemlist.append(item.clone(title=" Alfabetico", action="pornstars_list", url=host + "/pornstars/"))
    itemlist.append(item.clone(title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()

    item.url = item.url % texto
    item.action = "videos"

    texto = texto.replace(" ", "-")
    item.url = host + "/search/%s/" % texto
    try:
        return videos(item)
    except:
        import traceback
        logger.error(traceback.format_exc())
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


@@ -41,71 +40,67 @@ def pornstars_list(item):
    itemlist = []
    for letra in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        itemlist.append(item.clone(title=letra, url=urlparse.urljoin(item.url, letra), action="pornstars"))

    return itemlist


def pornstars(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data

    patron = '<div class="mbtit" itemprop="name"><a href="([^"]+)" title="([^"]+)">[^<]+</a></div> '
    patron += '<a href="[^"]+" title="[^"]+"> <img src="([^"]+)" alt="[^"]+" style="width:190px;height:152px;" /> </a> '
    patron = '<div class="mbprofile">.*?'
    patron += '<a href="([^"]+)" title="([^"]+)">.*?'
    patron += '<img src="([^"]+)".*?'
    patron += '<div class="mbtim"><span>Videos: </span>([^<]+)</div>'

    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, title, thumbnail, count in matches:
        itemlist.append(
            item.clone(title="%s (%s videos)" % (title, count), url=urlparse.urljoin(item.url, url), action="videos",
                       thumbnail=thumbnail))

    # Pager
    patron = "<span style='color:#FFCC00;'>[^<]+</span></a> <a href='([^']+)' title='[^']+'><span>[^<]+</span></a>"
    matches = re.compile(patron, re.DOTALL).findall(data)
    if matches:
        itemlist.append(item.clone(title="Pagina siguiente", url=urlparse.urljoin(item.url, matches[0])))

    # Pager
    next_page = scrapertools.find_single_match(data, "<a href='([^']+)' class='nmnext' title='Next page'>")
    if next_page != "":
        next_page = urlparse.urljoin(item.url, next_page)
        itemlist.append(item.clone(action="pornstars", title="Página Siguiente >>", text_color="blue", url=next_page))
    return itemlist


def categorias(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data

    patron = '<div class="categoriesbox" id="[^"]+"> <div class="ctbinner"> <a href="([^"]+)" title="[^"]+"> <img src="([^"]+)" alt="[^"]+"> <h2>([^"]+)</h2> </a> </div> </div>'

    patron = '<span class="addrem-cat">.*?'
    patron += '<a href="([^"]+)" title="([^"]+)">.*?'
    patron += '<div class="cllnumber">([^<]+)</div>'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, thumbnail, title in matches:
        itemlist.append(
            item.clone(title=title, url=urlparse.urljoin(item.url, url), action="videos", thumbnail=thumbnail))

    for url, title, cantidad in matches:
        url = urlparse.urljoin(item.url, url)
        title = title + " " + cantidad
        thumbnail = ""
        if not thumbnail:
            thumbnail = scrapertools.find_single_match(data, '<img src="([^"]+)" alt="%s"> % title')
        itemlist.append(item.clone(title=title, url=url, action="videos", thumbnail=thumbnail))
    return sorted(itemlist, key=lambda i: i.title)


def videos(item):
    logger.info()
    itemlist = []

    data = httptools.downloadpage(item.url).data

    patron = '<a href="([^"]+)" title="([^"]+)" id="[^"]+">.*?<img id="[^"]+" src="([^"]+)"[^>]+>.*?<div class="mbtim">([^<]+)</div>'

    patron = '<div class="mvhdico"><span>([^<]+)</span>.*?'
    patron += '<a href="([^"]+)" title="([^"]+)" id="[^"]+">.*?'
    patron += 'src="([^"]+)"[^>]+>.*?'
    patron += '<div class="mbtim">([^<]+)</div>'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, title, thumbnail, duration in matches:
        itemlist.append(item.clone(title="%s (%s)" % (title, duration), url=urlparse.urljoin(item.url, url),
    for quality, url, title, thumbnail, duration in matches:
        title = "[COLOR yellow]" + duration + "[/COLOR] " + "[COLOR red]" + quality + "[/COLOR] " + title
        itemlist.append(item.clone(title=title, url=urlparse.urljoin(item.url, url),
                                   action="play", thumbnail=thumbnail, contentThumbnail=thumbnail,
                                   contentType="movie", contentTitle=title))

    # Pager
    patron = "<span style='color:#FFCC00;'>[^<]+</span></a> <a href='([^']+)' title='[^']+'><span>[^<]+</span></a>"
    matches = re.compile(patron, re.DOTALL).findall(data)
    if matches:
        itemlist.append(item.clone(title="Página siguiente", url=urlparse.urljoin(item.url, matches[0])))

    next_page = scrapertools.find_single_match(data, "<a href='([^']+)' class='nmnext' title='Next page'>")
    if next_page != "":
        next_page = urlparse.urljoin(item.url, next_page)
        itemlist.append(item.clone(action="videos", title="Página Siguiente >>", text_color="blue", url=next_page))
    return itemlist


@@ -135,8 +130,7 @@ def play(item):
           int(hash[16:24], 16)) + int_to_base36(int(hash[24:32], 16))

    url = "https://www.eporner.com/xhr/video/%s?hash=%s" % (vid, hash)
    data = httptools.downloadpage(url).data
    jsondata = jsontools.load(data)
    jsondata = httptools.downloadpage(url).json

    for source in jsondata["sources"]["mp4"]:
        url = jsondata["sources"]["mp4"][source]["src"]

@@ -1,14 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://www.eroticage.net'

@@ -10,13 +10,5 @@
        "adult"
    ],
    "settings": [
        {
            "id": "include_in_global_search",
            "type": "bool",
            "label": "Incluir en busqueda global",
            "default": true,
            "enabled": true,
            "visible": true
        }
    ]
}
}

@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*-

import re

import urlparse

from core import httptools
@@ -9,7 +8,6 @@ from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config

host = "https://www.youfreeporntube.net"

@@ -17,11 +15,12 @@ def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append(Item(channel=item.channel, action="lista", title="Últimos videos",
                         url= host + "/new-clips.html?&page=1"))
                         url= host + "/newvideos.html?&page=1"))
    itemlist.append(Item(channel=item.channel, action="lista", title="Populares",
                         url=host + "/topvideos.html?page=1"))
    itemlist.append(
        Item(channel=item.channel, action="categorias", title="Categorias", url=host + "/browse.html"))
    itemlist.append(Item(channel=item.channel, action="lista", title="Populares",
                         url=host + "/topvideo.html?page=1"))

    itemlist.append(Item(channel=item.channel, action="search", title="Buscar",
                         url=host + "/search.php?keywords="))
    return itemlist
@@ -49,7 +48,7 @@ def categorias(item):
    patron = '<div class="pm-li-category"><a href="([^"]+)">.*?.<h3>(.*?)</h3></a>'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for url, actriz in matches:
        itemlist.append(Item(channel=item.channel, action="listacategoria", title=actriz, url=url))
        itemlist.append(Item(channel=item.channel, action="lista", title=actriz, url=url))
    return itemlist


@@ -58,7 +57,9 @@ def lista(item):
|
||||
itemlist = []
|
||||
data = httptools.downloadpage(item.url).data
|
||||
data = re.sub(r"\n|\r|\t|\s{2}", "", data)
|
||||
patron = '<li><div class=".*?<a href="([^"]+)".*?>.*?.img src="([^"]+)".*?alt="([^"]+)".*?>'
|
||||
patron = '<li><div class=".*?'
|
||||
patron += '<a href="([^"]+)".*?'
|
||||
patron += '<img src="([^"]+)".*?alt="([^"]+)"'
|
||||
matches = re.compile(patron, re.DOTALL).findall(data)
|
||||
itemlist = []
|
||||
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
|
||||
@@ -66,36 +67,14 @@ def lista(item):
|
||||
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
|
||||
title = scrapedtitle.strip()
|
||||
itemlist.append(Item(channel=item.channel, action="play", thumbnail=thumbnail, fanart=thumbnail, title=title,
|
||||
fulltitle=title, url=url,
|
||||
url=url,
|
||||
viewmode="movie", folder=True))
|
||||
paginacion = scrapertools.find_single_match(data,
|
||||
'<li class="active"><a href="#" onclick="return false;">\d+</a></li><li class=""><a href="([^"]+)">')
|
||||
'<li class="active">.*?</li>.*?<a href="([^"]+)">')
|
||||
if paginacion:
|
||||
paginacion = urlparse.urljoin(item.url,paginacion)
|
||||
itemlist.append(Item(channel=item.channel, action="lista", title=">> Página Siguiente",
|
||||
url=host + "/" + paginacion))
|
||||
return itemlist
|
||||
|
||||
|
||||
def listacategoria(item):
|
||||
logger.info()
|
||||
itemlist = []
|
||||
data = httptools.downloadpage(item.url).data
|
||||
data = re.sub(r"\n|\r|\t|\s{2}", "", data)
|
||||
patron = '<li><div class=".*?<a href="([^"]+)".*?>.*?.img src="([^"]+)".*?alt="([^"]+)".*?>'
|
||||
matches = re.compile(patron, re.DOTALL).findall(data)
|
||||
itemlist = []
|
||||
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
|
||||
url = urlparse.urljoin(item.url, scrapedurl)
|
||||
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
|
||||
title = scrapedtitle.strip()
|
||||
itemlist.append(
|
||||
Item(channel=item.channel, action="play", thumbnail=thumbnail, title=title, fulltitle=title, url=url,
|
||||
viewmode="movie", folder=True))
|
||||
paginacion = scrapertools.find_single_match(data,
|
||||
'<li class="active"><a href="#" onclick="return false;">\d+</a></li><li class=""><a href="([^"]+)">')
|
||||
if paginacion:
|
||||
itemlist.append(
|
||||
Item(channel=item.channel, action="listacategoria", title=">> Página Siguiente", url=paginacion))
|
||||
url= paginacion))
|
||||
return itemlist
|
||||
|
||||
|
||||
@@ -103,14 +82,9 @@ def play(item):
|
||||
logger.info()
|
||||
itemlist = []
|
||||
data = httptools.downloadpage(item.url).data
|
||||
item.url = scrapertools.find_single_match(data, '(?i)Playerholder.*?src="([^"]+)"')
|
||||
if "tubst.net" in item.url:
|
||||
url = scrapertools.find_single_match(data, 'itemprop="embedURL" content="([^"]+)')
|
||||
data = httptools.downloadpage(url).data
|
||||
url = scrapertools.find_single_match(data, '<iframe.*?src="([^"]+)"')
|
||||
data = httptools.downloadpage(url).data
|
||||
url = scrapertools.find_single_match(data, '<source src="([^"]+)"')
|
||||
item.url = httptools.downloadpage(url, follow_redirects=False, only_headers=True).headers.get("location", "")
|
||||
itemlist.append(item.clone())
|
||||
url = scrapertools.find_single_match(data, '<div id="video-wrapper">.*?<iframe.*?src="([^"]+)"')
|
||||
itemlist.append(item.clone(action="play", title=url, url=url ))
|
||||
itemlist = servertools.get_servers_itemlist(itemlist)
|
||||
return itemlist
|
||||
|
||||
|
||||
|
||||
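The hunks above lean heavily on `scrapertools.find_single_match`. A minimal stand-in with its apparent semantics (first captured group of the first match, empty string when nothing matches — an assumption inferred from how the call sites test for `""`), useful for reading the patterns in this diff:

```python
import re

# Minimal stand-in for scrapertools.find_single_match as used above.
# Semantics are assumed from the call sites, not from the real module:
# return group(1) of the first match, or "" if there is no match.
def find_single_match(data, patron):
    match = re.search(patron, data, re.DOTALL)
    return match.group(1) if match else ""

html = '<a class="nmnext" href="/page/2" title="Next page">'
print(find_single_match(html, 'href="([^"]+)"'))
```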
@@ -4,18 +4,10 @@
    "active": true,
    "adult": false,
    "language": ["ita"],
    "thumbnail": "https://eurostreaming.cafe/wp-content/uploads/2017/08/logocafe.png",
    "bannermenu": "https://eurostreaming.cafe/wp-content/uploads/2017/08/logocafe.png",
    "thumbnail": "eurostreaming.png",
    "banner": "eurostreaming.png",
    "categories": ["tvshow","anime","vosi"],
    "settings": [
        {
            "id": "channel_host",
            "type": "text",
            "label": "Host del canale",
            "default": "https://eurostreaming.cafe",
            "enabled": true,
            "visible": true
        },
    "settings": [
        {
            "id": "include_in_global_search",
            "type": "bool",
@@ -4,146 +4,91 @@
# by Greko
# ------------------------------------------------------------
"""
Rewritten to take advantage of the support module.
Known issues:
The regexes do not capture everything...
versystream server: 'http://vcrypt.net/very/'  # VeryS does not decode the link: http://vcrypt.net/fastshield/
some servers, among them nowvideo.club, are not implemented in the servers folder
Some anime/cartoon sections do not work; some only have the episode list but no links
Some anime/cartoon sections do not work; some only have the episode list but no links,
others change their structure
The "news" section does not show episode titles

In episodios the option to configure the video library has been added

"""
import channelselector
from specials import autoplay, filtertools
from core import scrapertoolsV2, httptools, servertools, tmdb, support
import re
from core import scrapertoolsV2, httptools, support
from core.item import Item
from platformcode import logger, config

__channel__ = "eurostreaming"
host = config.get_channel_url(__channel__)
headers = ['Referer', host]
headers = [['Referer', host]]

list_servers = ['verystream', 'wstream', 'speedvideo', 'flashx', 'nowvideo', 'streamango', 'deltabit', 'openload']
list_quality = ['default']

__comprueba_enlaces__ = config.get_setting('comprueba_enlaces', 'eurostreaming')
__comprueba_enlaces_num__ = config.get_setting('comprueba_enlaces_num', 'eurostreaming')

IDIOMAS = {'Italiano': 'ITA', 'Sub-ITA':'vosi'}
list_language = IDIOMAS.values()
@support.menu
def mainlist(item):
    #import web_pdb; web_pdb.set_trace()
    support.log()
    itemlist = []

    support.menu(itemlist, 'Serie TV', 'serietv', host, contentType = 'tvshow')  # always use episode for serietv and anime!!
    support.menu(itemlist, 'Serie TV Archivio submenu', 'serietv', host + "/category/serie-tv-archive/", contentType = 'tvshow')
    support.menu(itemlist, 'Ultimi Aggiornamenti submenu', 'serietv', host + '/aggiornamento-episodi/', args='True', contentType = 'tvshow')
    support.menu(itemlist, 'Anime / Cartoni', 'serietv', host + '/category/anime-cartoni-animati/', contentType = 'tvshow')
    support.menu(itemlist, 'Cerca...', 'search', host, contentType = 'tvshow')

##    itemlist = filtertools.show_option(itemlist, item.channel, list_language, list_quality)
    # required for autoplay
    autoplay.init(item.channel, list_servers, list_quality)
    autoplay.show_option(item.channel, itemlist)

    support.channel_config(item, itemlist)

    return itemlist

def serietv(item):
    #import web_pdb; web_pdb.set_trace()
    # TV series list
    support.log()
    itemlist = []
    if item.args:
        # episode titles are folded into episode but are not visible in newest!!!
        patron = r'<span class="serieTitle" style="font-size:20px">(.*?).[^–]<a href="([^"]+)"\s+target="_blank">(.*?)<\/a>'
        listGroups = ['title', 'url', 'title2']
        patronNext = ''
    tvshow = [
        ('Archivio ', ['/category/serie-tv-archive/', 'peliculas', '', 'tvshow']),
        ('Aggiornamenti ', ['/aggiornamento-episodi/', 'peliculas', True, 'tvshow'])
    ]
    anime = ['/category/anime-cartoni-animati/']
    return locals()
@support.scrape
def peliculas(item):
    support.log()
    action = 'episodios'
    if item.args == True:
        patron = r'<span class="serieTitle" style="font-size:20px">(?P<title>.*?).[^–]<a href="(?P<url>[^"]+)"'\
                 '\s+target="_blank">(?P<episode>\d+x\d+) (?P<title2>.*?)</a>'
        # shows episode plus title + title2 in the news section
        def itemHook(item):
            item.show = item.episode + item.title
            return item
    else:
        patron = r'<div class="post-thumb">.*?\s<img src="([^"]+)".*?><a href="([^"]+)".*?>(.*?(?:\((\d{4})\)|(\d{4}))?)<\/a><\/h2>'
        listGroups = ['thumb', 'url', 'title', 'year', 'year']
        patron = r'<div class="post-thumb">.*?\s<img src="(?P<thumb>[^"]+)".*?>'\
                 '<a href="(?P<url>[^"]+)".*?>(?P<title>.*?(?:\((?P<year>\d{4})\)|(\4\d{4}))?)<\/a><\/h2>'

    patronNext='a class="next page-numbers" href="?([^>"]+)">Avanti »</a>'
    return locals()

    itemlist = support.scrape(item, patron_block='', patron=patron, listGroups=listGroups,
                              patronNext=patronNext, action='episodios')
    return itemlist
@support.scrape
def episodios(item):
##    import web_pdb; web_pdb.set_trace()
    support.log("episodios")
    itemlist = []

    support.log("episodios: %s" % item)
    action = 'findvideos'
    item.contentType = 'episode'
    # Load the page
    data = httptools.downloadpage(item.url).data
    data = httptools.downloadpage(item.url, headers=headers).data.replace("'", '"')
    #========
    if 'clicca qui per aprire' in data.lower():
        item.url = scrapertoolsV2.find_single_match(data, '"go_to":"([^"]+)"')
        item.url = item.url.replace("\\","")
        # Load the page
        data = httptools.downloadpage(item.url).data
        data = httptools.downloadpage(item.url, headers=headers).data.replace("'", '"')
    elif 'clicca qui</span>' in data.lower():
        item.url = scrapertoolsV2.find_single_match(data, '<h2 style="text-align: center;"><a href="([^"]+)">')
        # Load the page
        data = httptools.downloadpage(item.url).data
        data = httptools.downloadpage(item.url, headers=headers).data.replace("'", '"')
    #=========
    patron = r'(?:<\/span>\w+ STAGIONE\s\d+ (?:\()?(ITA|SUB ITA)(?:\))?<\/div>'\
             '<div class="su-spoiler-content su-clearfix" style="display:none">|'\
             '(?:\s|\Wn)?(?:<strong>)?(\d+&#.*?)(?:|–)?<a\s(.*?)<\/a><br\s\/>)'
##    '(?:<\/span>\w+ STAGIONE\s\d+ (?:\()?(ITA|SUB ITA)(?:\))?'\
##    '<\/div><div class="su-spoiler-content su-clearfix" style="display:none">|'\
##    '(?:\s|\Wn)?(?:<strong>)?(\d[&#].*?)(?:–|\W)?<a\s(.*?)<\/a><br\s\/>)'
##    '(?:<\/span>\w+ STAGIONE\s\d+ (?:\()?(ITA|SUB ITA)(?:\))?<\/div>'\
##    '<div class="su-spoiler-content su-clearfix" style="display:none">|'\
##    '\s(?:<strong>)?(\d[&#].*?)–<a\s(.*?)<\/a><br\s\/>)'
    listGroups = ['lang', 'title', 'url']
    itemlist = support.scrape(item, data=data, patron=patron,
                              listGroups=listGroups, action='findvideos')
    data = re.sub('\n|\t', ' ', data)
    patronBlock = r'(?P<block>STAGIONE\s\d+ (?:\()?(?P<lang>ITA|SUB ITA)(?:\))?<\/div>.*?)</div></div>'
    patron = r'(?:\s|\Wn)?(?:|<strong>)?(?P<episode>\d+&#\d+;\d+)(?:|</strong>) (?P<title>.*?)(?:|–)?<a\s(?P<url>.*?)<\/a><br\s\/>'

    # Allows configuring the video library without going to its own menu,
    # so the settings can be toggled on/off directly from
    # the episodes page
    itemlist.append(
        Item(channel='setting',
             action="channel_config",
             title=support.typo("Configurazione Videoteca color lime"),
             plot = 'Filtra per lingua utilizzando la configurazione della videoteca.\
                     Escludi i video in sub attivando "Escludi streams... " e aggiungendo sub in Parole',
             config='videolibrary', #item.channel,
             folder=False,
             thumbnail=channelselector.get_thumb('setting_0.png')
             ))

    itemlist = filtertools.get_links(itemlist, item, list_language)
    return itemlist
    return locals()
# =========== def findvideos =============

def findvideos(item):
    support.log()
    itemlist =[]

    # Required by FilterTools
##    itemlist = filtertools.get_links(itemlist, item, list_language)

    itemlist = support.server(item, item.url)
##    support.videolibrary(itemlist, item)

    return itemlist
    support.log('findvideos', item)
    return support.server(item, item.url)

# =========== def search =============
def search(item, texto):
    support.log()
    item.url = "%s/?s=%s" % (host, texto)
    item.contentType = 'tvshow'
    try:
        return serietv(item)
        return peliculas(item)
    # Keep searching in case of an error
    except:
        import sys
@@ -156,14 +101,14 @@ def newest(categoria):
    support.log()
    itemlist = []
    item = Item()
    item.contentType= 'episode'
    item.args= 'True'
    item.contentType = 'tvshow'
    item.args = True
    try:
        item.url = "%s/aggiornamento-episodi/" % host
        item.action = "serietv"
        itemlist = serietv(item)
        item.action = "peliculas"
        itemlist = peliculas(item)

        if itemlist[-1].action == "serietv":
        if itemlist[-1].action == "peliculas":
            itemlist.pop()

    # Keep searching in case of an error
@@ -174,6 +119,3 @@ def newest(categoria):
        return []

    return itemlist

def paginator(item):
    pass
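The rewritten eurostreaming functions end with `return locals()` under decorators like `@support.scrape`: the function only declares `patron`, `action`, etc., and the decorator turns those locals into items. The real support module is not part of this diff; the following is only a hedged sketch of the mechanism, with an invented minimal decorator:

```python
# Hedged sketch of the `return locals()` pattern used by @support.scrape
# above. The real support module is not shown in this diff; this invented
# decorator only illustrates the mechanism: the wrapped function returns
# its locals, and the decorator reads the declared settings from them.
def scrape(func):
    def wrapper(item):
        params = func(item)          # dict of the function's local variables
        # a real implementation would download item.url and apply patron;
        # here we just surface the declared locals
        return {'patron': params.get('patron', ''),
                'action': params.get('action', '')}
    return wrapper

@scrape
def peliculas(item):
    patron = r'<a href="([^"]+)">'
    action = 'episodios'
    return locals()

print(peliculas(None))
```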
@@ -4,7 +4,7 @@
    "active": true,
    "adult": true,
    "language": ["*"],
    "thumbnail": "",
    "thumbnail": "https://i.imgur.com/Orguh85.png",
    "banner": "",
    "categories": [
        "adult"

@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'https://fapality.com'

@@ -93,6 +92,6 @@ def play(item):
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl in matches:
        url = scrapedurl
        itemlist.append(item.clone(action="play", title=url, fulltitle = item.title, url=url))
        itemlist.append(item.clone(action="play", title=url, contentTitle = item.title, url=url))
    return itemlist
@@ -38,8 +38,8 @@ def mainlist(item):

    support.menu(itemlist, 'Novità bold', 'pelicuals_tv', host, 'tvshow')
    support.menu(itemlist, 'Serie TV bold', 'lista_serie', host, 'tvshow')
    support.menu(itemlist, 'Archivio A-Z submenu', 'list_az', host, 'tvshow', args=['serie'])
    support.menu(itemlist, 'Cerca', 'search', host, 'tvshow')
    ('Archivio A-Z ', [, 'list_az', ]), 'tvshow', args=['serie'])

    support.aplay(item, itemlist, list_servers, list_quality)
    support.channel_config(item, itemlist)

@@ -208,13 +208,13 @@ def findvideos(item):
    itemlist = []

    # data = httptools.downloadpage(item.url, headers=headers).data
    patron_block = '<div class="entry-content">(.*?)<footer class="entry-footer">'
    # bloque = scrapertools.find_single_match(data, patron_block)
    patronBlock = '<div class="entry-content">(?P<block>.*)<footer class="entry-footer">'
    # bloque = scrapertools.find_single_match(data, patronBlock)

    patron = r'<a href="([^"]+)">'
    # matches = re.compile(patron, re.DOTALL).findall(bloque)

    matches, data = support.match(item, patron, patron_block, headers)
    matches, data = support.match(item, patron, patronBlock, headers)

    for scrapedurl in matches:
        if 'is.gd' in scrapedurl:
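The findvideos() hunk above replaces a positional block regex with one using a named group, `(?P<block>...)`, presumably so `support.match` can extract the block by name rather than position (an assumption — support's internals are not in this diff). How the named group behaves, on an invented minimal HTML snippet:

```python
import re

# Illustration of the patronBlock named group from the hunk above,
# on an invented minimal HTML snippet.
html = '<div class="entry-content">LINKS<footer class="entry-footer">'
patronBlock = '<div class="entry-content">(?P<block>.*)<footer class="entry-footer">'
m = re.search(patronBlock, html, re.DOTALL)
print(m.group('block'))
```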
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'https://www.fetishshrine.com'
@@ -31,8 +31,8 @@ def mainlist(item):
    support.menu(itemlist, 'Film alta definizione bold', 'peliculas', host, contentType='movie', args='film')
    support.menu(itemlist, 'Categorie Film bold', 'categorias_film', host , contentType='movie', args='film')
    support.menu(itemlist, 'Categorie Serie bold', 'categorias_serie', host, contentType='tvshow', args='serie')
    support.menu(itemlist, '[COLOR blue]Cerca Film...[/COLOR] bold', 'search', host, contentType='movie', args='film')
    support.menu(itemlist, '[COLOR blue]Cerca Serie...[/COLOR] bold', 'search', host, contentType='tvshow', args='serie')

    autoplay.init(item.channel, list_servers, list_quality)
    autoplay.show_option(item.channel, itemlist)
@@ -1,15 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config

from platformcode import config, logger
from core import httptools

# BLOCKED BY ESET INTERNET SECURITY
def mainlist(item):
@@ -43,7 +40,6 @@ def play(item):
    itemlist = servertools.find_video_items(data=data)
    for videoitem in itemlist:
        videoitem.title = item.title
        videoitem.fulltitle = item.fulltitle
        videoitem.thumbnail = item.thumbnail
        videoitem.channel = item.channel
    return itemlist
@@ -4,8 +4,8 @@
    "active": true,
    "adult": false,
    "language": ["ita"],
    "thumbnail": "https://www.filmpertutti.club/wp-content/themes/blunge/assets/logo.png",
    "banner": "https://www.filmpertutti.club/wp-content/themes/blunge/assets/logo.png",
    "thumbnail": "",
    "banner": "",
    "categories": ["tvshow","movie"],
    "settings": [
        {
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://es.foxtube.com'

@@ -15,7 +14,9 @@ def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Ultimos" , action="lista", url=host))
    itemlist.append( Item(channel=item.channel, title="PornStar" , action="catalogo", url=host + '/actrices/'))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))

    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist

@@ -33,6 +34,31 @@ def search(item, texto):
        return []


def catalogo(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    patron = '<a class="tco5" href="([^"]+)">.*?'
    patron += 'data-origen="([^"]+)" alt="([^"]+)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    scrapertools.printMatches(matches)
    for scrapedurl,scrapedthumbnail,scrapedtitle in matches:
        scrapedplot = ""
        scrapedurl = urlparse.urljoin(item.url,scrapedurl)
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=scrapedthumbnail, plot=scrapedplot) )
    # <a class="bgco2 tco3" rel="next" href="/actrices/2/">></a>
    next_page = scrapertools.find_single_match(data,'<a class="bgco2 tco3" rel="next" href="([^"]+)">></a>')
    if next_page!="":
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append(item.clone(action="lista" , title="Página Siguiente >>", text_color="blue", url=next_page) )
    return itemlist


    return itemlist


def categorias(item):
    logger.info()
    itemlist = []
@@ -54,6 +80,8 @@ def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    if "/actrices/" in item.url:
        data=scrapertools.find_single_match(data,'<section class="container">(.*?)>Actrices similares</h3>')
    patron = '<a class="thumb tco1" href="([^"]+)">.*?'
    patron += 'src="([^"]+)".*?'
    patron += 'alt="([^"]+)".*?'
@@ -71,7 +99,7 @@ def lista(item):
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
                              fanart=thumbnail, plot=plot, contentTitle = contentTitle))
    next_page = scrapertools.find_single_match(data,'<a class="bgco2 tco3" rel="next" href="([^"]+)">></a>')
    next_page = scrapertools.find_single_match(data,'<a class="bgco2 tco3" rel="next" href="([^"]+)">></a>')
    if next_page!="":
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append(item.clone(action="lista" , title="Página Siguiente >>", text_color="blue", url=next_page) )
@@ -82,13 +110,14 @@ def play(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    url = scrapertools.find_single_match(data,'<iframe src="([^"]+)"')
    url = scrapertools.find_single_match(data,'<iframe title="video" src="([^"]+)"')
    url = url.replace("https://flashservice.xvideos.com/embedframe/", "https://www.xvideos.com/video") + "/"
    data = httptools.downloadpage(url).data
    patron = 'html5player.setVideoHLS\\(\'([^\']+)\''
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl in matches:
        scrapedurl = scrapedurl.replace("\/", "/")
        itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
        itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
                             thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
    return itemlist
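The play() hunk above rewrites the embed-frame URL into the canonical video-page URL with a plain string replace. A standalone sketch of that rewrite — the URL forms are taken straight from the diff, while the video id in the example is invented:

```python
# Sketch of the embed-to-page URL rewrite from play() above.
# URL forms come from the diff; the id "12345" is illustrative only.
def embed_to_page(embed_url):
    return embed_url.replace(
        "https://flashservice.xvideos.com/embedframe/",
        "https://www.xvideos.com/video") + "/"

print(embed_to_page("https://flashservice.xvideos.com/embedframe/12345"))
# -> https://www.xvideos.com/video12345/
```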
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'https://frprn.com'

@@ -97,6 +96,6 @@ def play(item):
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl in matches:
        title = scrapedurl
        itemlist.append(item.clone(action="play", title=title, fulltitle = scrapedurl, url=scrapedurl))
        itemlist.append(item.clone(action="play", title=title, contentTitle = scrapedurl, url=scrapedurl))
    return itemlist
@@ -1,16 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://freepornstreams.org'
host = 'http://freepornstreams.org'  # i.e. http://xxxstreams.org


def mainlist(item):
@@ -18,8 +16,8 @@ def mainlist(item):
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Peliculas" , action="lista", url=host + "/free-full-porn-movies/"))
    itemlist.append( Item(channel=item.channel, title="Videos" , action="lista", url=host + "/free-stream-porn/"))
    itemlist.append( Item(channel=item.channel, title="Canal" , action="catalogo", url=host))
    itemlist.append( Item(channel=item.channel, title="Categoria" , action="categorias", url=host))
    itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist

@@ -37,35 +35,24 @@ def search(item, texto):
        return []


def catalogo(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = scrapertools.find_single_match(data,'>Top Sites</a>(.*?)</aside>')
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    patron = '<li id="menu-item-\d+".*?<a href="([^"]+)">([^"]+)</a></li>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle in matches:
        scrapedplot = ""
        scrapedthumbnail = ""
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=scrapedthumbnail, plot=scrapedplot) )
    return itemlist

def categorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = scrapertools.find_single_match(data,'Top Tags(.*?)</ul>')
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    if item.title == "Categorias" :
        data = scrapertools.find_single_match(data,'>Top Tags(.*?)</ul>')
    else:
        data = scrapertools.find_single_match(data,'>Top Sites</a>(.*?)</aside>')
    patron = '<a href="([^"]+)">(.*?)</a>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle in matches:
        scrapedplot = ""
        scrapedthumbnail = ""
        scrapedurl = scrapedurl.replace ("http://freepornstreams.org/freepornst/stout.php?s=100,75,65:*&u=" , "")
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=scrapedthumbnail, plot=scrapedplot) )
        if not "Featured" in scrapedtitle:
            scrapedplot = ""
            scrapedthumbnail = ""
            scrapedurl = scrapedurl.replace ("http://freepornstreams.org/freepornst/stout.php?s=100,75,65:*&u=" , "")
            itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                                  thumbnail=scrapedthumbnail, plot=scrapedplot) )
    return itemlist


@@ -79,12 +66,15 @@ def lista(item):
    patron += '<img src="([^"]+)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,scrapedthumbnail in matches:
        calidad = scrapertools.find_single_match(scrapedtitle, '(\(.*?\))')
        title = "[COLOR yellow]" + calidad + "[/COLOR] " + scrapedtitle.replace( "%s" % calidad, "")
        if '/HD' in scrapedtitle : title= "[COLOR red]" + "HD" + "[/COLOR] " + scrapedtitle
        elif 'SD' in scrapedtitle : title= "[COLOR red]" + "SD" + "[/COLOR] " + scrapedtitle
        elif 'FullHD' in scrapedtitle : title= "[COLOR red]" + "FullHD" + "[/COLOR] " + scrapedtitle
        elif '1080' in scrapedtitle : title= "[COLOR red]" + "1080p" + "[/COLOR] " + scrapedtitle
        else: title = scrapedtitle
        thumbnail = scrapedthumbnail.replace("jpg#", "jpg")
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
                              fanart=thumbnail, plot=plot, fulltitle=title) )
        itemlist.append( Item(channel=item.channel, action="findvideos", title=title, url=scrapedurl, thumbnail=thumbnail,
                              fanart=thumbnail, plot=plot, contentTitle=title) )
    next_page = scrapertools.find_single_match(data, '<div class="nav-previous"><a href="([^"]+)"')
    if next_page!="":
        next_page = urlparse.urljoin(item.url,next_page)
@@ -92,14 +82,14 @@ def lista(item):
    return itemlist


def play(item):
    logger.info()
def findvideos(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    itemlist = servertools.find_video_items(data=data)
    for videoitem in itemlist:
        videoitem.title = item.fulltitle
        videoitem.fulltitle = item.fulltitle
        videoitem.thumbnail = item.thumbnail
        videochannel=item.channel
    data = re.sub(r"\n|\r|\t|&amp;|\s{2}|&nbsp;", "", data)
    patron = '<a href="([^"]+)" rel="nofollow"[^<]+>(?:Streaming|Download)'
    matches = scrapertools.find_multiple_matches(data, patron)
    for url in matches:
        if not "ubiqfile" in url:
            itemlist.append(item.clone(action='play',title="%s", url=url))
    itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
    return itemlist
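The lista() hunk above replaces the parenthesized-quality extraction with an if/elif chain of substring checks. The same label logic as plain code, with the Kodi `[COLOR]` markup dropped; note (an observation, not a claim about real titles) that because `'SD'` is tested before `'FullHD'`, a title containing both substrings would get the `SD` label:

```python
# The quality-label chain from lista() above, branch order preserved.
# [COLOR] markup is dropped; only the label choice is shown.
def quality_label(title):
    if '/HD' in title:
        return 'HD'
    elif 'SD' in title:       # tested before 'FullHD': order matters
        return 'SD'
    elif 'FullHD' in title:
        return 'FullHD'
    elif '1080' in title:
        return '1080p'
    return ''

print(quality_label('Hot Movie /HD 2019'))   # -> HD
print(quality_label('Clip 1080 WEB'))        # -> 1080p
```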
15
channels/gotporn.json
Executable file
@@ -0,0 +1,15 @@
{
    "id": "gotporn",
    "name": "gotporn",
    "active": true,
    "adult": true,
    "language": ["*"],
    "thumbnail": "https://cdn2-static-cf.gotporn.com/desktop/img/gotporn-logo.png",
    "banner": "",
    "categories": [
        "adult"
    ],
    "settings": [
    ]
}
129  channels/gotporn.py  Executable file
@@ -0,0 +1,129 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools

host = 'https://www.gotporn.com'


def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/?page=1"))
    itemlist.append( Item(channel=item.channel, title="Mejor valorados" , action="lista", url=host + "/top-rated?page=1"))
    itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/most-viewed?page=1"))
    itemlist.append( Item(channel=item.channel, title="Longitud" , action="lista", url=host + "/longest?page=1"))

    itemlist.append( Item(channel=item.channel, title="Canal" , action="catalogo", url=host + "/channels?page=1"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories"))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = host + "/results?search_query=%s" % texto
    try:
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def categorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>|<br/>", "", data)

    patron = '<a href="([^"]+)">'
    patron += '<span class="text">([^<]+)</span>'
    patron += '<span class="num">([^<]+)</span>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,cantidad in matches:
        scrapedplot = ""
        scrapedtitle = "%s %s" % (scrapedtitle,cantidad)
        scrapedurl = scrapedurl + "?page=1"
        thumbnail = ""
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=thumbnail , plot=scrapedplot) )
    return itemlist


def catalogo(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>|<br/>", "", data)
    logger.debug(data)
    patron = '<header class="clearfix" itemscope>.*?'
    patron += '<a href="([^"]+)".*?'
    patron += '<img src="([^"]+)" alt="([^"]+)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedthumbnail,scrapedtitle in matches:
        scrapedplot = ""
        scrapedurl = scrapedurl + "?page=1"
        thumbnail = "https:" + scrapedthumbnail
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=thumbnail , plot=scrapedplot) )
    next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="btn btn-secondary"><span class="text">Next')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="catalogo", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )
    return itemlist


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>|<br/>", "", data)
    patron = '<li class="video-item poptrigger".*?'
    patron += 'href="([^"]+)" data-title="([^"]+)".*?'
    patron += '<span class="duration">(.*?)</span>.*?'
    patron += 'src=\'([^\']+)\'.*?'
    patron += '<h3 class="video-thumb-title(.*?)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,scrapedtime,scrapedthumbnail,quality in matches:
        scrapedtime = scrapedtime.strip()
        title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
        if quality:
            title = "[COLOR yellow]%s[/COLOR] [COLOR red]HD[/COLOR] %s" % (scrapedtime,scrapedtitle)
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
                              fanart=thumbnail, plot=plot,))
    next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="btn btn-secondary')
    if "categories" in item.url:
        next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="btn btn-secondary paginate-show-more')
    if "search_query" in item.url:
        next_page = scrapertools.find_single_match(data, '<link rel=\'next\' href="([^"]+)">')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )
    return itemlist


def play(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>", "", data)
    patron = '<source src="([^"]+)"'
    matches = scrapertools.find_multiple_matches(data, patron)
    for url in matches:
        url += "|Referer=%s" % host
        itemlist.append(item.clone(action="play", title = item.title, url=url ))
    return itemlist
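The pagination logic in the gotporn channel above leans on `scrapertools.find_single_match`. A minimal stdlib sketch of what such a helper does (the name and the empty-string fallback are assumptions inferred from how the channel uses it; the real KoD implementation may differ):

```python
import re

def find_single_match(data, patron):
    # Return the first captured group, or "" when the pattern does not
    # match -- the channel code relies on "" meaning "no next page".
    match = re.search(patron, data, re.DOTALL)
    return match.group(1) if match else ""

html = '<a href="/most-viewed?page=2" class="btn btn-secondary"><span class="text">Next</span></a>'
next_page = find_single_match(html, '<a href="([^"]+)" class="btn btn-secondary"><span class="text">Next')
print(next_page)  # /most-viewed?page=2
```

The returned relative path is then resolved against `item.url` with `urlparse.urljoin`, which is why an empty match can safely short-circuit the "next page" item.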
@@ -4,15 +4,18 @@
# Thanks to Icarus crew & Alfa addon & 4l3x87
# ------------------------------------------------------------

import re
"""
Known issues:
- on the categories page, tmdb results show up in some entries
"""

from core import httptools, scrapertools, support
from core import tmdb
from core import scrapertoolsV2, httptools, support
from core.item import Item
from core.support import log
from platformcode import logger, config
from core.support import log

__channel__ = 'guardaserieclick'

host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
@@ -30,34 +33,154 @@ def mainlist(item):

    itemlist = []

    support.menu(itemlist, 'Novità bold', 'serietvaggiornate', "%s/lista-serie-tv" % host, 'tvshow')
    support.menu(itemlist, 'Nuove serie', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow')
    support.menu(itemlist, 'Serie inedite Sub-ITA', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow', args=['inedite'])
    support.menu(itemlist, 'Da non perdere bold', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'da non perdere'])
    support.menu(itemlist, 'Classiche bold', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'classiche'])
    support.menu(itemlist, 'Anime', 'lista_serie', "%s/category/animazione/" % host, 'tvshow')
    support.menu(itemlist, 'Categorie', 'categorie', host, 'tvshow', args=['serie'])
    support.menu(itemlist, 'Cerca', 'search', host, 'tvshow', args=['serie'])
    support.menu(itemlist, 'Serie', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['news'])
    support.menu(itemlist, 'Ultimi Aggiornamenti submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args= ['update'])
    support.menu(itemlist, 'Categorie', 'categorie', host, 'tvshow', args=['cat'])
    support.menu(itemlist, 'Serie inedite Sub-ITA submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['inedite'])
    support.menu(itemlist, 'Da non perdere bold submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'da non perdere'])
    support.menu(itemlist, 'Classiche bold submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'classiche'])
    support.menu(itemlist, 'Disegni che si muovono sullo schermo per magia bold', 'tvserie', "%s/category/animazione/" % host, 'tvshow', args= ['anime'])

    # autoplay
    support.aplay(item, itemlist, list_servers, list_quality)
    # channel configuration
    support.channel_config(item, itemlist)

    return itemlist
@support.scrape
def serietv(item):
    log('serietv ->\n')
    action = 'episodios'
    listGroups = ['url', 'thumb', 'title']
    patron = r'<a href="([^"]+)".*?> <img\s.*?src="([^"]+)" \/>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<\/p>'
    if 'news' in item.args:
        patron_block = r'<div class="container container-title-serie-new container-scheda" meta-slug="new">(.*?)</div></div><div'
    elif 'inedite' in item.args:
        patron_block = r'<div class="container container-title-serie-ined container-scheda" meta-slug="ined">(.*?)</div></div><div'
    elif 'da non perdere' in item.args:
        patron_block = r'<div class="container container-title-serie-danonperd container-scheda" meta-slug="danonperd">(.*?)</div></div><div'
    elif 'classiche' in item.args:
        patron_block = r'<div class="container container-title-serie-classiche container-scheda" meta-slug="classiche">(.*?)</div></div><div'
    elif 'update' in item.args:
        listGroups = ['url', 'thumb', 'episode', 'lang', 'title']
        patron = r'rel="nofollow" href="([^"]+)"[^>]+> <img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(\d+.\d+) \((.+?)\).<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<[^>]+>'
        patron_block = r'meta-slug="lastep">(.*?)</div></div><div'
        # allows showing episode + title + title2 in Novità
        def itemHook(item):
            item.show = item.episode + item.title
            return item
    return locals()

@support.scrape
def tvserie(item):

    action = 'episodios'
    listGroups = ['url', 'thumb', 'title']
    patron = r'<a\shref="([^"]+)".*?>\s<img\s.*?src="([^"]+)" />[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)</p></div>'
    patron_block = r'<div\sclass="col-xs-\d+ col-sm-\d+-\d+">(.*?)<div\sclass="container-fluid whitebg" style="">'
    patronNext = r'<link\s.*?rel="next"\shref="([^"]+)"'

    return locals()


@support.scrape
def episodios(item):
    log('episodios ->\n')
    item.contentType = 'episode'

    action = 'findvideos'
    listGroups = ['episode', 'lang', 'title2', 'plot', 'title', 'url']
    patron = r'class="number-episodes-on-img"> (\d+.\d+)(?:|[ ]\((.*?)\))<[^>]+>'\
             '[^>]+>[^>]+>[^>]+>[^>]+>(.*?)<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>'\
             '(.*?)<[^>]+></div></div>.<span\s.+?meta-serie="(.*?)" meta-stag=(.*?)</span>'

    return locals()

def findvideos(item):
    log()
    return support.server(item, item.url)


@support.scrape
def categorie(item):
    log()

    action = 'tvserie'
    listGroups = ['url', 'title']
    patron = r'<li>\s<a\shref="([^"]+)"[^>]+>([^<]+)</a></li>'
    patron_block = r'<ul\sclass="dropdown-menu category">(.*?)</ul>'

    return locals()

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def newest(categoria):
    log()
    itemlist = []
    item = Item()
    item.contentType= 'episode'
    item.args = 'update'
    try:
        if categoria == "series":
            item.url = "%s/lista-serie-tv" % host
            item.action = "serietvaggiornate"
            itemlist = serietvaggiornate(item)
            item.action = "serietv"
            itemlist = serietv(item)

            if itemlist[-1].action == "serietvaggiornate":
            if itemlist[-1].action == "serietv":
                itemlist.pop()

    # Continue the search on error
@@ -69,207 +192,18 @@ def newest(categoria):

    return itemlist

# ================================================================================================================

# ----------------------------------------------------------------------------------------------------------------
def search(item, texto):
    log(texto)
    item.url = host + "/?s=" + texto
    item.args = 'cerca'
    try:
        return lista_serie(item)
        return tvserie(item)
    # Continue the search on error
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def cleantitle(scrapedtitle):
    scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.strip()).replace('"', "'")
    return scrapedtitle.strip()
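`cleantitle` above wraps `scrapertools.decodeHtmlentities`; the standard library's `html.unescape` does essentially the same job. A self-contained sketch of the same cleaning pipeline (the stdlib substitution is an assumption, not what KoD actually calls):

```python
import html

def cleantitle(scrapedtitle):
    # Decode HTML entities, normalise double quotes to single quotes,
    # and trim whitespace -- mirroring the channel's cleantitle().
    scrapedtitle = html.unescape(scrapedtitle.strip()).replace('"', "'")
    return scrapedtitle.strip()

print(cleantitle('  Serie &amp; "Film"  '))  # Serie & 'Film'
```

Normalising quotes matters here because the cleaned title is later embedded inside double-quoted regex fragments.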
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------

def nuoveserie(item):
    log()
    itemlist = []

    patron_block = ''
    if 'inedite' in item.args:
        patron_block = r'<div class="container container-title-serie-ined container-scheda" meta-slug="ined">(.*?)</div></div><div'
    elif 'da non perdere' in item.args:
        patron_block = r'<div class="container container-title-serie-danonperd container-scheda" meta-slug="danonperd">(.*?)</div></div><div'
    elif 'classiche' in item.args:
        patron_block = r'<div class="container container-title-serie-classiche container-scheda" meta-slug="classiche">(.*?)</div></div><div'
    else:
        patron_block = r'<div class="container container-title-serie-new container-scheda" meta-slug="new">(.*?)</div></div><div'

    patron = r'<a href="([^"]+)".*?><img\s.*?src="([^"]+)" \/>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<\/p>'

    matches = support.match(item, patron, patron_block, headers)[0]

    for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
        scrapedtitle = cleantitle(scrapedtitle)

        itemlist.append(
            Item(channel=item.channel,
                 action="episodios",
                 contentType="tvshow",
                 title=scrapedtitle,
                 fulltitle=scrapedtitle,
                 url=scrapedurl,
                 show=scrapedtitle,
                 thumbnail=scrapedthumbnail,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    return itemlist


# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def serietvaggiornate(item):
    log()
    itemlist = []

    patron_block = r'<div class="container\s*container-title-serie-lastep\s*container-scheda" meta-slug="lastep">(.*?)<\/div><\/div><div'
    patron = r'<a rel="nofollow"\s*href="([^"]+)"[^>]+><img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<[^>]+>'

    matches = support.match(item, patron, patron_block, headers)[0]

    for scrapedurl, scrapedthumbnail, scrapedep, scrapedtitle in matches:
        episode = re.compile(r'^(\d+)x(\d+)', re.DOTALL).findall(scrapedep)  # grab season and episode
        scrapedtitle = cleantitle(scrapedtitle)

        contentlanguage = ""
        if 'sub-ita' in scrapedep.strip().lower():
            contentlanguage = 'Sub-ITA'

        extra = r'<span\s.*?meta-stag="%s" meta-ep="%s" meta-embed="([^"]+)"\s.*?embed2="([^"]+)?"\s.*?embed3="([^"]+)?"[^>]*>' % (
            episode[0][0], episode[0][1].lstrip("0"))

        infoLabels = {}
        infoLabels['episode'] = episode[0][1].zfill(2)
        infoLabels['season'] = episode[0][0]

        title = str(
            "%s - %sx%s %s" % (scrapedtitle, infoLabels['season'], infoLabels['episode'], contentlanguage)).strip()

        itemlist.append(
            Item(channel=item.channel,
                 action="findepvideos",
                 contentType="tvshow",
                 title=title,
                 show=scrapedtitle,
                 fulltitle=scrapedtitle,
                 url=scrapedurl,
                 extra=extra,
                 thumbnail=scrapedthumbnail,
                 contentLanguage=contentlanguage,
                 infoLabels=infoLabels,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    return itemlist


# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def categorie(item):
    log()
    return support.scrape(item, r'<li>\s<a\shref="([^"]+)"[^>]+>([^<]+)</a></li>', ['url', 'title'], patron_block=r'<ul\sclass="dropdown-menu category">(.*?)</ul>', headers=headers, action="lista_serie")


# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def lista_serie(item):
    log()
    itemlist = []

    patron_block = r'<div\sclass="col-xs-\d+ col-sm-\d+-\d+">(.*?)<div\sclass="container-fluid whitebg" style="">'
    patron = r'<a\shref="([^"]+)".*?>\s<img\s.*?src="([^"]+)" />[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)</p></div>'

    return support.scrape(item, patron, ['url', 'thumb', 'title'], patron_block=patron_block, patronNext=r"<link\s.*?rel='next'\shref='([^']*)'", action='episodios')


# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def episodios(item):
    log()
    itemlist = []

    patron = r'<div\sclass="[^"]+">\s([^<]+)<\/div>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)?[^>]+>[^>]+>[^>]+>[^>]+>[^>]+><p[^>]+>([^<]+)<[^>]+>[^>]+>[^>]+>'
    patron += r'[^"]+".*?serie="([^"]+)".*?stag="([0-9]*)".*?ep="([0-9]*)"\s'
    patron += r'.*?embed="([^"]+)"\s.*?embed2="([^"]+)?"\s.*?embed3="([^"]+)?"?[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>\s?'
    patron += r'(?:<img\sclass="[^"]+" meta-src="([^"]+)"[^>]+>|<img\sclass="[^"]+" src="" data-original="([^"]+)"[^>]+>)?'

    matches = support.match(item, patron, headers=headers)[0]
    for scrapedtitle, scrapedepisodetitle, scrapedplot, scrapedserie, scrapedseason, scrapedepisode, scrapedurl, scrapedurl2, scrapedurl3, scrapedthumbnail, scrapedthumbnail2 in matches:
        scrapedtitle = cleantitle(scrapedtitle)
        scrapedepisode = scrapedepisode.zfill(2)
        scrapedepisodetitle = cleantitle(scrapedepisodetitle)
        title = str("%sx%s %s" % (scrapedseason, scrapedepisode, scrapedepisodetitle)).strip()
        if 'SUB-ITA' in scrapedtitle:
            title += " "+support.typo("Sub-ITA", '_ [] color kod')

        infoLabels = {}
        infoLabels['season'] = scrapedseason
        infoLabels['episode'] = scrapedepisode
        itemlist.append(
            Item(channel=item.channel,
                 action="findvideos",
                 title=support.typo(title, 'bold'),
                 fulltitle=scrapedtitle,
                 url=scrapedurl + "\r\n" + scrapedurl2 + "\r\n" + scrapedurl3,
                 contentType="episode",
                 plot=scrapedplot,
                 contentSerieName=scrapedserie,
                 contentLanguage='Sub-ITA' if 'Sub-ITA' in title else '',
                 infoLabels=infoLabels,
                 thumbnail=scrapedthumbnail2 if scrapedthumbnail2 != '' else scrapedthumbnail,
                 folder=True))

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)

    support.videolibrary(itemlist, item)

    return itemlist


# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findepvideos(item):
    log()
    data = httptools.downloadpage(item.url, headers=headers, ignore_response_code=True).data
    matches = scrapertools.find_multiple_matches(data, item.extra)
    data = "\r\n".join(matches[0])
    item.contentType = 'movie'
    return support.server(item, data=data)
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findvideos(item):
    log()
    if item.contentType == 'tvshow':
        data = httptools.downloadpage(item.url, headers=headers).data
        matches = scrapertools.find_multiple_matches(data, item.extra)
        data = "\r\n".join(matches[0])
    else:
        log(item.url)
        data = item.url
    return support.server(item, data)
@@ -4,8 +4,8 @@
    "active": true,
    "adult": false,
    "language": ["ita"],
    "thumbnail": "https:\/\/guardogratis.com\/wp-content\/uploads\/2018\/01\/Logo-4.png",
    "bannermenu": "https:\/\/guardogratis.com\/wp-content\/uploads\/2018\/01\/Logo-4.png",
    "thumbnail": "",
    "bannermenu": "",
    "categories": ["movie","tvshow"],
    "settings": [
        {
@@ -1,14 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urllib
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://www.hclips.com'
@@ -1,14 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urllib
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://www.hdzog.com'
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://hellporno.com'

@@ -61,7 +60,7 @@ def lista(item):
    data = re.sub(r"\n|\r|\t| |<br>", "", data)
    patron = '<div class="video-thumb"><a href="([^"]+)" class="title".*?>([^"]+)</a>.*?'
    patron += '<span class="time">([^<]+)</span>.*?'
    patron += '<video poster="([^"]+)"'
    patron += '<video muted poster="([^"]+)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,duracion,scrapedthumbnail in matches:
        url = scrapedurl
@@ -85,6 +84,6 @@ def play(item):
    scrapedurl = scrapertools.find_single_match(data,'<source data-fluid-hd src="([^"]+)/?br=\d+"')
    if scrapedurl=="":
        scrapedurl = scrapertools.find_single_match(data,'<source src="([^"]+)/?br=\d+"')
    itemlist.append(item.clone(action="play", title=scrapedurl, fulltitle = item.title, url=scrapedurl))
    itemlist.append(item.clone(action="play", title=scrapedurl, url=scrapedurl))
    return itemlist
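The hellporno `play()` hunk above tries an HD `<source>` tag first and only falls back to the plain one when nothing matched. A simplified, self-contained sketch of that fallback (patterns shortened for illustration; the real channel also strips a trailing `br=` query fragment):

```python
import re

def pick_source(data):
    # Prefer the HD <source> variant; fall back to the plain one,
    # mirroring the two-step lookup in play().
    match = re.search('<source data-fluid-hd src="([^"]+)', data)
    if not match:
        match = re.search('<source src="([^"]+)', data)
    return match.group(1) if match else ""

html = '<video><source src="http://example.com/v.mp4?br=500"></video>'
print(pick_source(html))  # http://example.com/v.mp4?br=500
```

Trying the more specific pattern first keeps the fallback cheap: one extra `re.search` only on pages without an HD source.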
@@ -4,9 +4,9 @@
    "active": true,
    "adult": true,
    "language": ["*"],
    "thumbnail": "https://dl.dropboxusercontent.com/u/30248079/hentai_id.png",
    "banner": "https://dl.dropboxusercontent.com/u/30248079/hentai_id2.png",
    "thumbnail": "http://www.hentai-id.tv/wp-content/themes/moviescript/assets/img/logo.png",
    "banner": "http://www.hentai-id.tv/wp-content/themes/moviescript/assets/img/background.jpg",
    "categories": [
        "adult"
    ]
}
}
@@ -1,14 +1,12 @@
# -*- coding: utf-8 -*-

import re

import urlparse

from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config

CHANNEL_HOST = "http://hentai-id.tv/"
@@ -70,11 +68,11 @@ def series(item):
    action = "episodios"

    for url, thumbnail, title in matches:
        fulltitle = title
        contentTitle = title
        show = title
        # logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(title, url, thumbnail))
        itemlist.append(Item(channel=item.channel, action=action, title=title, url=url, thumbnail=thumbnail,
                             show=show, fulltitle=fulltitle, fanart=thumbnail, folder=True))
                             show=show, fanart=thumbnail, folder=True))

    if pagination:
        page = scrapertools.find_single_match(pagination, '>(?:Page|Página)\s*(\d+)\s*(?:of|de)\s*\d+<')
@@ -106,7 +104,7 @@ def episodios(item):

    # logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(title, url, thumbnail))
    itemlist.append(Item(channel=item.channel, action="findvideos", title=title, url=url,
                         thumbnail=thumbnail, plot=plot, show=item.show, fulltitle="%s %s" % (item.show, title),
                         thumbnail=thumbnail, plot=plot,
                         fanart=thumbnail))

    return itemlist
@@ -116,20 +114,33 @@ def findvideos(item):
|
||||
logger.info()
|
||||
|
||||
data = httptools.downloadpage(item.url).data
|
||||
|
||||
video_urls = []
|
||||
down_urls = []
|
||||
patron = '<(?:iframe)?(?:IFRAME)?\s*(?:src)?(?:SRC)?="([^"]+)"'
|
||||
matches = re.compile(patron, re.DOTALL).findall(data)
|
||||
|
||||
for url in matches:
|
||||
if 'goo.gl' in url:
|
||||
if 'goo.gl' in url or 'tinyurl' in url:
|
||||
video = httptools.downloadpage(url, follow_redirects=False, only_headers=True).headers["location"]
|
||||
matches.remove(url)
|
||||
matches.append(video)
|
||||
video_urls.append(video)
|
||||
else:
|
||||
video_urls.append(url)
|
||||
paste = scrapertools.find_single_match(data, 'https://gpaste.us/([a-zA-Z0-9]+)')
|
||||
if paste:
|
||||
try:
|
||||
new_data = httptools.downloadpage('https://gpaste.us/'+paste).data
|
||||
|
||||
bloq = scrapertools.find_single_match(new_data, 'id="input_text">(.*?)</div>')
|
||||
matches = bloq.split('<br>')
|
||||
for url in matches:
|
||||
down_urls.append(url)
|
||||
except:
|
||||
pass
|
||||
video_urls.extend(down_urls)
|
||||
from core import servertools
|
||||
itemlist = servertools.find_video_items(data=",".join(matches))
|
||||
itemlist = servertools.find_video_items(data=",".join(video_urls))
|
||||
for videoitem in itemlist:
|
||||
videoitem.fulltitle = item.fulltitle
|
||||
videoitem.contentTitle = item.contentTitle
|
||||
videoitem.channel = item.channel
|
||||
videoitem.thumbnail = item.thumbnail
|
||||
|
||||
|
||||
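The findvideos hunk above first collects iframe URLs, then routes goo.gl/tinyurl short links through a redirect lookup before handing everything to servertools. A minimal sketch of that partitioning step (function and variable names are illustrative, not from the channel code):

```python
def split_short_links(urls, shorteners=("goo.gl", "tinyurl")):
    # Separate direct video links from shortened ones that still need
    # a redirect lookup (findvideos reads the Location header for these).
    direct, shortened = [], []
    for url in urls:
        if any(s in url for s in shorteners):
            shortened.append(url)
        else:
            direct.append(url)
    return direct, shortened
```

The real channel then fetches each shortened URL with `follow_redirects=False, only_headers=True` and keeps only the `location` header.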
@@ -1,14 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urllib
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'https://hotmovs.com'

@@ -28,7 +28,7 @@ def mainlist(item):
    support.menu(itemlist, 'Ultime Uscite', 'peliculas', host + "/category/serie-tv/", "episode")
    support.menu(itemlist, 'Ultimi Episodi', 'peliculas', host + "/ultimi-episodi/", "episode", 'latest')
    support.menu(itemlist, 'Categorie', 'menu', host, "episode", args="Serie-Tv per Genere")
    support.menu(itemlist, 'Cerca...', 'search', host, 'episode', args='serie')

    autoplay.init(item.channel, list_servers, [])
    autoplay.show_option(item.channel, itemlist)
@@ -109,8 +109,7 @@ def menu_info(item):
    itemlist = []
    video_urls, data = play(item.clone(extra="play_menu"))
    itemlist.append(item.clone(action="play", title="Ver -- %s" % item.title, video_urls=video_urls))
    bloque = scrapertools.find_single_match(data, '<div class="carousel-inner"(.*?)<div class="container">')
    matches = scrapertools.find_multiple_matches(bloque, 'src="([^"]+)"')
    matches = scrapertools.find_multiple_matches(data, '<a href="([^"]+)" class="item" rel="screenshots"')
    for i, img in enumerate(matches):
        if i == 0:
            continue
@@ -10,13 +10,5 @@
        "adult"
    ],
    "settings": [
        {
            "id": "include_in_global_search",
            "type": "bool",
            "label": "Incluir en busqueda global",
            "default": false,
            "enabled": false,
            "visible": false
        }
    ]
}
@@ -6,7 +6,6 @@ from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config

host = 'http://javus.net/'
@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'https://www.javwhores.com/'

@@ -74,9 +74,13 @@ def lista(item):
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
                              plot=plot, contentTitle = title))
    next_page = scrapertools.find_single_match(data, '<li class="next"><a href="([^"]+)"')
    if "#videos" in next_page:
        next_page = scrapertools.find_single_match(data, 'data-parameters="sort_by:post_date;from:(\d+)">Next')
        next = scrapertools.find_single_match(item.url, '(.*?/)\d+')
        next_page = next + "%s/" % next_page
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append(item.clone(action="lista", title="Página Siguiente >>" , text_color="blue", url=next_page ) )
        itemlist.append(item.clone(action="lista", title= next_page, text_color="blue", url=next_page ) )
    return itemlist

@@ -92,7 +96,7 @@ def play(item):
    if scrapedurl == "" :
        scrapedurl = scrapertools.find_single_match(data, 'video_url: \'([^\']+)\'')

    itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, fulltitle=item.title, url=scrapedurl,
    itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, url=scrapedurl,
                         thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
    return itemlist
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://jizzbunker.com/es'

@@ -87,7 +86,7 @@ def play(item):
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl in matches:
        scrapedurl = scrapedurl.replace("https", "http")
        itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
        itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
                             thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
    return itemlist
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://xxx.justporno.tv'

@@ -93,10 +92,6 @@ def lista(item):
        next_page = "%s?mode=async&function=get_block&block_id=list_videos_common_videos_list" \
                    "&sort_by=post_date&from=%s" % (item.url, next_page)
        itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page))

    # if next_page!="":
    #     next_page = urlparse.urljoin(item.url,next_page)
    #     itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page) )
    return itemlist

@@ -109,6 +104,6 @@ def play(item):
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl in matches:
        scrapedplot = ""
        itemlist.append(item.clone(channel=item.channel, action="play", title=scrapedurl , url=scrapedurl , plot="" , folder=True) )
        itemlist.append(item.clone(channel=item.channel, action="play", title=item.title , url=scrapedurl , plot="" , folder=True) )
    return itemlist
15
channels/kingsizetits.json
Executable file
@@ -0,0 +1,15 @@
{
    "id": "kingsizetits",
    "name": "Kingsizetits",
    "active": true,
    "adult": true,
    "language": ["*"],
    "thumbnail": "http://cdn.images.kingsizetits.com/resources/kingsizetits.com/rwd_5/default/images/logo.png",
    "banner": "",
    "categories": [
        "adult"
    ],
    "settings": [
    ]
}
95
channels/kingsizetits.py
Executable file
@@ -0,0 +1,95 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools

host = 'http://kingsizetits.com'


def mainlist(item):
    logger.info()
    itemlist = []

    itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/most-recent/"))
    itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/most-viewed-week/"))
    itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/top-rated/"))
    itemlist.append( Item(channel=item.channel, title="Mas largos" , action="lista", url=host + "/longest/"))

    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/"))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = host + "/search/videos/%s/" % texto
    try:
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def categorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>|<br/>", "", data)
    patron = '<a href="([^"]+)" class="video-box.*?'
    patron += 'src=\'([^\']+)\' alt=\'([^\']+)\'.*?'
    patron += 'data-video-count="(\d+)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedthumbnail,scrapedtitle,cantidad in matches:
        scrapedplot = ""
        scrapedurl = urlparse.urljoin(item.url,scrapedurl)
        title = scrapedtitle + " (" + cantidad + ")"
        itemlist.append( Item(channel=item.channel, action="lista", title=title, url=scrapedurl,
                              fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot) )
    return itemlist


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>|<br/>", "", data)
    patron = '<script>stat.*?'
    patron += '<a href="([^"]+)".*?'
    patron += 'src="([^"]+)".*?'
    patron += '<span class="video-length">([^<]+)</span>.*?'
    patron += '<span class="pic-name">([^<]+)</span>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedthumbnail,scrapedtime,scrapedtitle in matches:
        title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
        scrapedurl = urlparse.urljoin(item.url,scrapedurl)
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
                              fanart=thumbnail, thumbnail=thumbnail, plot=plot, contentTitle = scrapedtitle))
    next_page = scrapertools.find_single_match(data, '<a class="btn default-btn page-next page-nav" href="([^"]+)"')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )
    return itemlist


def play(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>", "", data)
    logger.debug(data)
    url = scrapertools.find_single_match(data,'label:"\d+", file\:"([^"]+)"')
    itemlist.append(item.clone(action="play", server="directo", url=url ))
    return itemlist
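The `lista()` function above builds one long `patron` in several `+=` steps and relies on `re.DOTALL` so `.*?` can cross tag boundaries. A small self-contained check of how such a chained pattern pulls `(url, thumbnail, length, name)` tuples out of the markup; the HTML snippet here is made up for illustration, not taken from the site:

```python
import re

# Made-up HTML snippet shaped like the markup lista() expects.
html = ('<script>stat</script>'
        '<a href="/video/1"><img src="/thumbs/1.jpg">'
        '<span class="video-length">10:00</span>'
        '<span class="pic-name">Sample clip</span>')

patron = '<script>stat.*?'
patron += '<a href="([^"]+)".*?'
patron += 'src="([^"]+)".*?'
patron += '<span class="video-length">([^<]+)</span>.*?'
patron += '<span class="pic-name">([^<]+)</span>'

matches = re.compile(patron, re.DOTALL).findall(html)
# matches -> [('/video/1', '/thumbs/1.jpg', '10:00', 'Sample clip')]
```

With more than one capture group, `findall` returns a list of tuples, which is exactly the shape the `for scrapedurl,scrapedthumbnail,... in matches:` loop unpacks.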
15
channels/mangovideo.json
Executable file
@@ -0,0 +1,15 @@
{
    "id": "mangovideo",
    "name": "mangovideo",
    "active": true,
    "adult": true,
    "language": ["*"],
    "thumbnail": "https://mangovideo.pw/images/logo.png",
    "banner": "",
    "categories": [
        "adult"
    ],
    "settings": [
    ]
}
109
channels/mangovideo.py
Executable file
@@ -0,0 +1,109 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools


server = {'1': 'https://www.mangovideo.pw/contents/videos', '7' : 'https://server9.mangovideo.pw/contents/videos/',
          '8' : 'https://s10.mangovideo.pw/contents/videos/', '9' : 'https://server2.mangovideo.pw/contents/videos/',
          '10' : 'https://server217.mangovideo.pw/contents/videos/', '11' : 'https://234.mangovideo.pw/contents/videos/'
          }

host = 'http://mangovideo.pw'

def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/latest-updates/"))
    itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/most-popular/"))
    itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/top-rated/"))
    itemlist.append( Item(channel=item.channel, title="Sitios" , action="categorias", url=host + "/sites/"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/"))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = host + "/search/%s/" % texto
    try:
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def categorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>|<br/>", "", data)
    patron = '<a class="item" href="([^"]+)" title="([^"]+)".*?'
    patron += '<div class="videos">(\d+) videos</div>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,cantidad in matches:
        scrapedplot = ""
        scrapedthumbnail = ""
        title = scrapedtitle + " (" + cantidad + ")"
        itemlist.append( Item(channel=item.channel, action="lista", title=title, url=scrapedurl,
                              thumbnail=scrapedthumbnail , plot=scrapedplot) )

    next_page = scrapertools.find_single_match(data, '<li class="next"><a href="([^"]+)"')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="categorias", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )

    return itemlist


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t| |<br>|<br/>", "", data)
    patron = '<div class="item\s+">.*?'
    patron += '<a href="([^"]+)" title="([^"]+)".*?'
    patron += 'data-original="([^"]+)".*?'
    patron += '<div class="duration">([^<]+)</div>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,scrapedthumbnail,scrapedtime in matches:
        title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
                              thumbnail=thumbnail, fanart=thumbnail, plot=plot, contentTitle = scrapedtitle))
    next_page = scrapertools.find_single_match(data, '<li class="next"><a href="([^"]+)"')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )
    return itemlist


def play(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|amp;|\s{2}| ", "", data)
    scrapedtitle = ""
    patron = 'video_url: \'function/0/https://mangovideo.pw/get_file/(\d+)/\w+/(.*?)/\''
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedtitle,url in matches:
        scrapedtitle = server.get(scrapedtitle, scrapedtitle)
        url = scrapedtitle + url
    if not scrapedtitle:
        url = scrapertools.find_single_match(data, '<div class="embed-wrap".*?<iframe src="([^"]+)\?ref=')
    itemlist.append(item.clone(action="play", title="%s", url=url))
    itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())

    return itemlist
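In `play()` above, the numeric id captured from `get_file/(\d+)/` is mapped to a CDN base URL through `server.get(id, id)`, so unknown ids fall through unchanged rather than raising `KeyError`. A reduced sketch of that lookup (the two dict entries are copied from the channel; the `full_url` helper name is illustrative):

```python
# Subset of the channel's server-id -> CDN base mapping.
server = {'1': 'https://www.mangovideo.pw/contents/videos',
          '7': 'https://server9.mangovideo.pw/contents/videos/'}

def full_url(server_id, path):
    # Unknown ids fall back to the raw captured value, as play() does
    # with server.get(scrapedtitle, scrapedtitle).
    base = server.get(server_id, server_id)
    return base + path
```

This keeps the scraper working even when the site introduces a server id that is not in the table yet, at the cost of producing a relative/odd URL in that case.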
@@ -30,11 +30,11 @@ def mainlist(item):
    support.menu(itemlist, 'Sub ITA bold', 'carousel_subita', host, contentType='movie', args='movies')
    support.menu(itemlist, 'Ultime Richieste Inserite bold', 'carousel_request', host, contentType='movie', args='movies')
    support.menu(itemlist, 'Film Nelle Sale bold', 'carousel_cinema', host, contentType='movie', args='movies')
    support.menu(itemlist, 'Film Ultimi Inseriti submenu', 'carousel_last', host, contentType='movie', args='movies')
    support.menu(itemlist, 'Film Top ImDb submenu', 'top_imdb', host + '/top-imdb/', contentType='movie', args='movies')
    support.menu(itemlist, 'Serie TV', 'carousel_episodes', host, contentType='episode', args='tvshows')
    support.menu(itemlist, 'Serie TV Top ImDb submenu', 'top_serie', host + '/top-imdb/', contentType='episode', args='tvshows')
    support.menu(itemlist, '[COLOR blue]Cerca...[/COLOR] bold', 'search', host)
    ('Film Ultimi Inseriti ', [, 'carousel_last', 'movies'])
    ('Film Top ImDb ', ['/top-imdb/', 'top_imdb', 'movies'])
    support.menu(itemlist, 'Serie TV', 'carousel_episodes', host, contentTyp='episode', args='tvshows')
    ('Serie TV Top ImDb ', ['/top-imdb/', 'top_serie', 'tvshows'])

    autoplay.init(item.channel, list_servers, list_quality)
    autoplay.show_option(item.channel, itemlist)
@@ -1,23 +1,22 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://mporno.tv'

def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Novedades" , action="peliculas", url=host + "/most-recent/"))
    itemlist.append( Item(channel=item.channel, title="Mejor valoradas" , action="peliculas", url=host + "/top-rated/"))
    itemlist.append( Item(channel=item.channel, title="Mas vistas" , action="peliculas", url=host + "/most-viewed/"))
    itemlist.append( Item(channel=item.channel, title="Longitud" , action="peliculas", url=host + "/longest/"))
    itemlist.append( Item(channel=item.channel, title="Novedades" , action="lista", url=host + "/most-recent/"))
    itemlist.append( Item(channel=item.channel, title="Mejor valoradas" , action="lista", url=host + "/top-rated/"))
    itemlist.append( Item(channel=item.channel, title="Mas vistas" , action="lista", url=host + "/most-viewed/"))
    itemlist.append( Item(channel=item.channel, title="Longitud" , action="lista", url=host + "/longest/"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/channels/"))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist

@@ -28,7 +27,7 @@ def search(item, texto):
    texto = texto.replace(" ", "+")
    item.url = host + "/search/videos/%s/page1.html" % texto
    try:
        return peliculas(item)
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():

@@ -46,12 +45,12 @@ def categorias(item):
        scrapedplot = ""
        scrapedthumbnail = ""
        scrapedtitle = scrapedtitle + " " + cantidad
        itemlist.append( Item(channel=item.channel, action="peliculas", title=scrapedtitle, url=scrapedurl,
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=scrapedthumbnail , plot=scrapedplot) )
    return itemlist


def peliculas(item):
def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data

@@ -61,15 +60,21 @@ def peliculas(item):
    for scrapedurl,scrapedtitle,scrapedthumbnail in matches:
        contentTitle = scrapedtitle
        title = scrapedtitle
        scrapedurl = scrapedurl.replace("/thumbs/", "/videos/") + ".mp4"
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
                              fanart=thumbnail, plot=plot, contentTitle=contentTitle))
                              fanart=thumbnail, plot=plot, server= "directo", contentTitle=contentTitle))
    next_page_url = scrapertools.find_single_match(data,'<a href=\'([^\']+)\' class="next">Next >></a>')
    if next_page_url!="":
        next_page_url = urlparse.urljoin(item.url,next_page_url)
        itemlist.append(item.clone(action="peliculas", title="Página Siguiente >>", text_color="blue", url=next_page_url) )
        itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page_url) )

    return itemlist


def play(item):
    logger.info()
    itemlist = []
    url = item.url.replace("/thumbs/", "/videos/") + ".mp4"
    itemlist.append( Item(channel=item.channel, action="play", title= item.title, server= "directo", url=url))
    return itemlist
@@ -1,13 +1,12 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
#------------------------------------------------------------
|
||||
import re
|
||||
import urlparse
|
||||
|
||||
from core import httptools
|
||||
import urlparse,urllib2,urllib,re
|
||||
import os, sys
|
||||
from platformcode import config, logger
|
||||
from core import scrapertools
|
||||
from core.item import Item
|
||||
from platformcode import logger
|
||||
from platformcode import config
|
||||
from core import servertools
|
||||
from core import httptools
|
||||
|
||||
host = 'https://www.pornburst.xxx'
|
||||
|
||||
@@ -43,20 +42,21 @@ def categorias(item):
|
||||
if "/sites/" in item.url:
|
||||
patron = '<div class="muestra-escena muestra-canales">.*?'
|
||||
patron += 'href="([^"]+)">.*?'
|
||||
patron += 'src="([^"]+)".*?'
|
||||
patron += 'data-src="([^"]+)".*?'
|
||||
patron += '<a title="([^"]+)".*?'
|
||||
patron += '</span> (\d+) videos</span>'
|
||||
if "/pornstars/" in item.url:
|
||||
patron = '<a class="muestra-escena muestra-pornostar" href="([^"]+)">.*?'
|
||||
patron += 'src="([^"]+)".*?'
|
||||
patron += 'data-src="([^"]+)".*?'
|
||||
patron += 'alt="([^"]+)".*?'
|
||||
patron += '</span> (\d+) videos</span>'
|
||||
else:
|
||||
patron = '<a class="muestra-escena muestra-categoria" href="([^"]+)" title="[^"]+">.*?'
|
||||
patron += 'src="([^"]+)".*?'
|
||||
patron += 'data-src="([^"]+)".*?'
|
||||
patron += '</span> ([^"]+) </h2>(.*?)>'
|
||||
matches = re.compile(patron,re.DOTALL).findall(data)
|
||||
for scrapedurl,scrapedthumbnail,scrapedtitle,cantidad in matches:
|
||||
logger.debug(scrapedurl + ' / ' + scrapedthumbnail + ' / ' + cantidad + ' / ' + scrapedtitle)
|
||||
scrapedplot = ""
|
||||
cantidad = " (" + cantidad + ")"
|
||||
if "</a" in cantidad:
|
||||
@@ -107,6 +107,6 @@ def play(item):
|
||||
matches = scrapertools.find_multiple_matches(data, patron)
|
||||
for scrapedurl in matches:
|
||||
title = scrapedurl
|
||||
itemlist.append(item.clone(action="play", title=title, fulltitle = item.title, url=scrapedurl))
|
||||
itemlist.append(item.clone(action="play", title=title, url=scrapedurl))
|
||||
return itemlist
|
||||
|
||||
|
||||
@@ -2,13 +2,11 @@

import base64
import hashlib

import urlparse

from core import httptools
from core import scrapertools
from platformcode import logger
from platformcode import config

host = "https://www.nuvid.com"

@@ -53,7 +51,7 @@ def lista(item):
    data = httptools.downloadpage(item.url, headers=header, cookies=False).data

    # Extract the entries
    patron = '<div class="box-tumb related_vid">.*?href="([^"]+)" title="([^"]+)".*?src="([^"]+)"(.*?)<i class="time">([^<]+)<'
    patron = '<div class="box-tumb related_vid.*?href="([^"]+)" title="([^"]+)".*?src="([^"]+)"(.*?)<i class="time">([^<]+)<'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl, scrapedtitle, scrapedthumbnail, quality, duration in matches:
        scrapedurl = urlparse.urljoin(host, scrapedurl)

@@ -85,7 +83,7 @@ def categorias(item):
    for cat, b in bloques:
        cat = cat.replace("Straight", "Hetero")
        itemlist.append(item.clone(action="", title=cat, text_color="gold"))
        matches = scrapertools.find_multiple_matches(b, '<li.*?href="([^"]+)">(.*?)</span>')
        matches = scrapertools.find_multiple_matches(b, '<li>.*?href="([^"]+)" >(.*?)</span>')
        for scrapedurl, scrapedtitle in matches:
            scrapedtitle = " " + scrapedtitle.replace("<span>", "")
            scrapedurl = urlparse.urljoin(host, scrapedurl)
@@ -1,18 +1,17 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
import re

import urlparse
import re
import base64

from core import httptools
from platformcode import config, logger
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import httptools

host = 'https://pandamovies.pw'


def mainlist(item):
    logger.info()
    itemlist = []

@@ -62,7 +61,7 @@ def lista(item):
    data = httptools.downloadpage(item.url).data
    patron = '<div data-movie-id="\d+".*?'
    patron += '<a href="([^"]+)".*?oldtitle="([^"]+)".*?'
    patron += '<img src="([^"]+)".*?'
    patron += '<img data-original="([^"]+)".*?'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for scrapedurl, scrapedtitle, scrapedthumbnail in matches:
        url = urlparse.urljoin(item.url, scrapedurl)

@@ -71,7 +70,6 @@ def lista(item):
        plot = ""
        itemlist.append(Item(channel=item.channel, action="findvideos", title=title, url=url, thumbnail=thumbnail,
                             fanart=thumbnail, plot=plot, contentTitle=title))
    # <li class='active'><a class=''>1</a></li><li><a rel='nofollow' class='page larger' href='https://pandamovies.pw/movies/page/2'>
    next_page = scrapertools.find_single_match(data, '<li class=\'active\'>.*?href=\'([^\']+)\'>')
    if next_page == "":
        next_page = scrapertools.find_single_match(data, '<a.*?href="([^"]+)" >Next »</a>')

@@ -79,3 +77,34 @@ def lista(item):
        next_page = urlparse.urljoin(item.url, next_page)
        itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page))
    return itemlist


def findvideos(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|amp;|\s{2}| ", "", data)
    patron = '- on ([^"]+)" href="([^"]+)"'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedtitle,url in matches:
        if 'aHR0' in url:
            n = 3
            while n > 0:
                url = url.replace("https://vshares.tk/goto/", "").replace("https://waaws.tk/goto/", "").replace("https://openloads.tk/goto/", "")
                logger.debug(url)
                url = base64.b64decode(url)
                n -= 1
        if "mangovideo" in url:  # appears as a direct link
            data = httptools.downloadpage(url).data
            patron = 'video_url: \'function/0/https://mangovideo.pw/get_file/(\d+)/\w+/(.*?)/\?embed=true\''
            matches = scrapertools.find_multiple_matches(data, patron)
            for scrapedtitle,url in matches:
                if scrapedtitle =="1": scrapedtitle= "https://www.mangovideo.pw/contents/videos/"
                if scrapedtitle =="7": scrapedtitle= "https://server9.mangovideo.pw/contents/videos/"
                if scrapedtitle =="8": scrapedtitle= "https://s10.mangovideo.pw/contents/videos/"
                if scrapedtitle =="10": scrapedtitle= "https://server217.mangovideo.pw/contents/videos/"
                if scrapedtitle =="11": scrapedtitle= "https://234.mangovideo.pw/contents/videos/"
                url = scrapedtitle + url
        itemlist.append( Item(channel=item.channel, action="play", title = "%s", url=url ))
    itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
    return itemlist
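The `findvideos()` hunk above peels three layers of base64 off redirector links (`aHR0` is `htt` base64-encoded, which is how it spots wrapped URLs), stripping the known `goto/` prefixes on each round. A sketch of that unwrap loop in isolation; the prefix list and round count mirror the code above, but this version targets Python 3 (hence the `.decode("utf-8")`), while the channel code is Python 2:

```python
import base64

def decode_goto_link(url, rounds=3,
                     prefixes=("https://vshares.tk/goto/",
                               "https://waaws.tk/goto/",
                               "https://openloads.tk/goto/")):
    # Strip the known redirector prefixes, then peel one base64 layer,
    # repeating once per wrapping round.
    for _ in range(rounds):
        for p in prefixes:
            url = url.replace(p, "")
        url = base64.b64decode(url).decode("utf-8")
    return url
```

A fixed round count is brittle if the site changes its nesting depth; an alternative is to loop while the stripped string still decodes to something starting with `http`.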
@@ -10,13 +10,5 @@
        "adult"
    ],
    "settings": [
        {
            "id": "include_in_global_search",
            "type": "bool",
            "label": "Incluir en busqueda global",
            "default": false,
            "enabled": true,
            "visible": true
        }
    ]
}
@@ -1,63 +1,85 @@
# -*- coding: utf-8 -*-
import urlparse
import re

from platformcode import config, logger
from core import httptools
from core import scrapertools
from platformcode import logger
from platformcode import config
from core import servertools


host = 'http://www.pelisxporno.com'


def mainlist(item):
    logger.info()

    itemlist = []
    itemlist.append(item.clone(action="lista", title="Novedades", url="http://www.pelisxporno.com/?order=date"))
    itemlist.append(item.clone(action="categorias", title="Categorías", url="http://www.pelisxporno.com/categorias/"))
    itemlist.append(item.clone(action="search", title="Buscar", url="http://www.pelisxporno.com/?s=%s"))

    itemlist.append(item.clone(action="lista", title="Novedades", url=host + "/?order=date"))
    itemlist.append(item.clone(action="categorias", title="Categorías", url=host + "/categorias/"))
    itemlist.append(item.clone(action="search", title="Buscar"))
    return itemlist


def search(item, texto):
    logger.info("")
    texto = texto.replace(" ", "+")
    item.url = host + "/?s=%s" % texto
    try:
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def search(item, texto):
    logger.info()
    item.url = item.url % texto
    return lista(item)


def lista(item):
    logger.info()
    itemlist = []

    # Download the page
    data = httptools.downloadpage(item.url).data
    # Extract the entries (folders)
    patron = '<div class="video.".*?<a href="(.*?)" title="(.*?)">.*?<img src="(.*?)".*?\/>.*?duration.*?>(.*?)<'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl, scrapedtitle, scrapedthumbnail, duration in matches:
        if duration:
            scrapedtitle += " (%s)" % duration

        itemlist.append(item.clone(action="findvideos", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
                                   fanart=scrapedthumbnail))

    # Extract the next-page marker
    next_page = scrapertools.find_single_match(data, '<a class="nextpostslink" rel="next" href="([^"]+)"')
    if next_page:
        itemlist.append(item.clone(action="lista", title=">> Página Siguiente", url=next_page))

    return itemlist


def categorias(item):
    logger.info()
    itemlist = []

    # Download the page
    data = httptools.downloadpage(item.url).data

    # Extract the entries (folders)
    patron = '<li class="cat-item cat-item-.*?"><a href="(.*?)".*?>(.*?)<'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl, scrapedtitle in matches:
        itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl))

    return itemlist


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    patron = '<div class="video.".*?<a href="(.*?)" title="(.*?)">.*?<img src="(.*?)".*?\/>.*?duration.*?>(.*?)<'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl, scrapedtitle, scrapedthumbnail, duration in matches:
        if duration:
            scrapedtitle = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
        itemlist.append(item.clone(action="findvideos", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
                                   fanart=scrapedthumbnail))
    next_page = scrapertools.find_single_match(data, '<a class="nextpostslink" rel="next" href="([^"]+)"')
    if next_page:
        itemlist.append(item.clone(action="lista", title=">> Página Siguiente", url=next_page))
    return itemlist


def findvideos(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = scrapertools.find_single_match(data, '<div class="video_code">(.*?)<h3')
    patron = '(?:src|SRC)="([^"]+)"'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedurl in matches:
        if not 'mixdrop' in scrapedurl:  # the base64 one is netu.tv
            url = "https://hqq.tv/player/embed_player.php?vid=RODE5Z2Hx3hO&autoplay=none"
        else:
            url = "https:" + scrapedurl
            headers = {'Referer': item.url}
            data = httptools.downloadpage(url, headers=headers).data
            url = scrapertools.find_single_match(data, 'vsrc = "([^"]+)"')
            url = "https:" + url
        itemlist.append(item.clone(action="play", title="%s", url=url))
    itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
    return itemlist
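The channels in this commit lean throughout on `scrapertools.find_single_match` and `scrapertools.find_multiple_matches`. Assuming they are thin wrappers over the `re` module with `re.DOTALL` (as they are in KoD's `core.scrapertools`), a minimal stand-in can be sketched like this; the helper bodies here are an illustration, not the project's exact implementation:

```python
import re


def find_single_match(data, patron):
    # Return the first capture group of the first match, or "" when nothing matches,
    # which is why the channels can safely test `if next_page:` afterwards.
    match = re.search(patron, data, re.DOTALL)
    return match.group(1) if match else ""


def find_multiple_matches(data, patron):
    # Return all matches; with one group re.findall yields strings,
    # with several groups it yields tuples (the channels unpack those in for-loops).
    return re.findall(patron, data, re.DOTALL)


html = '<a class="nextpostslink" rel="next" href="/page/2/">Next</a>'
print(find_single_match(html, '<a class="nextpostslink" rel="next" href="([^"]+)"'))  # /page/2/
```

This is why a pattern that fails simply produces an empty string or an empty list instead of raising, and the channel falls through to returning whatever items it already collected.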
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'http://www.perfectgirls.net'
@@ -10,13 +10,5 @@
    "adult"
  ],
  "settings": [
    {
      "id": "include_in_global_search",
      "type": "bool",
      "label": "Incluir en busqueda global",
      "default": false,
      "enabled": false,
      "visible": false
    }
  ]
}
@@ -1,18 +1,17 @@
# -*- coding: utf-8 -*-

import re

import urlparse

from core import httptools
from core import servertools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
import base64

host = "https://watchfreexxx.net/"


def mainlist(item):
    itemlist = []

@@ -22,47 +21,17 @@ def mainlist(item):
    itemlist.append(Item(channel=item.channel, title="Escenas", action="lista",
                         url = urlparse.urljoin(host, "category/xxx-scenes/")))

    itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=host + '/?s=',
    itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=host+'?s=',
                         thumbnail='https://s30.postimg.cc/pei7txpa9/buscar.png',
                         fanart='https://s30.postimg.cc/pei7txpa9/buscar.png'))

    return itemlist


def lista(item):
    logger.info()

    itemlist = []
    if item.url == '': item.url = host

    data = httptools.downloadpage(item.url).data
    data = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)

    patron = '<article id=.*?<a href="([^"]+)".*?<img data-src="([^"]+)" alt="([^"]+)"'
    matches = re.compile(patron, re.DOTALL).findall(data)

    for data_1, data_2, data_3 in matches:
        url = data_1
        thumbnail = data_2
        title = data_3
        itemlist.append(Item(channel=item.channel, action='findvideos', title=title, url=url, thumbnail=thumbnail))

    # Pagination
    if itemlist != []:
        actual_page_url = item.url
        next_page = scrapertools.find_single_match(data, '<a href="([^"]+)">Next</a>')
        if next_page != '':
            itemlist.append(Item(channel=item.channel, action="lista", title='Siguiente >>>', url=next_page,
                                 thumbnail='https://s16.postimg.cc/9okdu7hhx/siguiente.png', extra=item.extra))

    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = item.url + texto

    try:
        if texto != '':
            item.extra = 'Buscar'
@@ -74,3 +43,58 @@ def search(item, texto):
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def lista(item):
    logger.info()
    itemlist = []
    if item.url == '': item.url = host
    data = httptools.downloadpage(item.url).data
    data = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)
    patron = '<article id=.*?<a href="([^"]+)".*?<img data-src="([^"]+)" alt="([^"]+)"'
    matches = re.compile(patron, re.DOTALL).findall(data)
    for data_1, data_2, data_3 in matches:
        url = data_1
        thumbnail = data_2
        title = data_3
        itemlist.append(Item(channel=item.channel, action='findvideos', title=title, url=url, thumbnail=thumbnail))
    # Pagination
    if itemlist != []:
        actual_page_url = item.url
        next_page = scrapertools.find_single_match(data, '<a href="([^"]+)">Next</a>')
        if next_page != '':
            itemlist.append(Item(channel=item.channel, action="lista", title='Siguiente >>>', url=next_page,
                                 thumbnail='https://s16.postimg.cc/9okdu7hhx/siguiente.png', extra=item.extra))
    return itemlist


def findvideos(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|amp;|\s{2}|&nbsp;", "", data)
    patron = '- on ([^"]+)" href="([^"]+)"'
    matches = scrapertools.find_multiple_matches(data, patron)
    for scrapedtitle, url in matches:
        if "tk/goto/" in url:
            n = 3
            while n > 0:
                url = url.replace("https://vshares.tk/goto/", "").replace("https://waaws.tk/goto/", "").replace("https://openloads.tk/goto/", "")
                logger.debug(url)
                url = base64.b64decode(url)
                n -= 1
        if "mangovideo" in url:  # listed as a direct link
            data = httptools.downloadpage(url).data
            patron = 'video_url: \'function/0/https://mangovideo.pw/get_file/(\d+)/\w+/(.*?)/\?embed=true\''
            matches = scrapertools.find_multiple_matches(data, patron)
            for scrapedtitle, url in matches:
                if scrapedtitle == "1": scrapedtitle = "https://www.mangovideo.pw/contents/videos/"
                if scrapedtitle == "7": scrapedtitle = "https://server9.mangovideo.pw/contents/videos/"
                if scrapedtitle == "8": scrapedtitle = "https://s10.mangovideo.pw/contents/videos/"
                if scrapedtitle == "10": scrapedtitle = "https://server217.mangovideo.pw/contents/videos/"
                if scrapedtitle == "11": scrapedtitle = "https://234.mangovideo.pw/contents/videos/"
                url = scrapedtitle + url
        itemlist.append(item.clone(action="play", title="%s", url=url))
    itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
    return itemlist
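The `tk/goto/` branch of `findvideos()` above strips a redirector prefix and base64-decodes the remainder, three times, because the site nests the real link in up to three layers. A self-contained round trip (the wrapped link and `https://example.com/...` target are made up for illustration) shows why a fixed three-pass loop recovers the original URL:

```python
import base64

# Build a hypothetical triple-wrapped link the way the site's */goto/* redirectors do.
target = "https://example.com/video.mp4"
wrapped = target
for _ in range(3):
    wrapped = "https://vshares.tk/goto/" + base64.b64encode(wrapped.encode()).decode()

# Unwrap it exactly as the channel's while-loop does: strip the prefix, decode, repeat.
url = wrapped
n = 3
while n > 0:
    url = url.replace("https://vshares.tk/goto/", "")
    url = base64.b64decode(url).decode()
    n -= 1
print(url)  # https://example.com/video.mp4
```

Each decode peels one layer and exposes the next `goto/` prefix, so after three passes only the plain target URL remains.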
@@ -1,35 +1,31 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re
import urlparse

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import servertools
from core import httptools

host = 'https://www.porn300.com'

# STREAMCLOUD ANTIVIRUS BLOCK

def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Nuevas" , action="lista", url=host + "/es/videos/"))
    itemlist.append( Item(channel=item.channel, title="Mas Vistas" , action="lista", url=host + "/es/mas-vistos/"))
    itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/es/mas-votados/"))
    itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host + "/es/canales/?page=1"))
    itemlist.append( Item(channel=item.channel, title="Pornstars" , action="categorias", url=host + "/es/pornostars/?page=1"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/es/categorias/"))
    itemlist.append( Item(channel=item.channel, title="Nuevas" , action="lista", url=host + "/en_US/ajax/page/list_videos/?page=1"))
    itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host + "/channels/?page=1"))
    itemlist.append( Item(channel=item.channel, title="Pornstars" , action="categorias", url=host + "/pornstars/?page=1"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/?page=1"))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


# view-source:https://www.porn300.com/en_US/ajax/page/show_search?q=big+tit&page=1
# https://www.porn300.com/en_US/ajax/page/show_search?page=2
def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = host + "/es/buscar/?q=%s" % texto
    item.url = host + "/en_US/ajax/page/show_search?q=%s&?page=1" % texto
    try:
        return lista(item)
    except:
@@ -44,20 +40,18 @@ def categorias(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    patron = '<a itemprop="url" href="([^"]+)".*?'
    patron += 'title="([^"]+)">.*?'
    if "/pornostars/" in item.url:
        patron += '<img itemprop="image" src=([^"]+) alt=.*?'
        patron += '</svg>([^<]+)<'
    else:
        patron += '<img itemprop="image" src="([^"]+)" alt=.*?'
        patron += '</svg>([^<]+)<'
    patron = '<a itemprop="url" href="/([^"]+)".*?'
    patron += 'data-src="([^"]+)" alt=.*?'
    patron += 'itemprop="name">([^<]+)</h3>.*?'
    patron += '</svg>([^<]+)<'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,scrapedthumbnail,cantidad in matches:
    for scrapedurl,scrapedthumbnail,scrapedtitle,cantidad in matches:
        scrapedplot = ""
        cantidad = re.compile("\s+", re.DOTALL).sub(" ", cantidad)
        scrapedtitle = scrapedtitle + " (" + cantidad + ")"
        scrapedurl = urlparse.urljoin(item.url,scrapedurl) + "/?sort=latest"
        scrapedurl = scrapedurl.replace("channel/", "producer/")
        scrapedurl = "/en_US/ajax/page/show_" + scrapedurl + "?page=1"
        scrapedurl = urlparse.urljoin(item.url,scrapedurl)
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot) )
    next_page = scrapertools.find_single_match(data,'<link rel="next" href="([^"]+)" />')
@@ -75,22 +69,29 @@ def lista(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    patron = '<a itemprop="url" href="([^"]+)" data-video-id="\d+" title="([^"]+)">.*?'
    patron += '<img itemprop="thumbnailUrl" src="([^"]+)".*?'
    patron = '<a itemprop="url" href="([^"]+)".*?'
    patron += 'data-src="([^"]+)".*?'
    patron += 'itemprop="name">([^<]+)<.*?'
    patron += '</svg>([^<]+)<'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,scrapedthumbnail,cantidad in matches:
    for scrapedurl,scrapedthumbnail,scrapedtitle,scrapedtime in matches:
        url = urlparse.urljoin(item.url,scrapedurl)
        cantidad = re.compile("\s+", re.DOTALL).sub(" ", cantidad)
        title = "[COLOR yellow]" + cantidad + "[/COLOR] " + scrapedtitle
        scrapedtime = scrapedtime.strip()
        title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
        contentTitle = title
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play" , title=title , url=url, thumbnail=thumbnail,
                              fanart=thumbnail, plot=plot, contentTitle = contentTitle) )
    next_page = scrapertools.find_single_match(data,'<link rel="next" href="([^"]+)" />')
    if next_page!="":
        next_page = urlparse.urljoin(item.url,next_page)
    prev_page = scrapertools.find_single_match(item.url,"(.*?)page=\d+")
    num = int(scrapertools.find_single_match(item.url,".*?page=(\d+)"))
    num += 1
    num_page = "?page=" + str(num)
    if num_page!="":
        next_page = urlparse.urljoin(item.url,num_page)
        if "show_search" in next_page:
            next_page = prev_page + num_page
            next_page = next_page.replace("&?", "&")
        itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page) )
    return itemlist


@@ -101,6 +102,6 @@ def play(item):
    patron = '<source src="([^"]+)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for url in matches:
        itemlist.append(item.clone(action="play", title=url, fulltitle = item.title, url=url))
        itemlist.append(item.clone(action="play", title=url, url=url))
    return itemlist
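The rewritten pagination in `lista()` above no longer follows a `<link rel="next">` tag; it parses the `page=N` counter out of the current AJAX URL and rebuilds the next one. A simplified equivalent of that counter bump (the `bump_page` helper is illustrative, not a function in the channel) looks like this:

```python
import re


def bump_page(url):
    # Read the current page=N counter from the URL and rewrite it as page=N+1,
    # the same net effect as the prev_page/num/num_page dance in lista().
    num = int(re.search(r"page=(\d+)", url).group(1))
    return re.sub(r"page=\d+", "page=%d" % (num + 1), url)


print(bump_page("https://www.porn300.com/en_US/ajax/page/list_videos/?page=1"))
```

The channel's version is more convoluted because search URLs carry a stray `&?` separator that has to be collapsed back to `&` before the next request.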
@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import re

from core import httptools
import urlparse,urllib2,urllib,re
import os, sys
from core import jsontools as json
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://pornboss.org'
@@ -15,13 +15,11 @@ def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Peliculas" , action="lista", url=host + "/category/movies/"))
    itemlist.append( Item(channel=item.channel, title=" categorias" , action="categorias", url=host + "/category/movies/"))

    itemlist.append( Item(channel=item.channel, title="Videos" , action="lista", url=host + "/category/clips/"))
    itemlist.append( Item(channel=item.channel, title=" categorias" , action="categorias", url=host + "/category/clips/"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
@@ -40,13 +38,9 @@ def categorias(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    if "/category/movies/" in item.url:
        data = scrapertools.find_single_match(data,'>Movies</a>(.*?)</ul>')
    else:
        data = scrapertools.find_single_match(data,'>Clips</a>(.*?)</ul>')
    patron = '<a href=([^"]+)>([^"]+)</a>'
    data = scrapertools.find_single_match(data,'<div class="uk-panel uk-panel-box widget_nav_menu">(.*?)</ul>')
    patron = '<li><a href=(.*?) class>([^<]+)</a>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    scrapertools.printMatches(matches)
    for scrapedurl,scrapedtitle in matches:
        scrapedplot = ""
        scrapedthumbnail = ""
@@ -59,29 +53,39 @@ def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    patron = '<article id=post-\d+.*?'
    patron += '<img class="center cover" src=([^"]+) alt="([^"]+)".*?'
    patron += '<blockquote><p> <a href=(.*?) target=_blank'
    patron = '<article id=item-\d+.*?'
    patron += '<img class=.*?src=(.*?) alt="([^"]+)".*?'
    patron += 'Duration:</strong>(.*?) / <strong>.*?'
    patron += '>SHOW<.*?href=([^"]+) target='
    matches = re.compile(patron,re.DOTALL).findall(data)
    scrapertools.printMatches(matches)
    for scrapedthumbnail,scrapedtitle,scrapedurl in matches:
    for scrapedthumbnail,scrapedtitle,duration,scrapedurl in matches:
        scrapedplot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=scrapedtitle, url=scrapedurl,
        title = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
                              fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot) )
    next_page = scrapertools.find_single_match(data,'<a class=nextpostslink rel=next href=(.*?)>')
    next_page = scrapertools.find_single_match(data,'<li><a href=([^<]+)><i class=uk-icon-angle-double-right>')
    next_page = next_page.replace('"', '')
    if next_page!="":
        itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page) )
    return itemlist


def play(item):
    logger.info()
    data = httptools.downloadpage(item.url).data
    itemlist = servertools.find_video_items(data=item.url)
    for videoitem in itemlist:
        videoitem.title = item.title
        videoitem.fulltitle = item.fulltitle
        videoitem.thumbnail = item.thumbnail
        videoitem.channel = item.channel
    itemlist = []
    if "streamcloud" in item.url:
        itemlist.append(item.clone(action="play", url=item.url ))
    else:
        data = httptools.downloadpage(item.url).data
        url = scrapertools.find_single_match(data,'<span class="bottext">Streamcloud.eu</span>.*?href="([^"]+)"')
        url = "https://tolink.to" + url
        data = httptools.downloadpage(url).data
        patron = '<input type="hidden" name="id" value="([^"]+)">.*?'
        patron += '<input type="hidden" name="fname" value="([^"]+)">'
        matches = re.compile(patron,re.DOTALL).findall(data)
        for id, url in matches:
            url = "http://streamcloud.eu/" + id
            itemlist.append(item.clone(action="play", url=url ))
    itemlist = servertools.get_servers_itemlist(itemlist)
    return itemlist
15
channels/porndish.json
Executable file
@@ -0,0 +1,15 @@
{
  "id": "porndish",
  "name": "porndish",
  "active": true,
  "adult": true,
  "language": ["*"],
  "thumbnail": "https://www.porndish.com/wp-content/uploads/2015/09/logo.png",
  "banner": "",
  "categories": [
    "adult"
  ],
  "settings": [
  ]
}
78
channels/porndish.py
Executable file
@@ -0,0 +1,78 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools

host = 'https://www.porndish.com'


def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host))
    itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = host + "/?s=%s" % texto
    try:
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def categorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
    patron = '<li id="menu-item-\d+".*?'
    patron += '<a href="([^"]+)">([^<]+)<'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle in matches:
        scrapedplot = ""
        scrapedurl = urlparse.urljoin(item.url,scrapedurl)
        scrapedthumbnail = ""
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              fanart=scrapedthumbnail, thumbnail=scrapedthumbnail , plot=scrapedplot) )
    return itemlist


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
    data = scrapertools.find_single_match(data, 'archive-body">(.*?)<div class="g1-row g1-row-layout-page g1-prefooter">')
    patron = '<article class=.*?'
    patron += 'src="([^"]+)".*?'
    patron += 'title="([^"]+)".*?'
    patron += '<a href="([^"]+)" rel="bookmark">'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedthumbnail,scrapedtitle,scrapedurl in matches:
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="findvideos", title=scrapedtitle, url=scrapedurl,
                              fanart=thumbnail, thumbnail=thumbnail, plot=plot, contentTitle = scrapedtitle))
    next_page = scrapertools.find_single_match(data, '<a class="g1-delta g1-delta-1st next" href="([^"]+)">Next</a>')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )
    return itemlist
@@ -1,12 +1,16 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
import urlparse
import urllib2
import urllib
import re

from core import httptools
import os
import sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://porneq.com'
@@ -17,7 +21,7 @@ def mainlist(item):
    itemlist.append(Item(channel=item.channel, title="Ultimos", action="lista", url=host + "/videos/browse/"))
    itemlist.append(Item(channel=item.channel, title="Mas Vistos", action="lista", url=host + "/videos/most-viewed/"))
    itemlist.append(Item(channel=item.channel, title="Mas Votado", action="lista", url=host + "/videos/most-liked/"))
    itemlist.append(Item(channel=item.channel, title="Big Tits", action="lista", url=host + "/show/big+tits&sort=w"))
    itemlist.append(Item(channel=item.channel, title="Big Tits", action="lista", url=host + "/show/big+tit"))
    itemlist.append(Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist

@@ -46,6 +50,7 @@ def lista(item):
    matches = re.compile(patron, re.DOTALL).findall(data)
    for scrapedtitle, scrapedurl, scrapedthumbnail, scrapedtime in matches:
        scrapedplot = ""
        scrapedthumbnail = scrapedthumbnail.replace("https:", "http:")
        scrapedtitle = "[COLOR yellow]" + (scrapedtime) + "[/COLOR] " + scrapedtitle
        itemlist.append(Item(channel=item.channel, action="play", title=scrapedtitle, url=scrapedurl,
                             fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot))

@@ -60,7 +65,8 @@ def play(item):
    itemlist = []
    data = httptools.downloadpage(item.url).data
    scrapedurl = scrapertools.find_single_match(data, '<source src="([^"]+)"')
    scrapedurl = scrapedurl.replace("X20", "-")
    itemlist.append(
        Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
        Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
             thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
    return itemlist
@@ -1,17 +1,18 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
import base64
import re

from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
from platformcode import config, logger
from core import httptools

host = 'http://www.pornhive.tv/en'

# Some links are down

def mainlist(item):
    logger.info()
@@ -11,13 +11,5 @@
    "adult"
  ],
  "settings": [
    {
      "id": "include_in_global_search",
      "type": "bool",
      "label": "Incluir en busqueda global",
      "default": true,
      "enabled": true,
      "visible": true
    }
  ]
}
}
@@ -1,20 +1,18 @@
# -*- coding: utf-8 -*-

import re

import urlparse

from core import httptools
from core import servertools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config


def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append(Item(channel=item.channel, action="peliculas", title="Novedades", fanart=item.fanart,
    itemlist.append(Item(channel=item.channel, action="lista", title="Novedades", fanart=item.fanart,
                         url="http://es.pornhub.com/video?o=cm"))
    itemlist.append(Item(channel=item.channel, action="categorias", title="Categorias", fanart=item.fanart,
                         url="http://es.pornhub.com/categories"))

@@ -28,8 +26,7 @@ def search(item, texto):

    item.url = item.url % texto
    try:
        return peliculas(item)
        # The exception is caught so a failing channel does not interrupt the global search
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():

@@ -53,13 +50,13 @@ def categorias(item):
        else:
            url = urlparse.urljoin(item.url, scrapedurl + "?o=cm")
        scrapedtitle = scrapedtitle + " (" + cantidad + ")"
        itemlist.append(Item(channel=item.channel, action="peliculas", title=scrapedtitle, url=url,
        itemlist.append(Item(channel=item.channel, action="lista", title=scrapedtitle, url=url,
                             fanart=scrapedthumbnail, thumbnail=scrapedthumbnail))
    itemlist.sort(key=lambda x: x.title)
    return itemlist


def peliculas(item):
def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data

@@ -70,10 +67,11 @@ def peliculas(item):
    patron += '<var class="duration">([^<]+)</var>(.*?)</div>'
    matches = re.compile(patron, re.DOTALL).findall(videodata)
    for url, scrapedtitle, thumbnail, duration, scrapedhd in matches:
        title = "(" + duration + ") " + scrapedtitle.replace("&amp;", "&")
        scrapedhd = scrapertools.find_single_match(scrapedhd, '<span class="hd-thumbnail">(.*?)</span>')
        if scrapedhd == 'HD':
            title += ' [HD]'
        if scrapedhd == 'HD':
            title = "[COLOR yellow]" + duration + "[/COLOR] " + "[COLOR red]" + scrapedhd + "[/COLOR] " + scrapedtitle
        else:
            title = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
        url = urlparse.urljoin(item.url, url)
        itemlist.append(
            Item(channel=item.channel, action="play", title=title, url=url, fanart=thumbnail, thumbnail=thumbnail))

@@ -84,19 +82,12 @@ def peliculas(item):
    if matches:
        url = urlparse.urljoin(item.url, matches[0].replace('&amp;', '&'))
        itemlist.append(
            Item(channel=item.channel, action="peliculas", title=">> Página siguiente", fanart=item.fanart,
            Item(channel=item.channel, action="lista", title=">> Página siguiente", fanart=item.fanart,
                 url=url))
    return itemlist


def play(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    patron = '"defaultQuality":true,"format":"mp4","quality":"\d+","videoUrl":"(.*?)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl in matches:
        url = scrapedurl.replace("\/", "/")
        itemlist.append(item.clone(action="play", title=url, fulltitle = item.title, url=url))
    logger.info(item)
    itemlist = servertools.find_video_items(item.clone(url = item.url))
    return itemlist
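The rewritten `lista()` above replaces the old `(duration) title [HD]` formatting with Kodi color markup: duration in yellow, plus a red HD tag when the thumbnail carries an HD badge. The branch logic can be sketched as a small pure function (the helper name and sample values are illustrative, not part of the channel):

```python
def format_title(scrapedtitle, duration, scrapedhd):
    # Kodi renders the [COLOR ...] BBCode-style tags; here they pass through as plain text.
    if scrapedhd == 'HD':
        return "[COLOR yellow]" + duration + "[/COLOR] [COLOR red]HD[/COLOR] " + scrapedtitle
    return "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle


print(format_title("Some clip", "10:05", "HD"))
# [COLOR yellow]10:05[/COLOR] [COLOR red]HD[/COLOR] Some clip
```

Keeping the markup in the title (rather than a separate field) is what lets the same string be reused as `contentTitle` elsewhere in these channels.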
15
channels/pornohdmega.json
Executable file
@@ -0,0 +1,15 @@
{
  "id": "pornohdmega",
  "name": "pornohdmega",
  "active": true,
  "adult": true,
  "language": ["*"],
  "thumbnail": "https://www.pornohdmega.com/wp-content/uploads/2018/11/dftyu.png",
  "banner": "",
  "categories": [
    "adult"
  ],
  "settings": [
  ]
}
108
channels/pornohdmega.py
Executable file
108
channels/pornohdmega.py
Executable file
@@ -0,0 +1,108 @@
|
||||
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools

host = 'https://www.pornohdmega.com'


def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/?order=recent"))
    itemlist.append( Item(channel=item.channel, title="Mejor valorados" , action="lista", url=host + "/?order=top-rated"))
    itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/?order=most-viewed"))
    itemlist.append( Item(channel=item.channel, title="Canal" , action="catalogo", url=host + "/categories/"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "+")
    item.url = host + "/?s=%s" % texto
    try:
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def categorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
    patron = '<li><a href=\'([^\']+)\' title=\'([^\']+) Tag\'>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle in matches:
        scrapedplot = ""
        if not "tag" in scrapedurl:
            scrapedurl = ""
        thumbnail = ""
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=thumbnail , plot=scrapedplot) )
    return itemlist


def catalogo(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
    patron = '<h2><a href="([^"]+)">([^<]+)</a></h2>.*?'
    patron += '<strong>(\d+) Videos</strong>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,cantidad in matches:
        scrapedplot = ""
        scrapedtitle = "%s (%s)" % (scrapedtitle,cantidad)
        thumbnail = ""
        itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
                              thumbnail=thumbnail , plot=scrapedplot) )
    return itemlist


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
    patron = '<figure class="video-preview"><a href="([^"]+)".*?'
    patron += '<img src="([^"]+)".*?'
    patron += 'title="([^"]+)"'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedthumbnail,scrapedtitle in matches:
        title = scrapedtitle
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
                              fanart=thumbnail, plot=plot))
    next_page = scrapertools.find_single_match(data, '<a class="nextpostslink" rel="next" href="([^"]+)"')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )
    return itemlist


def play(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    patron = '<iframe src="([^"]+)"'
    matches = scrapertools.find_multiple_matches(data, patron)
    for url in matches:
        itemlist.append(item.clone(action="play", title="%s", url=url))
    itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
    return itemlist
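Every function in pornohdmega.py follows the same "patron" idiom: the regex is built up in readable `patron +=` pieces and applied with `re.DOTALL` so `.*?` can span line breaks in the markup. A minimal, self-contained sketch of the `lista()` pattern against made-up sample HTML (not taken from the real site):

```python
import re

# Hypothetical listing HTML of the shape the lista() pattern expects.
data = ('<figure class="video-preview"><a href="/video/1">'
        '<img src="/thumb1.jpg" title="First video"></a></figure>'
        '<figure class="video-preview"><a href="/video/2">'
        '<img src="/thumb2.jpg" title="Second video"></a></figure>')

# Same pattern as in the diff: one capture group per field, concatenated in pieces.
patron = '<figure class="video-preview"><a href="([^"]+)".*?'
patron += '<img src="([^"]+)".*?'
patron += 'title="([^"]+)"'
matches = re.compile(patron, re.DOTALL).findall(data)
# Each match is an (url, thumbnail, title) tuple, one per <figure>.
```

With multiple groups, `findall` returns tuples, which the channel unpacks directly in its `for scrapedurl,scrapedthumbnail,scrapedtitle in matches:` loop.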
@@ -1,16 +1,17 @@
 # -*- coding: utf-8 -*-
 #------------------------------------------------------------
-import re
-import urlparse
-
-from core import httptools
+import urlparse,urllib2,urllib,re
+import os, sys
+from platformcode import config, logger
 from core import scrapertools
 from core.item import Item
-from platformcode import logger
-from platformcode import config
 from core import servertools
+from core import httptools

 host = 'https://www.pornrewind.com'

+# hacer funcionar conector Kt player

 def mainlist(item):
     logger.info()
     itemlist = []
@@ -2,14 +2,12 @@

 import re
 import urllib
-
 import urlparse
-
 from core import httptools
 from core import scrapertools
 from core.item import Item
 from platformcode import config, logger
 from platformcode import config

 host = "https://www.porntrex.com"
 perpage = 20
@@ -27,6 +25,7 @@ def mainlist(item):
     itemlist.append(item.clone(action="categorias", title="Modelos",
                                url=host + "/models/?mode=async&function=get_block&block_id=list_models_models" \
                                           "_list&sort_by=total_videos"))
+    itemlist.append(item.clone(action="categorias", title="Canal", url=host + "/channels/"))
     itemlist.append(item.clone(action="playlists", title="Listas", url=host + "/playlists/"))
     itemlist.append(item.clone(action="tags", title="Tags", url=host + "/tags/"))
     itemlist.append(item.clone(title="Buscar...", action="search"))
@@ -59,15 +58,14 @@ def search(item, texto):
 def lista(item):
     logger.info()
     itemlist = []

     # Descarga la pagina
     data = get_data(item.url)

     action = "play"
     if config.get_setting("menu_info", "porntrex"):
         action = "menu_info"
-    # Quita las entradas, que no son private
-    patron = '<div class="video-preview-screen video-item thumb-item ".*?<a href="([^"]+)".*?'
+    # Quita las entradas, que no son private <div class="video-preview-screen video-item thumb-item private "
+    patron = '<div class="video-preview-screen video-item thumb-item ".*?'
+    patron += '<a href="([^"]+)".*?'
     patron += 'data-src="([^"]+)".*?'
     patron += 'alt="([^"]+)".*?'
     patron += '<span class="quality">(.*?)<.*?'
@@ -120,21 +118,25 @@ def lista(item):
 def categorias(item):
     logger.info()
     itemlist = []

     # Descarga la pagina
     data = get_data(item.url)

     # Extrae las entradas
-    patron = '<a class="item" href="([^"]+)" title="([^"]+)".*?src="([^"]+)".*?<div class="videos">([^<]+)<'
+    if "/channels/" in item.url:
+        patron = '<div class="video-item ">.*?<a href="([^"]+)" title="([^"]+)".*?src="([^"]+)".*?<li>([^<]+)<'
+    else:
+        patron = '<a class="item" href="([^"]+)" title="([^"]+)".*?src="([^"]+)".*?<div class="videos">([^<]+)<'
     matches = scrapertools.find_multiple_matches(data, patron)
     for scrapedurl, scrapedtitle, scrapedthumbnail, videos in matches:
         if "go.php?" in scrapedurl:
             scrapedurl = urllib.unquote(scrapedurl.split("/go.php?u=")[1].split("&")[0])
+            scrapedthumbnail = urllib.unquote(scrapedthumbnail.split("/go.php?u=")[1].split("&")[0])
+            scrapedthumbnail += "|Referer=https://www.porntrex.com/"
         else:
             scrapedurl = urlparse.urljoin(host, scrapedurl)
         if not scrapedthumbnail.startswith("https"):
             scrapedthumbnail = "https:%s" % scrapedthumbnail
+            scrapedthumbnail += "|Referer=https://www.porntrex.com/"
+        scrapedthumbnail = scrapedthumbnail.replace(" " , "%20")
         if videos:
             scrapedtitle = "%s (%s)" % (scrapedtitle, videos)
         itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
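The "go.php?" branch in `categorias()` handles outbound redirect links: the real target URL is percent-encoded in the `u` query parameter, so the code splits it out and unquotes it. A self-contained sketch of that step (the diff uses Python 2's `urllib.unquote`; the Python 3 equivalent is shown, and the sample URL is invented):

```python
# Python 3 equivalent of the Python 2 urllib.unquote step for "go.php?u=" outlinks.
from urllib.parse import unquote

# Hypothetical redirect link of the shape the channel encounters.
scrapedurl = "https://www.porntrex.com/go.php?u=https%3A%2F%2Fpartner.example.com%2Fchannel%2F1&t=x"
if "go.php?" in scrapedurl:
    # Keep only the "u" parameter value, then percent-decode it.
    scrapedurl = unquote(scrapedurl.split("/go.php?u=")[1].split("&")[0])
```

After the split on `&`, trailing tracking parameters are discarded and `unquote` restores the plain `https://` URL.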
@@ -158,7 +160,10 @@ def playlists(item):
     # Descarga la pagina
     data = get_data(item.url)
     # Extrae las entradas
-    patron = '<div class="item.*?href="([^"]+)" title="([^"]+)".*?data-original="([^"]+)".*?<div class="totalplaylist">([^<]+)<'
+    patron = '<div class="item.*?'
+    patron += 'href="([^"]+)" title="([^"]+)".*?'
+    patron += 'data-original="([^"]+)".*?'
+    patron += '<div class="totalplaylist">([^<]+)<'
     matches = scrapertools.find_multiple_matches(data, patron)
     for scrapedurl, scrapedtitle, scrapedthumbnail, videos in matches:
         if "go.php?" in scrapedurl:
@@ -168,6 +173,8 @@ def playlists(item):
             scrapedurl = urlparse.urljoin(host, scrapedurl)
         if not scrapedthumbnail.startswith("https"):
             scrapedthumbnail = "https:%s" % scrapedthumbnail
+            scrapedthumbnail += "|Referer=https://www.porntrex.com/"
+        scrapedthumbnail = scrapedthumbnail.replace(" " , "%20")
         if videos:
             scrapedtitle = "%s [COLOR red](%s)[/COLOR]" % (scrapedtitle, videos)
         itemlist.append(item.clone(action="videos", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
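The `|Referer=` suffix added to thumbnails throughout these hunks relies on Kodi's convention of appending request headers to a media URL after a `|`, which the channel uses for hotlink-protected art. A sketch of the normalization steps as they appear in the diff, on an invented protocol-relative URL:

```python
# Normalization steps from the diff (sample URL is made up): complete the
# scheme on a protocol-relative URL, append a Kodi-style "|Referer=" header
# suffix, and percent-encode spaces so Kodi accepts the URL.
scrapedthumbnail = "//cdn.example.com/some thumb.jpg"
if not scrapedthumbnail.startswith("https"):
    scrapedthumbnail = "https:%s" % scrapedthumbnail
scrapedthumbnail += "|Referer=https://www.porntrex.com/"
scrapedthumbnail = scrapedthumbnail.replace(" ", "%20")
```

Everything after the `|` is parsed by Kodi as `Header=value` pairs rather than as part of the URL itself.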
@@ -195,7 +202,12 @@ def videos(item):
     if config.get_setting("menu_info", "porntrex"):
         action = "menu_info"
     # Extrae las entradas
-    patron = '<div class="video-item.*?href="([^"]+)".*?title="([^"]+)".*?src="([^"]+)"(.*?)<div class="durations">.*?</i>([^<]+)</div>'
+    # Quita las entradas, que no son private <div class="video-item private ">
+    patron = '<div class="video-item ".*?'
+    patron += 'href="([^"]+)".*?'
+    patron += 'title="([^"]+)".*?'
+    patron += 'src="([^"]+)"(.*?)<div class="durations">.*?'
+    patron += '</i>([^<]+)</div>'
     matches = scrapertools.find_multiple_matches(data, patron)
     count = 0
     for scrapedurl, scrapedtitle, scrapedthumbnail, quality, duration in matches:
@@ -209,60 +221,51 @@ def videos(item):
         scrapedurl = urlparse.urljoin(host, scrapedurl)
         if not scrapedthumbnail.startswith("https"):
             scrapedthumbnail = "https:%s" % scrapedthumbnail
-        if duration:
-            scrapedtitle = "%s - %s" % (duration, scrapedtitle)
-        if '>HD<' in quality:
-            scrapedtitle += " [COLOR red][HD][/COLOR]"
+        scrapedthumbnail += "|Referer=https://www.porntrex.com/"
+        scrapedthumbnail = scrapedthumbnail.replace(" " , "%20")
+        if 'k4"' in quality:
+            quality = "4K"
+            scrapedtitle = "%s - [COLOR yellow]%s[/COLOR] %s" % (duration, quality, scrapedtitle)
+        else:
+            quality = scrapertools.find_single_match(quality, '<span class="quality">(.*?)<.*?')
+            scrapedtitle = "%s - [COLOR red]%s[/COLOR] %s" % (duration, quality, scrapedtitle)
         if len(itemlist) >= perpage:
             break
-        itemlist.append(item.clone(action=action, title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail, contentThumbnail=scrapedthumbnail,
-                                   fanart=scrapedthumbnail))
+        itemlist.append(item.clone(action=action, title=scrapedtitle, url=scrapedurl, contentThumbnail=scrapedthumbnail,
+                                   fanart=scrapedthumbnail, thumbnail=scrapedthumbnail))
     #Extrae la marca de siguiente página
     if item.channel and len(itemlist) >= perpage:
         itemlist.append( item.clone(title = "Página siguiente >>>", indexp = count + 1) )

     return itemlist


 def play(item):
     logger.info()
     itemlist = []

     data = get_data(item.url)
-    patron = '(?:video_url|video_alt_url[0-9]*)\s*:\s*\'([^\']+)\'.*?(?:video_url_text|video_alt_url[0-9]*_text)\s*:\s*\'([^\']+)\''
+    patron = '(?:video_url|video_alt_url[0-9]*):\s*\'([^\']+)\'.*?'
+    patron += '(?:video_url_text|video_alt_url[0-9]*_text):\s*\'([^\']+)\''
     matches = scrapertools.find_multiple_matches(data, patron)
     if not matches:
         patron = '<iframe.*?height="(\d+)".*?video_url\s*:\s*\'([^\']+)\''
         matches = scrapertools.find_multiple_matches(data, patron)
     scrapertools.printMatches(matches)
     for url, quality in matches:
         if "https" in quality:
             calidad = url
             url = quality
             quality = calidad + "p"

         quality = quality.replace(" HD" , "").replace(" 4k", "")
         itemlist.append(['.mp4 %s [directo]' % quality, url])

     if item.extra == "play_menu":
         return itemlist, data

     return itemlist


 def menu_info(item):
     logger.info()
     itemlist = []

     video_urls, data = play(item.clone(extra="play_menu"))
     itemlist.append(item.clone(action="play", title="Ver -- %s" % item.title, video_urls=video_urls))

-    matches = scrapertools.find_multiple_matches(data, '<img class="thumb lazy-load".*?data-original="([^"]+)"')
+    matches = scrapertools.find_multiple_matches(data, '<img class="thumb lazy-load" src="([^"]+)"')
     for i, img in enumerate(matches):
         if i == 0:
             continue
-        img = urlparse.urljoin(host, img)
-        img += "|Referer=https://www.porntrex.com/"
+        img = "https:" + img + "|Referer=https://www.porntrex.com/"
         title = "Imagen %s" % (str(i))
         itemlist.append(item.clone(action="", title=title, thumbnail=img, fanart=img))

     return itemlist


@@ -321,5 +324,4 @@ def get_data(url_orig):
             post = ""
         else:
             break

     return response.data
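The rewritten porntrex `play()` pulls (URL, quality-label) pairs out of the player's JavaScript config with the split-up `video_url`/`video_alt_url` pattern above. A self-contained sketch of that extraction against a made-up config snippet (URLs and labels are invented; the regex is the one from the diff):

```python
import re

# Hypothetical player config of the shape porntrex's play() parses.
data = ("video_url: 'https://cdn.example.com/v_480.mp4', "
        "video_url_text: '480p', "
        "video_alt_url: 'https://cdn.example.com/v_1080.mp4', "
        "video_alt_url_text: '1080p HD'")

# Pattern from the diff: pairs each video_url/video_alt_urlN with its _text label.
patron = '(?:video_url|video_alt_url[0-9]*):\s*\'([^\']+)\'.*?'
patron += '(?:video_url_text|video_alt_url[0-9]*_text):\s*\'([^\']+)\''
matches = re.findall(patron, data, re.DOTALL)

itemlist = []
for url, quality in matches:
    # Same label cleanup as the channel applies before building the entry.
    quality = quality.replace(" HD", "").replace(" 4k", "")
    itemlist.append(['.mp4 %s [directo]' % quality, url])
```

The lazy `.*?` keeps each URL bound to the label that immediately follows it, so alternate qualities come out as separate entries.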
15	channels/porntv.json	Executable file
@@ -0,0 +1,15 @@
{
    "id": "porntv",
    "name": "porntv",
    "active": true,
    "adult": true,
    "language": ["*"],
    "thumbnail": "https://www.porntv.com/images/dart/logo.png",
    "banner": "",
    "categories": [
        "adult"
    ],
    "settings": [
    ]
}
104	channels/porntv.py	Executable file
@@ -0,0 +1,104 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools

host = 'https://www.porntv.com'


def mainlist(item):
    logger.info()
    itemlist = []
    itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/videos/straight/all-recent.html"))
    itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/videos/straight/all-view.html"))
    itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/videos/straight/all-rate.html"))
    itemlist.append( Item(channel=item.channel, title="Mas popular" , action="lista", url=host + "/videos/straight/all-popular.html"))
    itemlist.append( Item(channel=item.channel, title="Mas largos" , action="lista", url=host + "/videos/straight/all-length.html"))
    itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/"))
    itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
    return itemlist


def search(item, texto):
    logger.info()
    texto = texto.replace(" ", "")
    item.url = host + "/videos/straight/%s-recent.html" % texto
    try:
        return lista(item)
    except:
        import sys
        for line in sys.exc_info():
            logger.error("%s" % line)
        return []


def categorias(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
    data = scrapertools.find_single_match(data, '<h1>Popular Categories</h1>(.*?)<h1>Community</h1>')
    patron = '<h2><a href="([^"]+)">([^<]+)</a>.*?'
    patron += 'src="([^"]+)".*?'
    patron += '<span class="contentquantity">([^<]+)</span>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedtitle,scrapedthumbnail,cantidad in matches:
        scrapedplot = ""
        scrapedurl = urlparse.urljoin(item.url,scrapedurl)
        title = scrapedtitle + " " + cantidad
        itemlist.append( Item(channel=item.channel, action="lista", title=title, url=scrapedurl,
                              fanart=scrapedthumbnail, thumbnail=scrapedthumbnail , plot=scrapedplot) )
    return itemlist


def lista(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
    patron = '<div class="item" style="width: 320px">.*?'
    patron += '<a href="([^"]+)".*?'
    patron += '<img src="([^"]+)".*?'
    patron += '>(.*?)<div class="trailer".*?'
    patron += 'title="([^"]+)".*?'
    patron += 'clock"></use></svg>([^<]+)</span>'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for scrapedurl,scrapedthumbnail,quality,scrapedtitle,scrapedtime in matches:
        title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
        if "flag-hd" in quality:
            title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + "[COLOR red]" + "HD" + "[/COLOR] " + scrapedtitle
        scrapedurl = urlparse.urljoin(item.url,scrapedurl)
        thumbnail = scrapedthumbnail
        plot = ""
        itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
                              fanart=thumbnail, thumbnail=thumbnail, plot=plot, contentTitle = scrapedtitle))

    next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="next"')
    if next_page:
        next_page = urlparse.urljoin(item.url,next_page)
        itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
                              url=next_page) )
    return itemlist


def play(item):
    logger.info()
    itemlist = []
    data = httptools.downloadpage(item.url).data
    data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
    data = scrapertools.find_single_match(data, 'sources: \[(.*?)\]')
    patron = 'file: "([^"]+)",.*?label: "([^"]+)",'
    matches = re.compile(patron,re.DOTALL).findall(data)
    for url,quality in matches:
        itemlist.append(["%s %s [directo]" % (quality, url), url])
    return itemlist
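porntv's `play()` works in two stages: it first isolates the JWPlayer-style `sources: [...]` array, then pulls `file`/`label` pairs out of it. A self-contained sketch with plain `re` standing in for the project's `scrapertools` helpers (the sample player snippet and URLs are invented):

```python
import re

# Hypothetical player setup of the shape porntv's play() parses.
data = ('jwplayer("p").setup({ sources: ['
        '{file: "https://cdn.example.com/v_360.mp4", label: "360p",},'
        '{file: "https://cdn.example.com/v_720.mp4", label: "720p",}'
        '] });')

# Stage 1: isolate the sources array (scrapertools.find_single_match equivalent).
sources = re.findall('sources: \[(.*?)\]', data, re.DOTALL)[0]

# Stage 2: pair each file URL with its quality label, as in the diff.
patron = 'file: "([^"]+)",.*?label: "([^"]+)",'
matches = re.compile(patron, re.DOTALL).findall(sources)
itemlist = [["%s %s [directo]" % (quality, url), url] for url, quality in matches]
```

Narrowing to the array first keeps the lazy `file`/`label` pattern from matching unrelated JavaScript elsewhere on the page.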
Some files were not shown because too many files have changed in this diff.