Revert "Merge branch 'stable' into stable"

This reverts commit a641beef22, reversing
changes made to 04c9d46a99.
This commit is contained in:
marco
2019-11-01 21:45:34 +01:00
parent a641beef22
commit ed7b5e94e0
201 changed files with 4346 additions and 5094 deletions

View File

@@ -1,228 +0,0 @@
---
name: Test Canale
about: Page for testing a channel
title: ''
labels: Test Canale
assignees: ''
---
For each test, keep the line matching the outcome and delete the others; add information where needed. Where possible, specify the kind of problem you run into in that test.
If you have suggestions/tips/doubts about the tests... propose them and/or ask!
***
Test No. 1: Channel list
What you need: the .json file
1. Check that the channel appears in the sections listed under the "categories" key of the .json file.
- [ ] All
- [ ] Some - list the sections where the channel is missing
- [ ] None - the channel entry is missing from the list. In this case you cannot continue the test.
2. Channel icons [ ]
- [ ] Present
- [ ] Not present
***
Test No. 2: Configure Channel
1. The "Configura Canale" entry is present
- [Yes]
- [No]
2. Entries shown in "Configura Canale"
a. Fetch extra information (Default: On)
- [Yes]
- [No]
b. Include in What's New (Default: On)
- [Yes]
- [No]
c. Include in What's New - Italian (Default: On)
- [Yes]
- [No]
d. Include in global search (Default: On)
- [Yes]
- [No]
e. Check whether links exist (Default: On)
- [Yes]
- [No]
f. Number of links to check (Default: 10)
- [Yes]
- [No]
g. Show links by language (Default: Do not filter)
- [Yes]
- [No]
***
Test No. 3: Menu entries on the channel page
1. Autoplay configuration
- [Yes]
- [No]
2. Channel configuration
- [Yes]
- [No]
***
Test No. 4: Site vs. channel page comparison
What you need: the .py file; consult def mainlist()
Reminder:
the structure of mainlist is:
( 'Menu entry 1', ['/url/', etc, etc])
( 'Menu entry 2', ['', etc, etc])
where url is an extra string appended to the site's main URL; if url is '' it stands for the site's main address.
This test compares the titles you find by opening the channel's menu entries with what you see on the corresponding page of the site.
- [Menu entry with problems - Kind of problem] ( copy for every entry that does not match )
Kind of problem = titles are missing, titles are wrong, titles show the wrong posters, or other
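Under the assumption that a channel's mainlist follows the tuple layout in the reminder above, URL resolution can be sketched like this (the host value and entry names are made-up examples, not taken from any real channel):

```python
# Hypothetical host and menu entries, mirroring the reminder above.
host = "https://example-site.tld"

menu = [
    ("Al Cinema", ["/cinema/", "peliculas"]),
    ("Novita", ["", "peliculas"]),  # '' means the site's main address
]

def resolve(entry):
    # Append the extra string to the main URL; '' falls back to host itself.
    title, args = entry
    path = args[0]
    return host + path if path else host
```

Here `resolve(menu[0])` points at the "/cinema/" listing, while the empty string in the second entry resolves to the site's home page.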
The following tests are split depending on whether the channel carries films, TV series or anime.
Delete the sections that do not apply to the channel. Check them against the "categories" key of the .json file.
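Which sections to keep can be read straight from the channel's .json; a minimal sketch, assuming a file shaped like the ones in this commit (the content below is illustrative, not a real channel file):

```python
import json

# Illustrative channel metadata; a real file sits next to the channel's .py.
channel_json = '''
{
  "active": true,
  "language": ["ita"],
  "categories": ["movie", "vosi"]
}
'''

categories = json.loads(channel_json).get("categories", [])
# A channel listing only "movie" keeps the FILM section and drops the others.
```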
**FILM section
Tests to run while you are on the titles page. For every title, check that the context menu entries are present.
1. Add film to library
- [Yes]
- [No]
Add 2-3 titles to the library. We will check the library later.
- [Added correctly]
- [List any problems] (copy-paste for every title you had the problem with)
2. Download film
- [Yes]
- [No]
3. Pagination ( click the "Successivo" entry and check page 2 the same way you checked page 1 )
- [Ok]
- [X - describe the kind of problem]
4. Search, or Search film...
Search for a random title in KOD and for the same title on the site. Compare the results.
- [Ok]
- [X - describe the kind of problem]
5. Open the title's page and check that the last entry is "Aggiungi in videoteca":
- [Yes, it appears]
- [It does not appear]
6. Any problems found
- [ write the problem(s) here ]
**TV Series section
Tests to run while you are on the titles page. For every title, check that the context menu entries are present.
1. Add series to library
- [Yes]
- [No]
2. Add 2-3 titles to the library. We will check the library later.
- [Added correctly]
- [List any problems] (copy-paste for every title you had the problem with)
3. Download series
- [Yes]
- [No]
4. Search, or Search series...
Search for a random title in KOD and for the same title on the site. Compare the results.
- [Ok]
- [X - describe the kind of problem]
5. Open the series page and check that the last entry is "Aggiungi in videoteca":
- [It does not appear]
- [Yes, it appears]
6. Open the episode page; the "Aggiungi in videoteca" entry must NOT appear:
- [It does not appear]
- [Yes, it appears]
7. Any problems found
- [ write the problem(s) here ]
**Anime section
Tests to run while you are on the titles page. For every title, check that the context menu entries are present.
1. Add series to library
- [Yes]
- [No]
2. Add 2-3 titles to the library. We will check the library later.
- [Added correctly]
- [List any problems] (copy-paste for every title you had the problem with)
3. Download series
- [Yes]
- [No]
4. Renumbering
- [Yes]
- [No]
5. Search, or Search series...
Search for a random title in KOD and for the same title on the site. Compare the results.
- [Ok]
- [X - describe the kind of problem]
6. Open the series page and check that the last entry is "Aggiungi in videoteca":
- [Yes, it appears]
- [It does not appear]
7. Open the episode page; the "Aggiungi in videoteca" entry must NOT appear:
- [It does not appear]
- [Yes, it appears]
8. Any problems found
- [ write the problem(s) here ]
**End of the single-channel test!!!

View File

@@ -5,9 +5,8 @@
<import addon="script.module.libtorrent" optional="true"/>
<import addon="metadata.themoviedb.org"/>
<import addon="metadata.tvdb.com"/>
</requires>
<extension library="default.py" point="xbmc.python.pluginsource">
<extension point="xbmc.python.pluginsource" library="default.py">
<provides>video</provides>
</extension>
<extension point="xbmc.addon.metadata">
@@ -20,9 +19,7 @@
<screenshot>resources/media/themes/ss/2.png</screenshot>
<screenshot>resources/media/themes/ss/3.png</screenshot>
</assets>
<news>KoD 0.5
-riscritti molti canali per cambiamenti nella struttura stessa di kod
-altre robe carine</news>
<news>Benvenuto su KOD!</news>
<description lang="it">Naviga velocemente sul web e guarda i contenuti presenti</description>
<disclaimer>[COLOR red]The owners and submitters to this addon do not host or distribute any of the content displayed by these addons nor do they have any affiliation with the content providers.[/COLOR]
[COLOR yellow]Kodi © is a registered trademark of the XBMC Foundation. We are not connected to or in any other way affiliated with Kodi, Team Kodi, or the XBMC Foundation. Furthermore, any software, addons, or products offered by us will receive no support in official Kodi channels, including the Kodi forums and various social networks.[/COLOR]</disclaimer>
@@ -32,6 +29,6 @@
<forum>https://t.me/kodiondemand</forum>
<source>https://github.com/kodiondemand/addon</source>
</extension>
<extension library="videolibrary_service.py" point="xbmc.service" start="login|startup">
<extension point="xbmc.service" library="videolibrary_service.py" start="login|startup">
</extension>
</addon>
</addon>

View File

@@ -1,13 +1,13 @@
{
"altadefinizione01": "https://www.altadefinizione01.cc",
"altadefinizione01_club": "https://www.altadefinizione01.cc",
"altadefinizione01_link": "http://altadefinizione1.com",
"altadefinizione01_link": "http://altadefinizione1.link",
"altadefinizione01": "https://www.altadefinizione01.cc",
"altadefinizioneclick": "https://altadefinizione.cloud",
"altadefinizionehd": "https://altadefinizionetv.best",
"animeforce": "https://ww1.animeforce.org",
"animeforge": "https://ww1.animeforce.org",
"animeleggendari": "https://animepertutti.com",
"animespace": "http://www.animespace.tv",
"animestream": "https://www.animeworld.it",
"animespace": "https://www.animespace.tv",
"animesubita": "http://www.animesubita.org",
"animetubeita": "http://www.animetubeita.com",
"animevision": "https://www.animevision.it",
@@ -17,29 +17,37 @@
"casacinemainfo": "https://www.casacinema.info",
"cb01anime": "https://www.cineblog01.ink",
"cinemalibero": "https://www.cinemalibero.best",
"cinemastreaming": "https://cinemastreaming.icu",
"documentaristreamingda": "https://documentari-streaming-da.com",
"dreamsub": "https://www.dreamsub.stream",
"eurostreaming": "https://eurostreaming.pink",
"eurostreaming_video": "https://www.eurostreaming.best",
"fastsubita": "http://fastsubita.com",
"ffilms":"https://ffilms.org",
"filmigratis": "https://filmigratis.net",
"filmgratis": "https://www.filmaltadefinizione.net",
"filmigratis": "https://filmigratis.org",
"filmontv": "https://www.comingsoon.it",
"filmpertutti": "https://www.filmpertutti.pub",
"filmsenzalimiti": "https://filmsenzalimiti.best",
"filmsenzalimiticc": "https://www.filmsenzalimiti.host",
"filmsenzalimiti_blue": "https://filmsenzalimiti.best",
"filmsenzalimiti_info": "https://www.filmsenzalimiti.host",
"filmstreaming01": "https://filmstreaming01.com",
"filmstreamingita": "http://filmstreamingita.live",
"guarda_serie": "https://guardaserie.site",
"guardafilm": "http://www.guardafilm.top",
"guardarefilm": "https://www.guardarefilm.red",
"guardaserie_stream": "https://guardaserie.co",
"guardaseriecc": "https://guardaserie.site",
"guardaserieclick": "https://www.guardaserie.media",
"guardaserie_stream": "https://guardaserie.co",
"guardaserieonline": "http://www.guardaserie.media",
"guardogratis": "http://guardogratis.net",
"ilgeniodellostreaming": "https://ilgeniodellostreaming.se",
"italiafilm": "https://www.italia-film.pw",
"italiafilmhd": "https://italiafilm.info",
"italiaserie": "https://italiaserie.org",
"itastreaming": "https://itastreaming.film",
"majintoon": "https://toonitalia.org",
"mondolunatico": "http://mondolunatico.org",
"mondolunatico2": "http://mondolunatico.org/stream/",
"mondoserietv": "https://mondoserietv.com",
@@ -48,9 +56,7 @@
"serietvonline": "https://serietvonline.tech",
"serietvsubita": "http://serietvsubita.xyz",
"serietvu": "https://www.serietvu.club",
"streamingaltadefinizione": "https://www.streamingaltadefinizione.me",
"streamtime": "https://t.me/s/StreamTime",
"streamingaltadefinizione": "https://www.streamingaltadefinizione.best",
"tantifilm": "https://www.tantifilm.eu",
"toonitalia": "https://toonitalia.org",
"vedohd": "https://vedohd.icu/"
"toonitalia": "https://toonitalia.org"
}

View File

@@ -1,14 +1,16 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
import re
import urlparse
host = 'https://www.likuoo.video'
from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = 'http://www.likuoo.video'
def mainlist(item):
@@ -81,20 +83,13 @@ def lista(item):
def play(item):
itemlist = []
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|amp;|\s{2}|&nbsp;", "", data)
patron = 'url:\'([^\']+)\'.*?'
patron += 'data:\'([^\']+)\''
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl,post in matches:
post = post.replace("%3D", "=")
scrapedurl = host + scrapedurl
logger.debug( item.url +" , "+ scrapedurl +" , " +post )
datas = httptools.downloadpage(scrapedurl, post=post, headers={'Referer':item.url}).data
datas = datas.replace("\\", "")
url = scrapertools.find_single_match(datas, '<iframe src="([^"]+)"')
itemlist.append( Item(channel=item.channel, action="play", title = "%s", url=url ))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.title = item.fulltitle
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videochannel=item.channel
return itemlist
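The old `play()` above relies on `scrapertools.find_single_match` to pull the iframe URL out of the downloaded page; its behaviour boils down to a single regex capture, sketched here with plain `re` (the HTML sample is made up):

```python
import re

def find_single_match(data, pattern):
    # Behaves like scrapertools.find_single_match: return the first
    # capture group of the first match, or '' when nothing matches.
    match = re.search(pattern, data, re.DOTALL)
    return match.group(1) if match else ""

html = '<div class="player"><iframe src="https://hoster.example/embed/abc123"></iframe></div>'
url = find_single_match(html, '<iframe src="([^"]+)"')
# -> 'https://hoster.example/embed/abc123'
```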

View File

@@ -1,12 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
import re
import urllib
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = 'http://www.txxx.com'

View File

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://www.absoluporn.es'
@@ -89,7 +90,7 @@ def play(item):
matches = scrapertools.find_multiple_matches(data, patron)
for servervideo,path,filee in matches:
scrapedurl = servervideo + path + "56ea912c4df934c216c352fa8d623af3" + filee
itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
return itemlist

View File

@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
import base64
from platformcode import logger
from platformcode import config
host = 'http://www.alsoporn.com'
@@ -66,9 +66,8 @@ def lista(item):
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
thumbnail = scrapedthumbnail
plot = ""
if not "0:00" in scrapedtime:
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle = scrapedtitle))
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle = scrapedtitle))
next_page = scrapertools.find_single_match(data,'<li><a href="([^"]+)" target="_self"><span class="alsoporn_page">NEXT</span></a>')
if next_page!="":
@@ -82,18 +81,11 @@ def play(item):
itemlist = []
data = httptools.downloadpage(item.url).data
scrapedurl = scrapertools.find_single_match(data,'<iframe frameborder=0 scrolling="no" src=\'([^\']+)\'')
data = httptools.downloadpage(scrapedurl).data
data = httptools.downloadpage(item.url).data
scrapedurl1 = scrapertools.find_single_match(data,'<iframe src="(.*?)"')
scrapedurl1 = scrapedurl1.replace("//www.playercdn.com/ec/i2.php?url=", "")
scrapedurl1 = base64.b64decode(scrapedurl1 + "=")
logger.debug(scrapedurl1)
data = httptools.downloadpage(scrapedurl1).data
if "xvideos" in scrapedurl1:
scrapedurl2 = scrapertools.find_single_match(data, 'html5player.setVideoHLS\(\'([^\']+)\'\)')
if "xhamster" in scrapedurl1:
scrapedurl2 = scrapertools.find_single_match(data, '"[0-9]+p":"([^"]+)"').replace("\\", "")
logger.debug(scrapedurl2)
itemlist.append(item.clone(action="play", title=item.title, url=scrapedurl2))
scrapedurl1 = scrapedurl1.replace("//www.playercdn.com/ec/i2.php?", "https://www.trinitytube.xyz/ec/i2.php?")
data = httptools.downloadpage(item.url).data
scrapedurl2 = scrapertools.find_single_match(data,'<source src="(.*?)"')
itemlist.append(item.clone(action="play", title=item.title, fulltitle = item.title, url=scrapedurl2))
return itemlist
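The removed `play()` above appends a single '=' before calling `base64.b64decode`; base64 input must be padded to a multiple of 4 characters, which can be done generically (the encoded sample here is made up):

```python
import base64

def b64decode_padded(s):
    # Pad with '=' up to a multiple of 4 chars, instead of the
    # hard-coded single '=' used in the removed play().
    return base64.b64decode(s + "=" * (-len(s) % 4)).decode()

decoded = b64decode_padded("aHR0cHM6Ly9leGFtcGxlLm9yZw")
# -> 'https://example.org'
```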

View File

@@ -3,116 +3,59 @@
# Canale per altadefinizione01
# ------------------------------------------------------------
"""
TO BE FINISHED - CHECK
"""
##from specials import autoplay
from core import support #,servertools
from core import servertools, httptools, tmdb, scrapertoolsV2, support
from core.item import Item
from platformcode import config, logger
from platformcode import logger, config
from specials import autoplay
# URL that always redirects to the current domain
#host = "https://altadefinizione01.to"
__channel__ = "altadefinizione01"
host = config.get_channel_url(__channel__)
headers = [['User-Agent', 'Mozilla/50.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]
list_servers = ['verystream','openload','rapidvideo','streamango']
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['openload', 'streamango', 'rapidvideo', 'streamcherry', 'megadrive']
list_quality = ['default']
checklinks = config.get_setting('checklinks', 'altadefinizione01')
checklinks_number = config.get_setting('checklinks_number', 'altadefinizione01')
headers = [['Referer', host]]
blacklist_categorie = ['Altadefinizione01', 'Altadefinizione.to']
@support.menu
def mainlist(item):
support.log()
itemlist =[]
support.menu(itemlist, 'Al Cinema','peliculas',host+'/cinema/')
support.menu(itemlist, 'Ultimi Film Inseriti','peliculas',host)
support.menu(itemlist, 'Film Sub-ITA','peliculas',host+'/sub-ita/')
support.menu(itemlist, 'Film Ordine Alfabetico ','AZlist',host+'/catalog/')
support.menu(itemlist, 'Categorie Film','categories',host)
support.menu(itemlist, 'Cerca...','search')
film = [
('Al Cinema', ['/cinema/', 'peliculas', 'pellicola']),
('Generi', ['', 'categorie', 'genres']),
('Lettera', ['/catalog/a/', 'categorie', 'orderalf']),
('Anni', ['', 'categorie', 'years']),
('Sub-ITA', ['/sub-ita/', 'peliculas', 'pellicola'])
]
autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)
return locals()
return itemlist
@support.scrape
def peliculas(item):
support.log('peliculas',item)
## support.dbg()
action="findvideos"
if item.args == "search":
patronBlock = r'</script> <div class="boxgrid caption">(?P<block>.*)<div id="right_bar">'
else:
patronBlock = r'<div class="cover_kapsul ml-mask">(?P<block>.*)<div class="page_nav">'
patron = r'<div class="cover boxcaption"> <h2>.<a href="(?P<url>[^"]+)">.*?<.*?src="(?P<thumb>[^"]+)"'\
'.+?[^>]+>[^>]+<div class="trdublaj"> (?P<quality>[A-Z/]+)<[^>]+>(?:.[^>]+>(?P<lang>.*?)<[^>]+>).*?'\
'<p class="h4">(?P<title>.*?)</p>[^>]+> [^>]+> [^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+> [^>]+> '\
'[^>]+>[^>]+>(?P<year>\d{4})[^>]+>[^>]+> [^>]+>[^>]+>(?P<duration>\d+).+?>.*?<p>(?P<plot>[^<]+)<'
patronNext = '<span>\d</span> <a href="([^"]+)">'
## support.regexDbg(item, patron, headers)
return locals()
def categories(item):
support.log(item)
itemlist = support.scrape(item,'<li><a href="([^"]+)">(.*?)</a></li>',['url','title'],headers,'Altadefinizione01',patron_block='<ul class="kategori_list">(.*?)</ul>',action='peliculas')
return support.thumb(itemlist)
@support.scrape
def categorie(item):
support.log('categorie',item)
def AZlist(item):
support.log()
return support.scrape(item,r'<a title="([^"]+)" href="([^"]+)"',['title','url'],headers,patron_block=r'<div class="movies-letter">(.*?)<\/div>',action='peliculas_list')
if item.args != 'orderalf': action = "peliculas"
else: action = 'orderalf'
blacklist = ['Altadefinizione01']
if item.args == 'genres':
patronBlock = r'<ul class="kategori_list">(?P<block>.*)<div class="tab-pane fade" id="wtab2">'
patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
elif item.args == 'years':
patronBlock = r'<ul class="anno_list">(?P<block>.*)</a></li> </ul> </div>'
patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
elif item.args == 'orderalf':
patronBlock = r'<div class="movies-letter">(?P<block>.*)<div class="clearfix">'
patron = '<a title=.*?href="(?P<url>[^"]+)"><span>(?P<title>.*?)</span>'
#support.regexDbg(item, patronBlock, headers)
return locals()
@support.scrape
def orderalf(item):
support.log('orderalf',item)
action= 'findvideos'
patron = r'<td class="mlnh-thumb"><a href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"'\
'.+?[^>]+>[^>]+ [^>]+[^>]+ [^>]+>(?P<title>[^<]+).*?[^>]+>(?P<year>\d{4})<'\
'[^>]+>[^>]+>(?P<quality>[A-Z]+)[^>]+> <td class="mlnh-5">(?P<lang>.*?)</td>'
patronNext = r'<span>[^<]+</span>[^<]+<a href="(.*?)">'
return locals()
def findvideos(item):
support.log('findvideos', item)
return support.server(item, headers=headers)
def search(item, text):
logger.info("%s mainlist search log: %s %s" % (__channel__, item, text))
itemlist = []
text = text.replace(" ", "+")
item.url = host + "/index.php?do=search&story=%s&subaction=search" % (text)
item.args = "search"
try:
return peliculas(item)
# Catch the exception so a broken channel does not interrupt the global search!
except:
import sys
for line in sys.exc_info():
logger.error("search except %s: %s" % (__channel__, line))
return []
def newest(categoria):
# import web_pdb; web_pdb.set_trace()
support.log(categoria)
itemlist = []
item = Item()
@@ -124,7 +67,7 @@ def newest(categoria):
if itemlist[-1].action == "peliculas":
itemlist.pop()
# Keep searching if an error occurs
# Keep searching if an error occurs
except:
import sys
for line in sys.exc_info():
@@ -132,3 +75,76 @@ def newest(categoria):
return []
return itemlist
def search(item, texto):
support.log(texto)
item.url = "%s/index.php?do=search&story=%s&subaction=search" % (
host, texto)
try:
return peliculas(item)
# Keep searching if an error occurs
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def peliculas(item):
support.log()
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
patron = r'<div class="cover_kapsul ml-mask".*?<a href="(.*?)">(.*?)<\/a>.*?<img .*?src="(.*?)".*?<div class="trdublaj">(.*?)<\/div>.(<div class="sub_ita">(.*?)<\/div>|())'
matches = scrapertoolsV2.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, scrapedquality, subDiv, subText, empty in matches:
info = scrapertoolsV2.find_multiple_matches(data, r'<span class="ml-label">([0-9]+)+<\/span>.*?<span class="ml-label">(.*?)<\/span>.*?<p class="ml-cat".*?<p>(.*?)<\/p>.*?<a href="(.*?)" class="ml-watch">')
infoLabels = {}
for infoLabels['year'], duration, scrapedplot, checkUrl in info:
if checkUrl == scrapedurl:
break
infoLabels['duration'] = int(duration.replace(' min', '')) * 60 # compute the duration in seconds
scrapedthumbnail = host + scrapedthumbnail
scrapedtitle = scrapertoolsV2.decodeHtmlentities(scrapedtitle)
fulltitle = scrapedtitle
if subDiv:
fulltitle += support.typo(subText + ' _ () color limegreen')
fulltitle += support.typo(scrapedquality.strip()+ ' _ [] color kod')
itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType=item.contenType,
contentTitle=scrapedtitle,
contentQuality=scrapedquality.strip(),
plot=scrapedplot,
title=fulltitle,
fulltitle=scrapedtitle,
show=scrapedtitle,
url=scrapedurl,
infoLabels=infoLabels,
thumbnail=scrapedthumbnail))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
support.nextPage(itemlist,item,data,'<span>[^<]+</span>[^<]+<a href="(.*?)">')
return itemlist
def peliculas_list(item):
support.log()
item.fulltitle = ''
block = r'<tbody>(.*)<\/tbody>'
patron = r'<a href="([^"]+)" title="([^"]+)".*?> <img.*?src="([^"]+)".*?<td class="mlnh-3">([0-9]{4}).*?mlnh-4">([A-Z]+)'
return support.scrape(item,patron, ['url', 'title', 'thumb', 'year', 'quality'], patron_block=block)
def findvideos(item):
support.log()
itemlist = support.server(item, headers=headers)
return itemlist
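`peliculas()` above converts the scraped "NN min" duration string into seconds for `infoLabels`; the same computation in isolation:

```python
def duration_to_seconds(text):
    # Mirrors infoLabels['duration'] in peliculas(): strip the ' min'
    # suffix and convert minutes to seconds.
    return int(text.replace(" min", "")) * 60

seconds = duration_to_seconds("105 min")
# -> 6300
```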

View File

@@ -11,6 +11,14 @@
"movie"
],
"settings": [
{
"id": "channel_host",
"type": "text",
"label": "Host del canale",
"default": "https://altadefinizione01.estate/",
"enabled": true,
"visible": true
},
{
"id": "modo_grafico",
"type": "bool",

View File

@@ -3,94 +3,131 @@
# -*- Riscritto per KOD -*-
# -*- By Greko -*-
# -*- last change: 04/05/2019
# -*- doppione di altadefinizione01
from specials import autoplay
from core import servertools, support
from core import channeltools, servertools, support
from core.item import Item
from platformcode import config, logger
from specials import autoplay
__channel__ = "altadefinizione01_club"
host = config.get_channel_url(__channel__)
# ======== Features =============================
checklinks = config.get_setting('checklinks', __channel__)
checklinks_number = config.get_setting('checklinks_number', __channel__)
headers = [['User-Agent', 'Mozilla/50.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]
list_servers = ['verystream','openload','rapidvideo','streamango']
parameters = channeltools.get_channel_parameters(__channel__)
fanart_host = parameters['fanart']
thumbnail_host = parameters['thumbnail']
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['verystream','openload','supervideo','rapidvideo','streamango'] # for autoplay
list_quality = ['default']
@support.menu
# =========== home menu ===================
def mainlist(item):
"""
Build the channel's main menu
:param item:
:return: itemlist []
"""
logger.info("%s mainlist log: %s" % (__channel__, item))
itemlist = []
film = [
('Al Cinema', ['/cinema/', 'peliculas', 'pellicola']),
('Generi', ['', 'categorie', 'genres']),
('Lettera', ['/catalog/a/', 'categorie', 'orderalf']),
('Anni', ['', 'categorie', 'years']),
('Sub-ITA', ['/sub-ita/', 'peliculas', 'pellicola'])
]
# Main menu
support.menu(itemlist, 'Film Ultimi Arrivi bold', 'peliculas', host, args='pellicola')
support.menu(itemlist, 'Genere', 'categorie', host, args='genres')
support.menu(itemlist, 'Per anno submenu', 'categorie', host, args=['Film per Anno','years'])
support.menu(itemlist, 'Per lettera', 'categorie', host + '/catalog/a/', args=['Film per Lettera','orderalf'])
support.menu(itemlist, 'Al Cinema bold', 'peliculas', host + '/cinema/', args='pellicola')
support.menu(itemlist, 'Sub-ITA bold', 'peliculas', host + '/sub-ita/', args='pellicola')
support.menu(itemlist, 'Cerca film submenu', 'search', host, args = 'search')
return locals()
autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)
support.channel_config(item, itemlist)
return itemlist
# ======== defs in menu order ===========================
# =========== def to list the films =============
@support.scrape
def peliculas(item):
## import web_pdb; web_pdb.set_trace()
support.log('peliculas',item)
logger.info("%s mainlist peliculas log: %s" % (__channel__, item))
itemlist = []
action="findvideos"
patron_block = r'<div id="dle-content">(.*?)<div class="page_nav">'
if item.args == "search":
patronBlock = r'</script> <div class="boxgrid caption">(?P<block>.*)<div id="right_bar">'
else:
patronBlock = r'<div class="cover_kapsul ml-mask">(?P<block>.*)<div class="page_nav">'
patron = r'<div class="cover boxcaption"> <h2>.<a href="(?P<url>[^"]+)">.*?<.*?src="(?P<thumb>[^"]+)"'\
'.+?[^>]+>[^>]+<div class="trdublaj"> (?P<quality>[A-Z]+)<[^>]+>(?:.[^>]+>(?P<lang>.*?)<[^>]+>).*?'\
'<p class="h4">(?P<title>.*?)</p>[^>]+> [^>]+> [^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+> [^>]+> '\
'[^>]+>[^>]+>(?P<year>\d{4})[^>]+>[^>]+> [^>]+>[^>]+>(?P<duration>\d+).+?>'
patron_block = r'</table> </form>(.*?)<div class="search_bg">'
patron = r'<h2>.<a href="(.*?)".*?src="(.*?)".*?(?:|<div class="sub_ita">(.*?)</div>)[ ]</div>.*?<p class="h4">(.*?)</p>'
patronNext = '<span>\d</span> <a href="([^"]+)">'
listGroups = ['url', 'thumb', 'lang', 'title', 'year']
return locals()
patronNext = '<span>[^<]+</span>[^<]+<a href="(.*?)">'
itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
headers= headers, patronNext=patronNext,patron_block=patron_block,
action='findvideos')
return itemlist
# =========== def for the categories page ======================================
@support.scrape
def categorie(item):
support.log('categorie',item)
## import web_pdb; web_pdb.set_trace()
if item.args != 'orderalf': action = "peliculas"
else: action = 'orderalf'
blacklist = 'Altadefinizione01'
logger.info("%s mainlist categorie log: %s" % (__channel__, item))
itemlist = []
# make the appropriate changes from here on
patron = r'<li><a href="(.*?)">(.*?)</a>'
action = 'peliculas'
if item.args == 'genres':
patronBlock = r'<ul class="kategori_list">(?P<block>.*)</ul>'
patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
elif item.args == 'years':
patronBlock = r'<ul class="anno_list">(?P<block>.*)</ul>'
patron = '<li><a href="(?P<url>[^"]+)">(?P<title>.*?)</a>'
elif item.args == 'orderalf':
patronBlock = r'<div class="movies-letter">(.*)<div class="clearfix">'
patron = '<a title=.*?href="(?P<url>[^"]+)"><span>(?P<title>.*?)</span>'
bloque = r'<ul class="kategori_list">(.*?)</ul>'
elif item.args[1] == 'years':
bloque = r'<ul class="anno_list">(.*?)</ul>'
elif item.args[1] == 'orderalf':
bloque = r'<div class="movies-letter">(.*)<div class="clearfix">'
patron = r'<a title=.*?href="(.*?)"><span>(.*?)</span>'
action = 'orderalf'
listGroups = ['url', 'title']
patronNext = ''
itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
headers= headers, patronNext=patronNext, patron_block = bloque,
action=action)
return itemlist
return locals()
# =========== def for the alphabetical list page ===============================
@support.scrape
def orderalf(item):
support.log('orderalf',item)
logger.info("%s mainlist orderalf log: %s" % (__channel__, item))
itemlist = []
action= 'findvideos'
patron = r'<td class="mlnh-thumb"><a href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"'\
'.+?[^>]+>[^>]+ [^>]+[^>]+ [^>]+>(?P<title>[^<]+).*?[^>]+>(?P<year>\d{4})<'\
'[^>]+>[^>]+>(?P<quality>[A-Z]+)[^>]+> <td class="mlnh-5">(?P<lang>.*?)</td>'
patronNext = r'<span>[^<]+</span>[^<]+<a href="(.*?)">'
listGroups = ['url', 'title', 'thumb', 'year', 'lang']
patron = r'<td class="mlnh-thumb"><a href="(.*?)".title="(.*?)".*?src="(.*?)".*?mlnh-3">(.*?)<.*?"mlnh-5">.<(.*?)<td' #scrapertools.find_single_match(data, '<td class="mlnh-thumb"><a href="(.*?)".title="(.*?)".*?src="(.*?)".*?mlnh-3">(.*?)<.*?"mlnh-5">.<(.*?)<td')
patronNext = r'<span>[^<]+</span>[^<]+<a href="(.*?)">'
itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
headers= headers, patronNext=patronNext,
action='findvideos')
return itemlist
return locals()
# =========== def for the film page with the servers to watch it =============
def findvideos(item):
support.log('findvideos', item)
logger.info("%s mainlist findvideos_film log: %s" % (__channel__, item))
itemlist = []
return support.server(item, headers=headers)
# =========== def to search films/TV series =============
@@ -100,7 +137,7 @@ def search(item, text):
itemlist = []
text = text.replace(" ", "+")
item.url = host + "/index.php?do=search&story=%s&subaction=search" % (text)
item.args = "search"
#item.extra = "search"
try:
return peliculas(item)
# Catch the exception so a broken channel does not interrupt the global search!
@@ -113,17 +150,16 @@ def search(item, text):
# =========== def for the What's New entries in the main menu =============
def newest(categoria):
support.log(categoria)
logger.info("%s mainlist newest log: %s" % (__channel__, categoria))
itemlist = []
item = Item()
try:
if categoria == "peliculas":
item.url = host
item.action = "peliculas"
itemlist = peliculas(item)
item.url = host
item.action = "peliculas"
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()
if itemlist[-1].action == "peliculas":
itemlist.pop()
# Keep searching if an error occurs
except:
import sys

View File

@@ -4,11 +4,24 @@
"active": true,
"adult": false,
"language": ["ita"],
"bannermenu": "altadefinizione01link.png",
"banner": "altadefinizione01link.png",
"fanart": "altadefinizione01link.png",
"categories": ["movie","vosi"],
"fanart": "https://altadefinizione01.estate/templates/Dark/img/nlogo.png",
"thumbnail": "https://altadefinizione01.estate/templates/Dark/img/nlogo.png",
"banner": "https://altadefinizione01.estate/templates/Dark/img/nlogo.png",
"fix" : "reimpostato url e modificato file per KOD",
"change_date": "2019-30-04",
"categories": [
"movie",
"vosi"
],
"settings": [
{
"id": "channel_host",
"type": "text",
"label": "Host del canale",
"default": "https://altadefinizione01.estate/",
"enabled": true,
"visible": true
},
{
"id": "modo_grafico",
"type": "bool",


@@ -2,78 +2,97 @@
# -*- Channel Altadefinizione01L Film - Serie -*-
# -*- By Greko -*-
import channelselector
from specials import autoplay
from core import servertools, support#, jsontools
from core import servertools, support, jsontools
from core.item import Item
from platformcode import config, logger
__channel__ = "altadefinizione01_link"
# ======== utility defs START ============================
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
list_servers = ['supervideo', 'streamcherry','rapidvideo', 'streamango', 'openload']
list_quality = ['default']
# =========== home menu ===================
@support.menu
def mainlist(item):
## support.dbg()
host = config.get_setting("channel_host", __channel__)
film = [
('Al Cinema', ['/film-del-cinema', 'peliculas', '']),
('Generi', ['', 'genres', 'genres']),
('Anni', ['', 'genres', 'years']),
('Qualità', ['/piu-visti.html', 'genres', 'quality']),
('Mi sento fortunato', ['/piu-visti.html', 'genres', 'lucky']),
('Popolari', ['/piu-visti.html', 'peliculas', '']),
('Sub-ITA', ['/film-sub-ita/', 'peliculas', ''])
]
headers = [['Referer', host]]
# =========== home menu ===================
def mainlist(item):
"""
Creo il menu principale del canale
:param item:
:return: itemlist []
"""
support.log()
itemlist = []
# Main menu
support.menu(itemlist, 'Novità bold', 'peliculas', host)
support.menu(itemlist, 'Film per Genere', 'genres', host, args='genres')
support.menu(itemlist, 'Film per Anno submenu', 'genres', host, args='years')
support.menu(itemlist, 'Film per Qualità submenu', 'genres', host, args='quality')
support.menu(itemlist, 'Al Cinema bold', 'peliculas', host + '/film-del-cinema')
support.menu(itemlist, 'Popolari bold', 'peliculas', host + '/piu-visti.html')
support.menu(itemlist, 'Mi sento fortunato bold', 'genres', host, args='lucky')
support.menu(itemlist, 'Sub-ITA bold', 'peliculas', host + '/film-sub-ita/')
support.menu(itemlist, 'Cerca film submenu', 'search', host)
# for autoplay
autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)
support.channel_config(item, itemlist)
## search = ''
return locals()
return itemlist
# ======== defs in order of action from the menu ===========================
@support.scrape
def peliculas(item):
## support.dbg()
support.log('peliculas',item)
support.log()
itemlist = []
## patron = r'class="innerImage">.*?href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"'\
## '.*?class="ml-item-title">(?P<title>[^<]+)</.*?class="ml-item-label"> '\
## '(?P<year>\d{4}) <.*?class="ml-item-label"> (?P<duration>\d+) .*?'\
## 'class="ml-item-label ml-item-label-.+?"> (?P<quality>.+?) <.*?'\
## 'class="ml-itelangm-label"> (?P<lang>.+?) </'
patron = r'class="innerImage">.*?href="([^"]+)".*?src="([^"]+)"'\
'.*?class="ml-item-title">([^<]+)</.*?class="ml-item-label"> (\d{4}) <'\
'.*?class="ml-item-label">.*?class="ml-item-label ml-item-label-.+?"> '\
'(.+?) </div>.*?class="ml-item-label"> (.+?) </'
listGroups = ['url', 'thumb', 'title', 'year', 'quality', 'lang']
patron = r'class="innerImage">.*?href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+> (?P<year>\d{4})[^>]+>[^>]+> (?P<duration>\d+)[^>]+>[^>]+> (?P<quality>[a-zA-Z\\]+)[^>]+>[^>]+> (?P<lang>.*?) [^>]+>'
patronNext = r'<span>\d</span> <a href="([^"]+)">'
## debug = True
return locals()
itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
headers= headers, patronNext=patronNext,
action='findvideos')
return itemlist
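The decorated functions above end with `return locals()`: the `@support.scrape` decorator reads the scraping parameters (`patron`, `patronNext`, `action`, …) out of that dict instead of taking them as arguments. A minimal sketch of the pattern (hypothetical, not KOD's actual `support.scrape` implementation):

```python
def scrape(func):
    # Hypothetical stand-in for support.scrape: the wrapped function
    # returns locals(), and the decorator pulls its parameters from there.
    def wrapper(item):
        params = func(item)                      # dict of the function's locals
        patron = params.get('patron')            # regex used to scrape the page
        action = params.get('action', 'findvideos')
        # A real implementation would now download item.url, apply
        # `patron`, and turn every match into a result item.
        return {'patron': patron, 'action': action}
    return wrapper

@scrape
def peliculas(item):
    patron = r'href="(?P<url>[^"]+)"'
    action = 'findvideos'
    return locals()
```

With this shape, adding a new scraping parameter to a channel is just another local variable, which is why the real functions above set names like `patronBlock` or `pagination` and then simply `return locals()`.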
# =========== categories page def ======================================
@support.scrape
def genres(item):
support.log('genres',item)
def genres(item):
support.log()
itemlist = []
#data = httptools.downloadpage(item.url, headers=headers).data
action = 'peliculas'
## item.contentType = 'movie'
if item.args == 'genres':
patronBlock = r'<ul class="listSubCat" id="Film">(?P<block>.*)</ul>'
bloque = r'<ul class="listSubCat" id="Film">(.*?)</ul>'
elif item.args == 'years':
patronBlock = r'<ul class="listSubCat" id="Anno">(?P<block>.*)</ul>'
bloque = r'<ul class="listSubCat" id="Anno">(.*?)</ul>'
elif item.args == 'quality':
patronBlock = r'<ul class="listSubCat" id="Qualita">(?P<block>.*)</ul>'
bloque = r'<ul class="listSubCat" id="Qualita">(.*?)</ul>'
elif item.args == 'lucky': # these are the random titles on the page, they change once a day
patronBlock = r'FILM RANDOM.*?class="listSubCat">(?P<block>.*)</ul>'
bloque = r'FILM RANDOM.*?class="listSubCat">(.*?)</ul>'
action = 'findvideos'
## item.args = ''
patron = r'<li><a href="(?P<url>[^"]+)">(?P<title>[^<]+)<'
return locals()
patron = r'<li><a href="([^"]+)">(.*?)<'
listGroups = ['url','title']
itemlist = support.scrape(item, patron=patron, listGroups=listGroups,
headers= headers, patron_block = bloque,
action=action)
return itemlist
# =========== def to search movies/TV series =============
#host+/index.php?do=search&story=avatar&subaction=search
@@ -84,7 +103,7 @@ def search(item, text):
item.url = host+"/index.php?do=search&story=%s&subaction=search" % (text)
try:
return peliculas(item)
# Catch the exception, so the global search is not interrupted if a channel fails
except:
import sys
for line in sys.exc_info():
@@ -115,4 +134,14 @@ def newest(categoria):
return itemlist
def findvideos(item):
return support.server(item, headers=headers)
support.log()
itemlist = support.server(item, headers=headers)
# Required for FilterTools
# itemlist = filtertools.get_links(itemlist, item, list_language)
# Required for AutoPlay
autoplay.start(itemlist, item)
return itemlist


@@ -3,60 +3,41 @@
# Channel for altadefinizioneclick
# ----------------------------------------------------------
from specials import autoplay
import re
from core import servertools, support
from core.item import Item
from platformcode import config, logger
from platformcode import logger, config
from specials import autoplay
#host = config.get_setting("channel_host", 'altadefinizioneclick')
__channel__ = 'altadefinizioneclick'
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['verystream', 'openload', 'streamango', "vidoza", "thevideo", "okru", 'youtube']
list_quality = ['1080p']
@support.menu
checklinks = config.get_setting('checklinks', 'altadefinizioneclick')
checklinks_number = config.get_setting('checklinks_number', 'altadefinizioneclick')
headers = [['Referer', host]]
def mainlist(item):
support.log()
itemlist = []
film = '' #'/nuove-uscite/'
filmSub = [
('Novità', ['/nuove-uscite/', 'peliculas']),
('Al Cinema', ['/film-del-cinema', 'peliculas']),
('Generi', ['', 'menu', 'Film']),
('Anni', ['', 'menu', 'Anno']),
('Qualità', ['', 'menu', 'Qualita']),
('Sub-ITA', ['/sub-ita/', 'peliculas'])
]
support.menu(itemlist, 'Film', 'peliculas', host + "/nuove-uscite/")
support.menu(itemlist, 'Per Genere submenu', 'menu', host, args='Film')
support.menu(itemlist, 'Per Anno submenu', 'menu', host, args='Anno')
support.menu(itemlist, 'Sub-ITA', 'peliculas', host + "/sub-ita/")
support.menu(itemlist, 'Cerca...', 'search', host, 'movie')
support.aplay(item, itemlist,list_servers, list_quality)
support.channel_config(item, itemlist)
return locals()
return itemlist
@support.scrape
def menu(item):
support.log()
action='peliculas'
patron = r'<li><a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a></li>'
patronBlock= r'<ul class="listSubCat" id="'+ str(item.args) + '">(?P<block>.*)</ul>'
return locals()
@support.scrape
def peliculas(item):
support.log()
if item.extra == 'search':
patron = r'<a href="(?P<url>[^"]+)">\s*<div class="wrapperImage">(?:<span class="hd">(?P<quality>[^<]+)'\
'<\/span>)?<img[^s]+src="(?P<thumb>[^"]+)"[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)<[^<]+>'\
'(?:.*?IMDB:\s(\2[^<]+)<\/div>)?'
else:
patron = r'<img width[^s]+src="(?P<thumb>[^"]+)[^>]+><\/a>.*?<a href="(?P<url>[^"]+)">(?P<title>[^(?:\]|<)]+)'\
'(?:\[(?P<lang>[^\]]+)\])?<\/a>[^>]+>[^>]+>[^>]+>(?:\sIMDB\:\s(?P<rating>[^<]+)<)?'\
'(?:.*?<span class="hd">(?P<quality>[^<]+)<\/span>)?\s*<a'
# when SEARCH is chosen, the data-entry form opens
patronNext = r'<a class="next page-numbers" href="([^"]+)">'
return locals()
def search(item, texto):
support.log("search ", texto)
@@ -96,6 +77,36 @@ def newest(categoria):
return itemlist
def menu(item):
support.log()
itemlist = support.scrape(item, '<li><a href="([^"]+)">([^<]+)</a></li>', ['url', 'title'], headers, patron_block='<ul class="listSubCat" id="'+ str(item.args) + '">(.*?)</ul>', action='peliculas')
return support.thumb(itemlist)
def peliculas(item):
support.log()
if item.extra == 'search':
patron = r'<a href="([^"]+)">\s*<div class="wrapperImage">(?:<span class="hd">([^<]+)<\/span>)?<img[^s]+src="([^"]+)"[^>]+>[^>]+>[^>]+>([^<]+)<[^<]+>(?:.*?IMDB:\s([^<]+)<\/div>)?'
elements = ['url', 'quality', 'thumb', 'title', 'rating']
else:
patron = r'<img width[^s]+src="([^"]+)[^>]+><\/a>.*?<a href="([^"]+)">([^(?:\]|<)]+)(?:\[([^\]]+)\])?<\/a>[^>]+>[^>]+>[^>]+>(?:\sIMDB\:\s([^<]+)<)?(?:.*?<span class="hd">([^<]+)<\/span>)?\s*<a'
elements =['thumb', 'url', 'title','lang', 'rating', 'quality']
itemlist = support.scrape(item, patron, elements, headers, patronNext='<a class="next page-numbers" href="([^"]+)">')
return itemlist
def findvideos(item):
support.log('findvideos', item)
return support.hdpass_get_servers(item)
support.log()
itemlist = support.hdpass_get_servers(item)
if checklinks:
itemlist = servertools.check_list_links(itemlist, checklinks_number)
# itemlist = filtertools.get_links(itemlist, item, list_language)
autoplay.start(itemlist, item)
support.videolibrary(itemlist, item ,'color kod bold')
return itemlist


@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.analdin.com/es'
@@ -108,6 +108,6 @@ def play(item):
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl in matches:
url = scrapedurl
itemlist.append(item.clone(action="play", title=url, url=url))
itemlist.append(item.clone(action="play", title=url, fulltitle = item.title, url=url))
return itemlist


@@ -4,17 +4,17 @@
"language": ["ita"],
"active": true,
"adult": false,
"thumbnail": "animeforce.png",
"banner": "animeforce.png",
"thumbnail": "http://www.animeforce.org/wp-content/uploads/2013/05/logo-animeforce.png",
"banner": "http://www.animeforce.org/wp-content/uploads/2013/05/logo-animeforce.png",
"categories": ["anime"],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Includi in Ricerca Globale",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": true
"visible": false
},
{
"id": "include_in_newest_anime",
@@ -31,39 +31,6 @@
"default": true,
"enabled": true,
"visible": true
},
{
"id": "checklinks",
"type": "bool",
"label": "Verifica se i link esistono",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "checklinks_number",
"type": "list",
"label": "Numero di link da verificare",
"default": 1,
"enabled": true,
"visible": "eq(-1,true)",
"lvalues": [ "1", "3", "5", "10" ]
},
{
"id": "autorenumber",
"type": "bool",
"label": "@70712",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "autorenumber_mode",
"type": "bool",
"label": "@70688",
"default": false,
"enabled": true,
"visible": "eq(-1,true)"
}
}
]
}


@@ -1,132 +1,505 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Channel for AnimeForce
# Thanks to the Icarus crew
# Channel for http://animeinstreaming.net/
# ------------------------------------------------------------
import re
import urllib
import urlparse
from core import httptools, scrapertools, servertools, tmdb
from core.item import Item
from platformcode import config, logger
from servers.decrypters import adfly
from core import support
__channel__ = "animeforce"
host = support.config.get_channel_url(__channel__)
__channel__ = "animeforce"
host = config.get_channel_url(__channel__)
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['directo', 'openload']
list_quality = ['default']
checklinks = support.config.get_setting('checklinks', __channel__)
checklinks_number = support.config.get_setting('checklinks_number', __channel__)
headers = [['Referer', host]]
PERPAGE = 20
@support.menu
# -----------------------------------------------------------------
def mainlist(item):
anime = ['/lista-anime/',
('In Corso',['/lista-anime-in-corso/']),
('Ultimi Episodi',['','peliculas','update']),
('Ultime Serie',['/category/anime/articoli-principali/','peliculas','last'])
]
return locals()
log("mainlist", "mainlist", item.channel)
itemlist = [Item(channel=item.channel,
action="lista_anime",
title="[COLOR azure]Anime [/COLOR]- [COLOR lightsalmon]Lista Completa[/COLOR]",
url=host + "/lista-anime/",
thumbnail=CategoriaThumbnail,
fanart=CategoriaFanart),
Item(channel=item.channel,
action="animeaggiornati",
title="[COLOR azure]Anime Aggiornati[/COLOR]",
url=host,
thumbnail=CategoriaThumbnail,
fanart=CategoriaFanart),
Item(channel=item.channel,
action="ultimiep",
title="[COLOR azure]Ultimi Episodi[/COLOR]",
url=host,
thumbnail=CategoriaThumbnail,
fanart=CategoriaFanart),
Item(channel=item.channel,
action="search",
title="[COLOR yellow]Cerca ...[/COLOR]",
thumbnail="http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search")]
return itemlist
# =================================================================
# -----------------------------------------------------------------
def newest(categoria):
support.log(categoria)
log("newest", "newest" + categoria)
itemlist = []
item = support.Item()
item = Item()
try:
if categoria == "anime":
item.url = host
item.args = 'update'
itemlist = peliculas(item)
item.action = "ultimiep"
itemlist = ultimiep(item)
if itemlist[-1].action == "peliculas":
if itemlist[-1].action == "ultimiep":
itemlist.pop()
# Continue the search in case of error
except:
import sys
for line in sys.exc_info():
support.logger.error("{0}".format(line))
logger.error("{0}".format(line))
return []
return itemlist
@support.scrape
# =================================================================
# -----------------------------------------------------------------
def search(item, texto):
search = texto
item.contentType = 'tvshow'
patron = '<strong><a href="(?P<url>[^"]+)">(?P<title>.*?) [Ss][Uu][Bb]'
action = 'episodios'
return locals()
log("search", "search", item.channel)
item.url = host + "/?s=" + texto
try:
return search_anime(item)
# Continue the search in case of error
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
@support.scrape
def peliculas(item):
anime = True
if item.args == 'update':
patron = r'src="(?P<thumb>[^"]+)" class="attachment-grid-post[^"]+" alt="[^"]*" title="(?P<title>[^"]+").*?<h2><a href="(?P<url>[^"]+)"'
def itemHook(item):
delete = support.scrapertoolsV2.find_single_match(item.fulltitle, r'( Episodio.*)')
number = support.scrapertoolsV2.find_single_match(item.title, r'Episodio (\d+)')
item.title = support.typo(number + ' - ','bold') + item.title.replace(delete,'')
item.fulltitle = item.show = item.fulltitle.replace(delete,'')
item.url = item.url.replace('-episodio-'+ number,'')
item.number = number
return item
action = 'findvideos'
# =================================================================
elif item.args == 'last':
patron = r'src="(?P<thumb>[^"]+)" class="attachment-grid-post[^"]+" alt="[^"]*" title="(?P<title>.*?)(?: Sub| sub| SUB|").*?<h2><a href="(?P<url>[^"]+)"'
action = 'episodios'
# -----------------------------------------------------------------
def search_anime(item):
log("search_anime", "search_anime", item.channel)
itemlist = []
else:
pagination = ''
patron = '<strong><a href="(?P<url>[^"]+)">(?P<title>.*?) [Ss][Uu][Bb]'
action = 'episodios'
data = httptools.downloadpage(item.url).data
return locals()
patron = r'<a href="([^"]+)"><img.*?src="([^"]+)".*?title="([^"]+)".*?/>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
if "Sub Ita Download & Streaming" in scrapedtitle or "Sub Ita Streaming" in scrapedtitle:
if 'episodio' in scrapedtitle.lower():
itemlist.append(episode_item(item, scrapedtitle, scrapedurl, scrapedthumbnail))
else:
scrapedtitle, eptype = clean_title(scrapedtitle, simpleClean=True)
cleantitle, eptype = clean_title(scrapedtitle)
scrapedurl, total_eps = create_url(scrapedurl, cleantitle)
itemlist.append(
Item(channel=item.channel,
action="episodios",
text_color="azure",
contentType="tvshow",
title=scrapedtitle,
url=scrapedurl,
fulltitle=cleantitle,
show=cleantitle,
thumbnail=scrapedthumbnail))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
# Next Page
next_page = scrapertools.find_single_match(data, r'<link rel="next" href="([^"]+)"[^/]+/>')
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="search_anime",
text_bold=True,
title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
url=next_page,
thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"))
return itemlist
@support.scrape
def episodios(item):
anime = True
patron = r'<td style[^>]+>\s*.*?(?:<span[^>]+)?<strong>(?P<title>[^<]+)<\/strong>.*?<td style[^>]+>\s*<a href="(?P<url>[^"]+)"[^>]+>'
def itemHook(item):
item.url = item.url.replace(host, '')
return item
action = 'findvideos'
return locals()
# =================================================================
# -----------------------------------------------------------------
def animeaggiornati(item):
log("animeaggiornati", "animeaggiornati", item.channel)
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
patron = r'<img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+><a href="([^"]+)">([^<]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
if 'Streaming' in scrapedtitle:
cleantitle, eptype = clean_title(scrapedtitle)
# Build the URL
scrapedurl, total_eps = create_url(scrapedurl, scrapedtitle)
itemlist.append(
Item(channel=item.channel,
action="episodios",
text_color="azure",
contentType="tvshow",
title=cleantitle,
url=scrapedurl,
fulltitle=cleantitle,
show=cleantitle,
thumbnail=scrapedthumbnail))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
def findvideos(item):
support.log(item)
# =================================================================
# -----------------------------------------------------------------
def ultimiep(item):
log("ultimiep", "ultimiep", item.channel)
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
patron = r'<img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+><a href="([^"]+)">([^<]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
if 'Streaming' in scrapedtitle:
itemlist.append(episode_item(item, scrapedtitle, scrapedurl, scrapedthumbnail))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
# =================================================================
# -----------------------------------------------------------------
def lista_anime(item):
log("lista_anime", "lista_anime", item.channel)
itemlist = []
if item.number:
item.url = support.match(item, r'<a href="([^"]+)"[^>]*>', patronBlock=r'Episodio %s(.*?)</tr>' % item.number)[0][0]
if 'http' not in item.url:
if '//' in item.url[:2]:
item.url = 'http:' + item.url
elif host not in item.url:
item.url = host + item.url
if 'adf.ly' in item.url:
item.url = adfly.get_long_url(item.url)
elif 'bit.ly' in item.url:
item.url = support.httptools.downloadpage(item.url, only_headers=True, follow_redirects=False).headers.get("location")
matches = support.match(item, r'button"><a href="([^"]+)"')[0]
p = 1
if '{}' in item.url:
item.url, p = item.url.split('{}')
p = int(p)
# Load the page
data = httptools.downloadpage(item.url).data
# Extract the contents
patron = r'<li>\s*<strong>\s*<a\s*href="([^"]+?)">([^<]+?)</a>\s*</strong>\s*</li>'
matches = re.compile(patron, re.DOTALL).findall(data)
scrapedplot = ""
scrapedthumbnail = ""
for i, (scrapedurl, scrapedtitle) in enumerate(matches):
if (p - 1) * PERPAGE > i: continue
if i >= p * PERPAGE: break
# Clean up the title
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle).strip()
cleantitle, eptype = clean_title(scrapedtitle, simpleClean=True)
for video in matches:
itemlist.append(
support.Item(channel=item.channel,
action="play",
title='diretto',
url=video,
server='directo'))
Item(channel=item.channel,
extra=item.extra,
action="episodios",
text_color="azure",
contentType="tvshow",
title=cleantitle,
url=scrapedurl,
thumbnail=scrapedthumbnail,
fulltitle=cleantitle,
show=cleantitle,
plot=scrapedplot,
folder=True))
support.server(item, itemlist=itemlist)
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
if len(matches) >= p * PERPAGE:
scrapedurl = item.url + '{}' + str(p + 1)
itemlist.append(
Item(channel=item.channel,
extra=item.extra,
action="lista_anime",
title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
url=scrapedurl,
thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png",
folder=True))
return itemlist
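`lista_anime` above pages through the full list client-side: the current page number rides inside the item url after a `{}` marker, and only the slice of matches belonging to that page is emitted. The slicing logic, extracted into a standalone sketch (the `PERPAGE` value and the `{}` convention are taken from the code above):

```python
PERPAGE = 20

def paginate(url, matches):
    # The page number is smuggled into the url after a '{}' marker,
    # e.g. "https://host/lista-anime/{}3" means page 3 of that listing.
    p = 1
    if '{}' in url:
        url, p = url.split('{}')
        p = int(p)
    # Keep only the slice belonging to page p.
    page_items = [m for i, m in enumerate(matches)
                  if (p - 1) * PERPAGE <= i < p * PERPAGE]
    # If more items remain, build the url for the next page.
    next_url = url + '{}' + str(p + 1) if len(matches) >= p * PERPAGE else None
    return page_items, next_url
```

This mirrors the `(p - 1) * PERPAGE > i: continue` / `i >= p * PERPAGE: break` bounds and the `item.url + '{}' + str(p + 1)` next-page item in the function above.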
# =================================================================
# -----------------------------------------------------------------
def episodios(item):
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<td style="[^"]*?">\s*.*?<strong>(.*?)</strong>.*?\s*</td>\s*<td style="[^"]*?">\s*<a href="([^"]+?)"[^>]+>\s*<img.*?src="([^"]+?)".*?/>\s*</a>\s*</td>'
matches = re.compile(patron, re.DOTALL).findall(data)
vvvvid_videos = False
for scrapedtitle, scrapedurl, scrapedimg in matches:
if 'nodownload' in scrapedimg or 'nostreaming' in scrapedimg:
continue
if 'vvvvid' in scrapedurl.lower():
if not vvvvid_videos: vvvvid_videos = True
itemlist.append(Item(title='I Video VVVVID Non sono supportati', text_color="red"))
continue
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
scrapedtitle = re.sub(r'<[^>]*?>', '', scrapedtitle)
scrapedtitle = '[COLOR azure][B]' + scrapedtitle + '[/B][/COLOR]'
itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType="episode",
title=scrapedtitle,
url=urlparse.urljoin(host, scrapedurl),
fulltitle=scrapedtitle,
show=scrapedtitle,
plot=item.plot,
fanart=item.fanart,
thumbnail=item.thumbnail))
# Service commands
if config.get_videolibrary_support() and len(itemlist) != 0 and not vvvvid_videos:
itemlist.append(
Item(channel=item.channel,
title=config.get_localized_string(30161),
text_color="yellow",
text_bold=True,
url=item.url,
action="add_serie_to_library",
extra="episodios",
show=item.show))
return itemlist
# ==================================================================
# -----------------------------------------------------------------
def findvideos(item):
logger.info("kod.animeforce findvideos")
itemlist = []
if item.extra:
data = httptools.downloadpage(item.url, headers=headers).data
blocco = scrapertools.find_single_match(data, r'%s(.*?)</tr>' % item.extra)
url = scrapertools.find_single_match(blocco, r'<a href="([^"]+)"[^>]*>')
if 'vvvvid' in url.lower():
itemlist = [Item(title='I Video VVVVID Non sono supportati', text_color="red")]
return itemlist
if 'http' not in url: url = "".join(['https:', url])
else:
url = item.url
if 'adf.ly' in url:
url = adfly.get_long_url(url)
elif 'bit.ly' in url:
url = httptools.downloadpage(url, only_headers=True, follow_redirects=False).headers.get("location")
if 'animeforce' in url:
headers.append(['Referer', item.url])
data = httptools.downloadpage(url, headers=headers).data
itemlist.extend(servertools.find_video_items(data=data))
for videoitem in itemlist:
videoitem.title = item.title + videoitem.title
videoitem.fulltitle = item.fulltitle
videoitem.show = item.show
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
videoitem.contentType = item.contentType
url = url.split('&')[0]
data = httptools.downloadpage(url, headers=headers).data
patron = """<source\s*src=(?:"|')([^"']+?)(?:"|')\s*type=(?:"|')video/mp4(?:"|')>"""
matches = re.compile(patron, re.DOTALL).findall(data)
headers.append(['Referer', url])
for video in matches:
itemlist.append(Item(channel=item.channel, action="play", title=item.title,
url=video + '|' + urllib.urlencode(dict(headers)), folder=False))
else:
itemlist.extend(servertools.find_video_items(data=url))
for videoitem in itemlist:
videoitem.title = item.title + videoitem.title
videoitem.fulltitle = item.fulltitle
videoitem.show = item.show
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
videoitem.contentType = item.contentType
return itemlist
# ==================================================================
# =================================================================
# Service functions
# -----------------------------------------------------------------
def scrapedAll(url="", patron=""):
data = httptools.downloadpage(url).data
MyPatron = patron
matches = re.compile(MyPatron, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
return matches
# =================================================================
# -----------------------------------------------------------------
def create_url(url, title, eptype=""):
logger.info()
if 'download' not in url:
url = url.replace('-streaming', '-download-streaming')
total_eps = ""
if not eptype:
url = re.sub(r'episodio?-?\d+-?(?:\d+-|)[oav]*', '', url)
else: # Only passes through if it is an episode
total_eps = scrapertools.find_single_match(title.lower(), r'\((\d+)-(?:episodio|sub-ita)\)') # This number will be removed from the url
if total_eps: url = url.replace('%s-' % total_eps, '')
url = re.sub(r'%s-?\d*-' % eptype.lower(), '', url)
url = url.replace('-fine', '')
return url, total_eps
# =================================================================
# -----------------------------------------------------------------
def clean_title(title, simpleClean=False):
logger.info()
title = title.replace("Streaming", "").replace("&", "")
title = title.replace("Download", "")
title = title.replace("Sub Ita", "")
cleantitle = title.replace("#038;", "").replace("amp;", "").strip()
if '(Fine)' in title:
cleantitle = cleantitle.replace('(Fine)', '').strip() + " (Fine)"
eptype = ""
if not simpleClean:
if "episodio" in title.lower():
eptype = scrapertools.find_single_match(title, "((?:Episodio?|OAV))")
cleantitle = re.sub(r'%s\s*\d*\s*(?:\(\d+\)|)' % eptype, '', title).strip()
if 'episodio' not in eptype.lower():
cleantitle = re.sub(r'Episodio?\s*\d+\s*(?:\(\d+\)|)\s*[\(OAV\)]*', '', cleantitle).strip()
if '(Fine)' in title:
cleantitle = cleantitle.replace('(Fine)', '')
return cleantitle, eptype
# =================================================================
# -----------------------------------------------------------------
def episode_item(item, scrapedtitle, scrapedurl, scrapedthumbnail):
scrapedtitle, eptype = clean_title(scrapedtitle, simpleClean=True)
cleantitle, eptype = clean_title(scrapedtitle)
# Build the URL
scrapedurl, total_eps = create_url(scrapedurl, scrapedtitle, eptype)
epnumber = ""
if 'episodio' in eptype.lower():
epnumber = scrapertools.find_single_match(scrapedtitle.lower(), r'episodio?\s*(\d+)')
eptype += ":? %s%s" % (epnumber, (r" \(%s\):?" % total_eps) if total_eps else "")
extra = "<tr>\s*<td[^>]+><strong>(?:[^>]+>|)%s(?:[^>]+>[^>]+>|[^<]*|[^>]+>)</strong>" % eptype
item = Item(channel=item.channel,
action="findvideos",
contentType="tvshow",
title=scrapedtitle,
text_color="azure",
url=scrapedurl,
fulltitle=cleantitle,
extra=extra,
show=cleantitle,
thumbnail=scrapedthumbnail)
return item
# =================================================================
# -----------------------------------------------------------------
def scrapedSingle(url="", single="", patron=""):
data = httptools.downloadpage(url).data
paginazione = scrapertools.find_single_match(data, single)
matches = re.compile(patron, re.DOTALL).findall(paginazione)
scrapertools.printMatches(matches)
return matches
# =================================================================
# -----------------------------------------------------------------
def Crea_Url(pagina="1", azione="ricerca", categoria="", nome=""):
# example
# chiamate.php?azione=ricerca&cat=&nome=&pag=
Stringa = host + "/chiamate.php?azione=" + azione + "&cat=" + categoria + "&nome=" + nome + "&pag=" + pagina
log("crea_Url", Stringa)
return Stringa
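`Crea_Url` builds the query string by plain concatenation, so a title containing spaces or `&` would produce a broken request. A safer equivalent (a suggested alternative, not what the channel currently does) percent-encodes the values with `urlencode`:

```python
from urllib.parse import urlencode  # Python 3; Python 2 uses urllib.urlencode

def build_url(host, pagina="1", azione="ricerca", categoria="", nome=""):
    # Same query as Crea_Url, but every value is escaped safely.
    qs = urlencode({"azione": azione, "cat": categoria,
                    "nome": nome, "pag": pagina})
    return host + "/chiamate.php?" + qs
```

For example, `build_url(host, nome="one piece")` encodes the space instead of emitting it verbatim into the URL.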
# =================================================================
# -----------------------------------------------------------------
def log(funzione="", stringa="", canale=""):
logger.debug("[" + canale + "].[" + funzione + "] " + stringa)
# =================================================================
# =================================================================
# service references
# -----------------------------------------------------------------
AnimeThumbnail = "http://img15.deviantart.net/f81c/i/2011/173/7/6/cursed_candies_anime_poster_by_careko-d3jnzg9.jpg"
AnimeFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
CategoriaThumbnail = "http://static.europosters.cz/image/750/poster/street-fighter-anime-i4817.jpg"
CategoriaFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
CercaThumbnail = "http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search"
CercaFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
AvantiTxt = config.get_localized_string(30992)
AvantiImg = "http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"


@@ -7,5 +7,72 @@
"thumbnail": "animepertutti.png",
"bannermenu": "animepertutti.png",
"categories": ["anime"],
"settings": []
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Includi ricerca globale",
"default": false,
"enabled": false,
"visible": false
},
{
"id": "include_in_newest_anime",
"type": "bool",
"label": "Includi in Novità - Anime",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_italiano",
"type": "bool",
"label": "Includi in Novità - Italiano",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "checklinks",
"type": "bool",
"label": "Verifica se i link esistono",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "checklinks_number",
"type": "list",
"label": "Numero di link da verificare",
"default": 1,
"enabled": true,
"visible": "eq(-1,true)",
"lvalues": [ "1", "3", "5", "10" ]
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostra link in lingua...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": ["Non filtrare", "IT"]
},
{
"id": "autorenumber",
"type": "bool",
"label": "@70712",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "autorenumber_mode",
"type": "bool",
"label": "@70688",
"default": false,
"enabled": true,
"visible": "eq(-1,true)"
}
]
}
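The settings schema above leans on Kodi's relative condition syntax: a `visible` value like `eq(-1,true)` hides a setting unless the setting one row above it equals `true`. A minimal sketch of that rule (hypothetical helper, not part of the addon):

```python
import re

# Hypothetical helper, not part of the addon: resolve a setting's "visible"
# field. In Kodi settings, "eq(-1,true)" means "visible only while the
# setting one row above equals true".
def is_visible(settings, index, values):
    rule = settings[index].get("visible", True)
    if isinstance(rule, bool):
        return rule
    m = re.match(r'eq\((-?\d+),(true|false)\)', str(rule))
    if not m:
        return True
    offset, expected = int(m.group(1)), m.group(2) == "true"
    # Look up the referenced setting by relative offset and compare its value
    ref_id = settings[index + offset]["id"]
    return values.get(ref_id) == expected

settings = [
    {"id": "checklinks", "type": "bool", "default": False},
    {"id": "checklinks_number", "type": "list", "visible": "eq(-1,true)"},
]
```

With this, `checklinks_number` only shows once `checklinks` is switched on, matching the JSON above.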


@@ -3,99 +3,184 @@
# Channel for animeleggendari
# ------------------------------------------------------------
from core import support
import re
from core import servertools, httptools, scrapertoolsV2, tmdb, support
from core.item import Item
from core.support import log, menu
from lib.js2py.host import jsfunctions
from platformcode import logger, config
from specials import autoplay, autorenumber
__channel__ = "animeleggendari"
host = support.config.get_channel_url(__channel__)
host = config.get_channel_url(__channel__)
headers = [['User-Agent', 'Mozilla/50.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]
list_servers = ['verystream','openload','rapidvideo','streamango']
# Required for Autoplay
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['verystream', 'openload', 'streamango']
list_quality = ['default']
@support.menu
checklinks = config.get_setting('checklinks', 'animeleggendari')
checklinks_number = config.get_setting('checklinks_number', 'animeleggendari')
def mainlist(item):
log()
anime = [
('Leggendari', ['/category/anime-leggendari/', 'peliculas']),
('ITA', ['/category/anime-ita/', 'peliculas']),
('SUB-ITA', ['/category/anime-sub-ita/', 'peliculas']),
('Conclusi', ['/category/serie-anime-concluse/', 'peliculas']),
('in Corso', ['/category/serie-anime-in-corso/', 'last_ep']),
('Genere', ['', 'genres'])
]
return locals()
itemlist = []
menu(itemlist, 'Anime Leggendari', 'peliculas', host + '/category/anime-leggendari/')
menu(itemlist, 'Anime ITA', 'peliculas', host + '/category/anime-ita/')
menu(itemlist, 'Anime SUB-ITA', 'peliculas', host + '/category/anime-sub-ita/')
menu(itemlist, 'Anime Conclusi', 'peliculas', host + '/category/serie-anime-concluse/')
menu(itemlist, 'Anime in Corso', 'peliculas', host + '/category/anime-in-corso/')
menu(itemlist, 'Genere', 'genres', host)
menu(itemlist, 'Cerca...', 'search')
autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)
support.channel_config(item, itemlist)
return itemlist
def search(item, texto):
support.log(texto)
log(texto)
item.url = host + "/?s=" + texto
try:
return peliculas(item)
# Continue the search on error
except:
import sys
for line in sys.exc_info():
support.logger.error("%s" % line)
logger.error("%s" % line)
return []
def last_ep(item):
log()
return support.scrape(item, '<a href="([^"]+)">([^<]+)<', ['url','title'],patron_block='<ul class="mh-tab-content-posts">(.*?)<\/ul>', action='findvideos')
def newest(categoria):
log(categoria)
itemlist = []
item = Item()
try:
if categoria == "anime":
item.url = host
item.action = "last_ep"
itemlist = last_ep(item)
if itemlist[-1].action == "last_ep":
itemlist.pop()
# Continue the search on error
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
@support.scrape
def genres(item):
blacklist = ['Contattaci','Privacy Policy', 'DMCA']
patronMenu = r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)<'
patronBlock = r'Generi</a>\s*<ul[^>]+>(?P<block>.*?)<\/ul>'
action = 'peliculas'
return locals()
itemlist = support.scrape(item, '<a href="([^"]+)">([^<]+)<', ['url', 'title'], action='peliculas', patron_block=r'Generi.*?<ul.*?>(.*?)<\/ul>', blacklist=['Contattaci','Privacy Policy', 'DMCA'])
return support.thumb(itemlist)
def peliculas(item):
log()
itemlist = []
@support.scrape
def peliculas(item):
anime = True
blacklist = ['top 10 anime da vedere']
if item.url != host: patronBlock = r'<div id="main-content(?P<block>.*?)<aside'
patron = r'<figure class="(?:mh-carousel-thumb|mh-posts-grid-thumb)"> <a class="[^"]+" href="(?P<url>[^"]+)" title="(?P<title>.*?)(?: \((?P<year>\d+)\))? (?:(?P<lang>SUB ITA|ITA))(?: (?P<title2>[Mm][Oo][Vv][Ii][Ee]))?[^"]*"><img[^s]+src="(?P<thumb>[^"]+)"[^>]+'
def itemHook(item):
if 'movie' in item.title.lower():
item.title = support.re.sub(' - [Mm][Oo][Vv][Ii][Ee]|[Mm][Oo][Vv][Ii][Ee]','',item.title)
item.title += support.typo('Movie','_ () bold')
item.contentType = 'movie'
item.action = 'findvideos'
return item
patronNext = r'<a class="next page-numbers" href="([^"]+)">'
action = 'episodios'
return locals()
matches, data = support.match(item, r'<a class="[^"]+" href="([^"]+)" title="([^"]+)"><img[^s]+src="([^"]+)"[^>]+')
for url, title, thumb in matches:
title = scrapertoolsV2.decodeHtmlentities(title.strip()).replace("streaming", "")
lang = scrapertoolsV2.find_single_match(title, r"((?:SUB ITA|ITA))")
videoType = ''
if 'movie' in title.lower():
videoType = ' - (MOVIE)'
if 'ova' in title.lower():
videoType = ' - (OAV)'
cleantitle = title.replace(lang, "").replace('(Streaming & Download)', '').replace('( Streaming & Download )', '').replace('OAV', '').replace('OVA', '').replace('MOVIE', '').strip()
if not videoType :
contentType="tvshow"
action="episodios"
else:
contentType="movie"
action="findvideos"
if not title.lower() in blacklist:
itemlist.append(
Item(channel=item.channel,
action=action,
contentType=contentType,
title=support.typo(cleantitle + videoType, 'bold') + support.typo(lang,'_ [] color kod'),
fulltitle=cleantitle,
show=cleantitle,
url=url,
thumbnail=thumb))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
autorenumber.renumber(itemlist)
support.nextPage(itemlist, item, data, r'<a class="next page-numbers" href="([^"]+)">')
return itemlist
@support.scrape
def episodios(item):
url = item.url
anime = True
patronBlock = r'(?:<p style="text-align: left;">|<div class="pagination clearfix">\s*)(?P<block>.*?)</span></a></div>'
patron = r'(?:<a href="(?P<url>[^"]+)"[^>]+>)?<span class="pagelink">(?P<episode>\d+)</span>'
def itemHook(item):
if not item.url:
item.url = url
item.title = support.typo('Episodio ', 'bold') + item.title
return item
return locals()
log()
itemlist = []
data = httptools.downloadpage(item.url).data
block = scrapertoolsV2.find_single_match(data, r'(?:<p style="text-align: left;">|<div class="pagination clearfix">\s*)(.*?)</span></a></div>')
itemlist.append(
Item(channel=item.channel,
action='findvideos',
contentType='episode',
title=support.typo('Episodio 1', 'bold'),
fulltitle=item.title,
url=item.url,
thumbnail=item.thumbnail))
if block:
matches = re.compile(r'<a href="([^"]+)".*?><span class="pagelink">(\d+)</span></a>', re.DOTALL).findall(data)
for url, number in matches:
itemlist.append(
Item(channel=item.channel,
action='findvideos',
contentType='episode',
title=support.typo('Episodio ' + number,'bold'),
fulltitle=item.title,
url=url,
thumbnail=item.thumbnail))
autorenumber.renumber(itemlist, item)
support.videolibrary(itemlist, item)
return itemlist
def findvideos(item):
support.log()
log()
data = ''
matches = support.match(item, 'str="([^"]+)"')[0]
if matches:
for match in matches:
data += str(jsfunctions.unescape(support.re.sub('@|g','%', match)))
data += str(jsfunctions.unescape(re.sub('@|g','%', match)))
data += str(match)
log('DATA',data)
if 'animepertutti' in data:
log('ANIMEPERTUTTI!')
else:
data = ''
return support.server(item,data)
itemlist = support.server(item,data)
if checklinks:
itemlist = servertools.check_list_links(itemlist, checklinks_number)
# itemlist = filtertools.get_links(itemlist, item, list_language)
autoplay.start(itemlist, item)
return itemlist
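The `checklinks`/`checklinks_number` gate above hands the list to `servertools.check_list_links`, which verifies at most that many links. A simplified stand-in (the real helper annotates titles rather than dropping the rest of the list; `is_alive` is a hypothetical probe):

```python
# Simplified stand-in for the checklinks gate (the real
# servertools.check_list_links marks dead links in the title instead of
# dropping entries; is_alive is a hypothetical probe function).
def filter_links(links, checklinks, checklinks_number, is_alive):
    if not checklinks:
        return links
    # Verify only the first checklinks_number links, keep those that respond
    return [link for link in links[:checklinks_number] if is_alive(link)]
```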


@@ -8,14 +8,6 @@
"banner": "animesaturn.png",
"categories": ["anime"],
"settings": [
{
"id": "modo_grafico",
"type": "bool",
"label": "Cerca informazioni extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "channel_host",
"type": "text",
@@ -65,6 +57,15 @@
"visible": "eq(-1,true)",
"lvalues": [ "1", "3", "5", "10" ]
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostra link in lingua...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": ["Non filtrare","IT"]
},
{
"id": "autorenumber",
"type": "bool",


@@ -3,117 +3,377 @@
# Channel for AnimeSaturn
# Thanks to 4l3x87
# ----------------------------------------------------------
import re
from core import support
import urlparse
import channelselector
from core import httptools, tmdb, support, scrapertools, jsontools
from core.item import Item
from core.support import log
from platformcode import logger, config
from specials import autoplay, autorenumber
__channel__ = "animesaturn"
host = support.config.get_setting("channel_host", __channel__)
headers={'X-Requested-With': 'XMLHttpRequest'}
host = config.get_setting("channel_host", __channel__)
headers = [['Referer', host]]
IDIOMAS = {'Italiano': 'ITA'}
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['openload', 'fembed', 'animeworld']
list_quality = ['default', '480p', '720p', '1080p']
@support.menu
def mainlist(item):
anime = ['/animelist?load_all=1',
('Più Votati',['/toplist','menu', 'top']),
('In Corso',['/animeincorso','peliculas','incorso']),
('Ultimi Episodi',['/fetch_pages.php?request=episodes','peliculas','updated'])]
return locals()
@support.scrape
def search(item, texto):
search = texto
item.contentType = 'tvshow'
patron = r'href="(?P<url>[^"]+)"[^>]+>[^>]+>(?P<title>[^<|(]+)(?:(?P<lang>\(([^\)]+)\)))?<|\)'
action = 'check'
return locals()
def newest(categoria):
support.log()
log()
itemlist = []
item = support.Item()
support.menu(itemlist, 'Novità bold', 'ultimiep', "%s/fetch_pages.php?request=episodes" % host, 'tvshow')
support.menu(itemlist, 'Anime bold', 'lista_anime', "%s/animelist?load_all=1" % host)
support.menu(itemlist, 'Archivio A-Z submenu', 'list_az', '%s/animelist?load_all=1' % host, args=['tvshow', 'alfabetico'])
support.menu(itemlist, 'Cerca', 'search', host)
support.aplay(item, itemlist, list_servers, list_quality)
support.channel_config(item, itemlist)
return itemlist
# ----------------------------------------------------------------------------------------------------------------
def cleantitle(scrapedtitle):
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.strip())
scrapedtitle = scrapedtitle.replace('[HD]', '').replace('’', "'").replace('×', 'x').replace('"', "'")
year = scrapertools.find_single_match(scrapedtitle, '\((\d{4})\)')
if year:
scrapedtitle = scrapedtitle.replace('(' + year + ')', '')
return scrapedtitle.strip()
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def lista_anime(item):
log()
itemlist = []
PERPAGE = 15
p = 1
if '{}' in item.url:
item.url, p = item.url.split('{}')
p = int(p)
if '||' in item.url:
series = item.url.split('\n\n')
matches = []
for i, serie in enumerate(series):
matches.append(serie.split('||'))
else:
# Extract the contents
patron = r'<a href="([^"]+)"[^>]*?>[^>]*?>(.+?)<'
matches = support.match(item, patron, headers=headers)[0]
scrapedplot = ""
scrapedthumbnail = ""
for i, (scrapedurl, scrapedtitle) in enumerate(matches):
if (p - 1) * PERPAGE > i: continue
if i >= p * PERPAGE: break
title = cleantitle(scrapedtitle).replace('(ita)', '(ITA)')
movie = False
showtitle = title
if '(ITA)' in title:
title = title.replace('(ITA)', '').strip()
showtitle = title
else:
title += ' ' + support.typo('Sub-ITA', '_ [] color kod')
infoLabels = {}
if 'Akira' in title:
movie = True
infoLabels['year'] = 1988
if 'Dragon Ball Super Movie' in title:
movie = True
infoLabels['year'] = 2019
itemlist.append(
Item(channel=item.channel,
extra=item.extra,
action="episodios" if movie == False else 'findvideos',
title=title,
url=scrapedurl,
thumbnail=scrapedthumbnail,
fulltitle=showtitle,
show=showtitle,
contentTitle=showtitle,
plot=scrapedplot,
contentType='episode' if movie == False else 'movie',
originalUrl=scrapedurl,
infoLabels=infoLabels,
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
autorenumber.renumber(itemlist)
# Pagination
if len(matches) >= p * PERPAGE:
support.nextPage(itemlist, item, next_page=(item.url + '{}' + str(p + 1)))
return itemlist
# ================================================================================================================
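`lista_anime` pages client-side: the current page number rides on the URL itself after a `{}` separator and is split off again on the next call. A sketch of that scheme (assuming `{}` never occurs in a real site URL):

```python
PERPAGE = 15  # items per page, as in lista_anime

# The page number is appended to the url after a '{}' separator and
# recovered on each call; no separator means page 1.
def split_page(url):
    if '{}' in url:
        url, p = url.split('{}')
        return url, int(p)
    return url, 1

def page_slice(matches, p):
    # 1-based page p of the scraped matches
    return matches[(p - 1) * PERPAGE: p * PERPAGE]

def next_page_url(url, p):
    return url + '{}' + str(p + 1)
```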
# ----------------------------------------------------------------------------------------------------------------
def episodios(item):
log()
itemlist = []
data = httptools.downloadpage(item.url, headers=headers, ignore_response_code=True).data
anime_id = scrapertools.find_single_match(data, r'\?anime_id=(\d+)')
# movie or series
movie = scrapertools.find_single_match(data, r'Episodi:</b>\s(\d*)\sMovie')
data = httptools.downloadpage(
host + "/loading_anime?anime_id=" + anime_id,
headers={
'X-Requested-With': 'XMLHttpRequest'
}).data
patron = r'<td style="[^"]+"><b><strong" style="[^"]+">(.+?)</b></strong></td>\s*'
patron += r'<td style="[^"]+"><a href="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedtitle, scrapedurl in matches:
scrapedtitle = cleantitle(scrapedtitle)
scrapedtitle = re.sub(r'<[^>]*?>', '', scrapedtitle)
scrapedtitle = '[B]' + scrapedtitle + '[/B]'
itemlist.append(
Item(
channel=item.channel,
action="findvideos",
contentType="episode",
title=scrapedtitle,
url=urlparse.urljoin(host, scrapedurl),
fulltitle=scrapedtitle,
show=scrapedtitle,
plot=item.plot,
fanart=item.thumbnail,
thumbnail=item.thumbnail))
if ((len(itemlist) == 1 and 'Movie' in itemlist[0].title) or movie) and item.contentType != 'movie':
item.url = itemlist[0].url
item.contentType = 'movie'
return findvideos(item)
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
autorenumber.renumber(itemlist, item)
support.videolibrary(itemlist, item, 'bold color kod')
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findvideos(item):
log()
originalItem = item
if item.contentType == 'movie':
episodes = episodios(item)
if len(episodes) > 0:
item.url = episodes[0].url
itemlist = []
data = httptools.downloadpage(item.url, headers=headers, ignore_response_code=True).data
data = re.sub(r'\n|\t|\s+', ' ', data)
patron = r'<a href="([^"]+)"><div class="downloadestreaming">'
url = scrapertools.find_single_match(data, patron)
data = httptools.downloadpage(url, headers=headers, ignore_response_code=True).data
data = re.sub(r'\n|\t|\s+', ' ', data)
itemlist = support.server(item, data=data)
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def ultimiep(item):
log()
itemlist = []
p = 1
if '{}' in item.url:
item.url, p = item.url.split('{}')
p = int(p)
post = "page=%s" % p if p > 1 else None
data = httptools.downloadpage(
item.url, post=post, headers={
'X-Requested-With': 'XMLHttpRequest'
}).data
patron = r"""<a href='[^']+'><div class="locandina"><img alt="[^"]+" src="([^"]+)" title="[^"]+" class="grandezza"></div></a>\s*"""
patron += r"""<a href='([^']+)'><div class="testo">(.+?)</div></a>\s*"""
patron += r"""<a href='[^']+'><div class="testo2">(.+?)</div></a>"""
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedthumbnail, scrapedurl, scrapedtitle1, scrapedtitle2 in matches:
scrapedtitle1 = cleantitle(scrapedtitle1)
scrapedtitle2 = cleantitle(scrapedtitle2)
scrapedtitle = scrapedtitle1 + ' - ' + scrapedtitle2 + ''
title = scrapedtitle
showtitle = scrapedtitle
if '(ITA)' in title:
title = title.replace('(ITA)', '').strip()
showtitle = title
else:
title += ' ' + support.typo('Sub-ITA', '_ [] color kod')
itemlist.append(
Item(channel=item.channel,
contentType="episode",
action="findvideos",
title=title,
url=scrapedurl,
fulltitle=scrapedtitle1,
show=showtitle,
thumbnail=scrapedthumbnail))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
# Pages
patronvideos = r'data-page="(\d+)" title="Next">Pagina Successiva'
next_page = scrapertools.find_single_match(data, patronvideos)
if next_page:
support.nextPage(itemlist, item, next_page=(item.url + '{}' + next_page))
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def newest(categoria):
log(categoria)
itemlist = []
item = Item()
item.url = host
item.extra = ''
try:
if categoria == "anime":
item.url = host + '/fetch_pages.php?request=episodes'
item.args = "updated"
return peliculas(item)
# Continue the search on error
item.url = "%s/fetch_pages.php?request=episodes" % host
item.action = "ultimiep"
itemlist = ultimiep(item)
if itemlist[-1].action == "ultimiep":
itemlist.pop()
# Continue the search on error
except:
import sys
for line in sys.exc_info():
support.logger.error("{0}".format(line))
logger.error("{0}".format(line))
return []
return itemlist
@support.scrape
def menu(item):
patronMenu = r'u>(?P<title>[^<]+)<u>(?P<url>.*?)</div> </div>'
action = 'peliculas'
return locals()
# ================================================================================================================
@support.scrape
def peliculas(item):
anime = True
if item.args == 'updated':
post = "page=" + str(item.page if item.page else 1) if item.page > 1 else None
page, data = support.match(item, r'data-page="(\d+)" title="Next">', post=post, headers=headers)
patron = r'<img alt="[^"]+" src="(?P<thumb>[^"]+)" [^>]+></div></a>\s*<a href="(?P<url>[^"]+)"><div class="testo">(?P<title>[^\(<]+)(?:(?P<lang>\(([^\)]+)\)))?</div></a>\s*<a href="[^"]+"><div class="testo2">[^\d]+(?P<episode>\d+)</div></a>'
if page: nextpage = page
action = 'findvideos'
elif item.args == 'top':
data = item.url
patron = r'<a href="(?P<url>[^"]+)">[^>]+>(?P<title>[^<\(]+)(?:\((?P<year>[^\)]+)\))?</div></a><div class="numero">(?P<title2>[^<]+)</div>.*?src="(?P<thumb>[^"]+)"'
action = 'check'
else:
pagination = ''
if item.args == 'incorso': patron = r'"slider_title" href="(?P<url>[^"]+)"><img src="(?P<thumb>[^"]+)"[^>]+>(?P<title>[^\(<]+)(?:\((?P<year>\d+)\))?</a>'
else: patron = r'href="(?P<url>[^"]+)"[^>]+>[^>]+>(?P<title>[^<|(]+)(?:(?P<lang>\(([^\)]+)\)))?<|\)'
action = 'check'
return locals()
def check(item):
movie, data = support.match(item, r'Episodi:</b> (\d*) Movie')
anime_id = support.match(data, r'anime_id=(\d+)')[0][0]
item.url = host + "/loading_anime?anime_id=" + anime_id
support.log('MOVIE= ', movie)
if movie:
item.contentType = 'movie'
episodes = episodios(item)
if len(episodes) > 0: item.url = episodes[0].url
return findvideos(item)
else:
return episodios(item)
@support.scrape
def episodios(item):
if item.contentType != 'movie': anime = True
patron = r'<strong" style="[^"]+">(?P<title>[^<]+)</b></strong></td>\s*<td style="[^"]+"><a href="(?P<url>[^"]+)"'
return locals()
def findvideos(item):
support.log(item)
# ----------------------------------------------------------------------------------------------------------------
def search_anime(item, texto):
log(texto)
itemlist = []
url = support.match(item, r'<a href="([^"]+)"><div class="downloadestreaming">',headers=headers)[0]
if url: item.url = url[0]
return support.server(item)
data = httptools.downloadpage(host + "/index.php?search=1&key=%s" % texto).data
jsondata = jsontools.load(data)
for title in jsondata:
data = str(httptools.downloadpage("%s/templates/header?check=1" % host, post="typeahead=%s" % title).data)
if 'Anime non esistente' in data:
continue
else:
title = title.replace('(ita)', '(ITA)')
showtitle = title
if '(ITA)' in title:
title = title.replace('(ITA)', '').strip()
showtitle = title
else:
title += ' ' + support.typo('Sub-ITA', '_ [] color kod')
url = "%s/anime/%s" % (host, data)
itemlist.append(
Item(
channel=item.channel,
contentType="episode",
action="episodios",
title=title,
url=url,
fulltitle=title,
show=showtitle,
thumbnail=""))
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def search(item, texto):
log(texto)
itemlist = []
try:
return search_anime(item, texto)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def list_az(item):
log()
itemlist = []
alphabet = dict()
# Articles
patron = r'<a href="([^"]+)"[^>]*?>[^>]*?>(.+?)<'
matches = support.match(item, patron, headers=headers)[0]
for i, (scrapedurl, scrapedtitle) in enumerate(matches):
letter = scrapedtitle[0].upper()
if letter not in alphabet:
alphabet[letter] = []
alphabet[letter].append(scrapedurl + '||' + scrapedtitle)
for letter in sorted(alphabet):
itemlist.append(
Item(channel=item.channel,
action="lista_anime",
url='\n\n'.join(alphabet[letter]),
title=letter,
fulltitle=letter))
return itemlist
# ================================================================================================================
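`list_az` and `lista_anime` share an ad-hoc serialization: each letter bucket packs `url||title` pairs joined by blank lines, which `lista_anime` splits back apart. A round-trip sketch (assuming titles contain neither `||` nor blank lines):

```python
# Round-trip sketch of the bucket serialization shared by list_az and
# lista_anime (assumption: titles contain neither '||' nor blank lines).
def pack(entries):
    # entries: list of (url, title) pairs for one letter
    return '\n\n'.join(url + '||' + title for url, title in entries)

def unpack(packed):
    return [tuple(chunk.split('||')) for chunk in packed.split('\n\n')]
```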


@@ -31,38 +31,6 @@
"default": true,
"enabled": true,
"visible": true
}, {
"id": "checklinks",
"type": "bool",
"label": "Verifica se i link esistono",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "checklinks_number",
"type": "list",
"label": "Numero de link da verificare",
"default": 1,
"enabled": true,
"visible": "eq(-1,true)",
"lvalues": [ "1", "3", "5", "10" ]
},
{
"id": "autorenumber",
"type": "bool",
"label": "@70712",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "autorenumber_mode",
"type": "bool",
"label": "@70688",
"default": false,
"enabled": true,
"visible": "eq(-1,true)"
}
]
}


@@ -1,37 +1,65 @@
# -*- coding: utf-8 -*-
# Thanks to the Icarus crew
# ------------------------------------------------------------
# Channel for AnimeSubIta
# ------------------------------------------------------------
import re
import urllib
import urlparse
from core import support
from core import httptools, scrapertools, tmdb, support
from core.item import Item
from platformcode import logger, config
__channel__ = "animesubita"
host = support.config.get_channel_url(__channel__)
headers = {'Upgrade-Insecure-Requests': '1', 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0'}
host = config.get_channel_url(__channel__)
PERPAGE = 20
list_servers = ['directo']
list_quality = ['default']
@support.menu
# ----------------------------------------------------------------------------------------------------------------
def mainlist(item):
anime = ['/lista-anime/',
('Ultimi Episodi',['/category/ultimi-episodi/', 'peliculas', 'updated']),
('in Corso',['/category/anime-in-corso/', 'peliculas', 'alt']),
('Generi',['/generi/', 'genres', 'alt'])]
return locals()
logger.info()
itemlist = [Item(channel=item.channel,
action="lista_anime_completa",
title=support.color("Lista Anime", "azure"),
url="%s/lista-anime/" % host,
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
action="ultimiep",
title=support.color("Ultimi Episodi", "azure"),
url="%s/category/ultimi-episodi/" % host,
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
action="lista_anime",
title=support.color("Anime in corso", "azure"),
url="%s/category/anime-in-corso/" % host,
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
action="categorie",
title=support.color("Categorie", "azure"),
url="%s/generi/" % host,
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
action="search",
title=support.color("Cerca anime ...", "yellow"),
extra="anime",
thumbnail="http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search")
]
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def newest(categoria):
support.log(categoria)
logger.info()
itemlist = []
item = support.Item()
item = Item()
try:
if categoria == "anime":
item.url = host
item.args = "updated"
itemlist = peliculas(item)
item.action = "ultimiep"
itemlist = ultimiep(item)
if itemlist[-1].action == "ultimiep":
itemlist.pop()
@@ -39,92 +67,277 @@ def newest(categoria):
except:
import sys
for line in sys.exc_info():
support.logger.error("{0}".format(line))
logger.error("{0}".format(line))
return []
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def search(item, texto):
support.log(texto)
logger.info()
item.url = host + "/?s=" + texto
item.args = 'alt'
try:
return peliculas(item)
return lista_anime(item)
# Continue the search on error
except:
import sys
for line in sys.exc_info():
support.logger.error("%s" % line)
logger.error("%s" % line)
return []
@support.scrape
def genres(item):
blacklist= ['Anime In Corso','Ultimi Episodi']
patronMenu=r'<li><a title="[^"]+" href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>'
action = 'peliculas'
return locals()
# ================================================================================================================
@support.scrape
def peliculas(item):
anime = True
if item.args == 'updated':
patron = r'<div class="post-thumbnail">\s*<a href="(?P<url>[^"]+)" title="(?P<title>.*?)\s*(?P<episode>Episodio \d+)[^"]+"[^>]*>\s*<img[^src]+src="(?P<thumb>[^"]+)"'
patronNext = r'<link rel="next" href="([^"]+)"\s*/>'
action = 'findvideos'
elif item.args == 'alt':
# debug = True
patron = r'<div class="post-thumbnail">\s*<a href="(?P<url>[^"]+)" title="(?P<title>.*?)(?: [Oo][Aa][Vv])?(?:\s*(?P<lang>[Ss][Uu][Bb].[Ii][Tt][Aa]))[^"]+">\s*<img[^src]+src="(?P<thumb>[^"]+)"'
patronNext = r'<link rel="next" href="([^"]+)"\s*/>'
action = 'episodios'
else:
pagination = ''
patronBlock = r'<ul class="lcp_catlist"[^>]+>(?P<block>.*?)</ul>'
patron = r'<a href="(?P<url>[^"]+)"[^>]+>(?P<title>.*?)(?: [Oo][Aa][Vv])?(?:\s*(?P<lang>[Ss][Uu][Bb].[Ii][Tt][Aa])[^<]+)?</a>'
action = 'episodios'
return locals()
@support.scrape
def episodios(item):
anime = True
patron = r'<td style="[^"]*?">\s*.*?<strong>(?P<episode>[^<]+)</strong>\s*</td>\s*<td[^>]+>\s*<a href="(?P<url>[^"]+)"[^>]+>\s*<img src="(?P<thumb>[^"]+?)"[^>]+>'
return locals()
def findvideos(item):
support.log(item)
# ----------------------------------------------------------------------------------------------------------------
def categorie(item):
logger.info()
itemlist = []
if item.args == 'updated':
ep = support.match(item.fulltitle,r'(Episodio\s*\d+)')[0][0]
item.url = support.re.sub(r'episodio-\d+-|oav-\d+-', '',item.url)
if 'streaming' not in item.url: item.url = item.url.replace('sub-ita','sub-ita-streaming')
item.url = support.match(item, r'<a href="([^"]+)"[^>]+>', ep + '(.*?)</tr>', )[0][0]
data = httptools.downloadpage(item.url).data
patron = r'<li><a title="[^"]+" href="([^"]+)">([^<]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
urls = support.match(item.url, r'(episodio\d*.php.*)')[0]
for url in urls:
url = host + '/' + url
headers['Referer'] = url
data = support.match(item, headers=headers, url=url)[1]
cookies = ""
matches = support.re.compile('(.%s.*?)\n' % host.replace("http://", "").replace("www.", ""), support.re.DOTALL).findall(support.config.get_cookie_data())
for cookie in matches:
cookies += cookie.split('\t')[5] + "=" + cookie.split('\t')[6] + ";"
headers['Cookie'] = cookies[:-1]
url = support.match(data, r'<source src="([^"]+)"[^>]+>')[0][0] + '|' + support.urllib.urlencode(headers)
for scrapedurl, scrapedtitle in matches:
itemlist.append(
support.Item(channel=item.channel,
action="play",
title='diretto',
quality='',
url=url,
server='directo',
fulltitle=item.fulltitle,
show=item.show))
Item(channel=item.channel,
action="lista_anime",
title=scrapedtitle.replace('Anime', '').strip(),
text_color="azure",
url=scrapedurl,
thumbnail=item.thumbnail,
folder=True))
return support.server(item,url,itemlist)
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def ultimiep(item):
logger.info("ultimiep")
itemlist = lista_anime(item, False, False)
for itm in itemlist:
title = scrapertools.decodeHtmlentities(itm.title)
# Clean up the title
title = title.replace("Streaming", "").replace("&", "")
title = title.replace("Download", "")
title = title.replace("Sub Ita", "").strip()
eptype = scrapertools.find_single_match(title, "((?:Episodio?|OAV))")
cleantitle = re.sub(r'%s\s*\d*\s*(?:\(\d+\)|)' % eptype, '', title).strip()
# Build the URL
url = re.sub(r'%s-?\d*-' % eptype.lower(), '', itm.url)
if "-streaming" not in url:
url = url.replace("sub-ita", "sub-ita-streaming")
epnumber = ""
if 'episodio' in eptype.lower():
epnumber = scrapertools.find_single_match(title.lower(), r'episodio?\s*(\d+)')
eptype += ":? " + epnumber
extra = "<tr>\s*<td[^>]+><strong>(?:[^>]+>|)%s(?:[^>]+>[^>]+>|[^<]*|[^>]+>)</strong>" % eptype
itm.title = support.color(title, 'azure').strip()
itm.action = "findvideos"
itm.url = url
itm.fulltitle = cleantitle
itm.extra = extra
itm.show = re.sub(r'Episodio\s*', '', title)
itm.thumbnail = item.thumbnail
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
# ================================================================================================================
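`ultimiep` rewrites each episode URL into its series page by stripping the `episodio-N-` (or `oav-N-`) segment and forcing the `-streaming` suffix onto the slug. A sketch with hypothetical sample URLs:

```python
import re

# Sketch of the URL rewrite done in ultimiep (sample URLs are hypothetical):
# remove the "episodio-N-" / "oav-N-" segment to reach the series page and
# make sure the slug carries the "-streaming" suffix.
def series_url(ep_url, eptype='episodio'):
    url = re.sub(r'%s-?\d*-' % eptype, '', ep_url)
    if '-streaming' not in url:
        url = url.replace('sub-ita', 'sub-ita-streaming')
    return url
```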
# ----------------------------------------------------------------------------------------------------------------
def lista_anime(item, nextpage=True, show_lang=True):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
blocco = scrapertools.find_single_match(data, r'<div class="post-list group">(.*?)</nav><!--/.pagination-->')
# patron = r'<a href="([^"]+)" title="([^"]+)">\s*<img[^s]+src="([^"]+)"[^>]+>' # Pattern with thumbnail; Kodi does not download the images from the site
patron = r'<a href="([^"]+)" title="([^"]+)">'
matches = re.compile(patron, re.DOTALL).findall(blocco)
for scrapedurl, scrapedtitle in matches:
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
scrapedtitle = re.sub(r'\s+', ' ', scrapedtitle)
# Clean up the title
scrapedtitle = scrapedtitle.replace("Streaming", "").replace("&", "")
scrapedtitle = scrapedtitle.replace("Download", "")
lang = scrapertools.find_single_match(scrapedtitle, r"([Ss][Uu][Bb]\s*[Ii][Tt][Aa])")
scrapedtitle = scrapedtitle.replace("Sub Ita", "").strip()
eptype = scrapertools.find_single_match(scrapedtitle, "((?:Episodio?|OAV))")
cleantitle = re.sub(r'%s\s*\d*\s*(?:\(\d+\)|)' % eptype, '', scrapedtitle)
cleantitle = cleantitle.replace(lang, "").strip()
itemlist.append(
Item(channel=item.channel,
action="episodi",
contentType="tvshow" if 'oav' not in scrapedtitle.lower() else "movie",
title=scrapedtitle.replace(lang, "(%s)" % support.color(lang, "red") if show_lang else "").strip(),
fulltitle=cleantitle,
url=scrapedurl,
show=cleantitle,
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
if nextpage:
patronvideos = r'<link rel="next" href="([^"]+)"\s*/>'
matches = re.compile(patronvideos, re.DOTALL).findall(data)
if len(matches) > 0:
scrapedurl = matches[0]
itemlist.append(
Item(channel=item.channel,
action="lista_anime",
title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
url=scrapedurl,
thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png",
folder=True))
return itemlist
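The title clean-up in `lista_anime()` above can be sketched in isolation: the language tag ("Sub Ita") is located with a regex and stripped from the display title together with the "Streaming"/"Download" noise. `clean_title` below is an illustrative helper, not part of the channel:

```python
import re

def clean_title(raw):
    # Locate the language tag (any capitalization of "Sub Ita")
    m = re.search(r'[Ss][Uu][Bb]\s*[Ii][Tt][Aa]', raw)
    lang = m.group(0) if m else ''
    # Collapse whitespace and drop the site's boilerplate words
    title = re.sub(r'\s+', ' ', raw).replace('Streaming', '').replace('Download', '')
    if lang:
        title = title.replace(lang, '')
    return title.strip(), lang
```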
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def lista_anime_completa(item):
logger.info()
itemlist = []
p = 1
if '{}' in item.url:
item.url, p = item.url.split('{}')
p = int(p)
data = httptools.downloadpage(item.url).data
blocco = scrapertools.find_single_match(data, r'<ul class="lcp_catlist"[^>]+>(.*?)</ul>')
patron = r'<a href="([^"]+)"[^>]+>([^<]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(blocco)
for i, (scrapedurl, scrapedtitle) in enumerate(matches):
if (p - 1) * PERPAGE > i: continue
if i >= p * PERPAGE: break
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.strip())
cleantitle = scrapedtitle.replace("Sub Ita Streaming", "").replace("Ita Streaming", "")
itemlist.append(
Item(channel=item.channel,
action="episodi",
contentType="tvshow" if 'oav' not in scrapedtitle.lower() else "movie",
title=support.color(scrapedtitle, 'azure'),
fulltitle=cleantitle,
show=cleantitle,
url=scrapedurl,
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
if len(matches) >= p * PERPAGE:
scrapedurl = item.url + '{}' + str(p + 1)
itemlist.append(
Item(channel=item.channel,
extra=item.extra,
action="lista_anime_completa",
title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
url=scrapedurl,
thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png",
folder=True))
return itemlist
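`lista_anime_completa()` pages through the full match list on the client side: the page number travels inside the URL after a `{}` marker, and the list is sliced `PERPAGE` items at a time. The helper below is a sketch of that scheme (names are illustrative, not the channel's API):

```python
PERPAGE = 20  # page size, mirroring the channel's constant

def paginate(url, matches):
    # Extract the page number stashed after the '{}' marker, default to 1
    page = 1
    if '{}' in url:
        url, page = url.split('{}')
        page = int(page)
    start, end = (page - 1) * PERPAGE, page * PERPAGE
    visible = matches[start:end]
    # Build the "next page" URL only if more results remain
    next_url = url + '{}' + str(page + 1) if len(matches) > end else None
    return visible, next_url
```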
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def episodi(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = r'<td style="[^"]*?">\s*.*?<strong>(.*?)</strong>.*?\s*</td>\s*<td style="[^"]*?">\s*<a href="([^"]+?)"[^>]+>\s*<img.*?src="([^"]+?)".*?/>\s*</a>\s*</td>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedtitle, scrapedurl, scrapedimg in matches:
if 'nodownload' in scrapedimg or 'nostreaming' in scrapedimg:
continue
if 'vvvvid' in scrapedurl.lower():
itemlist.append(Item(title='I Video VVVVID Non sono supportati'))
continue
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
scrapedtitle = re.sub(r'<[^>]*?>', '', scrapedtitle)
scrapedtitle = '[COLOR azure][B]' + scrapedtitle + '[/B][/COLOR]'
itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType="episode",
title=scrapedtitle,
url=urlparse.urljoin(host, scrapedurl),
fulltitle=item.title,
show=scrapedtitle,
plot=item.plot,
fanart=item.thumbnail,
thumbnail=item.thumbnail))
# Service commands
if config.get_videolibrary_support() and len(itemlist) != 0:
itemlist.append(
Item(channel=item.channel,
title="[COLOR lightblue]%s[/COLOR]" % config.get_localized_string(30161),
url=item.url,
action="add_serie_to_library",
extra="episodios",
show=item.show))
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findvideos(item):
logger.info()
itemlist = []
headers = {'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0'}
if item.extra:
data = httptools.downloadpage(item.url, headers=headers).data
blocco = scrapertools.find_single_match(data, r'%s(.*?)</tr>' % item.extra)
item.url = scrapertools.find_single_match(blocco, r'<a href="([^"]+)"[^>]+>')
patron = r'http:\/\/link[^a]+animesubita[^o]+org\/[^\/]+\/.*?(episodio\d*)[^p]+php(\?.*)'
for phpfile, scrapedurl in re.findall(patron, item.url, re.DOTALL):
url = "%s/%s.php%s" % (host, phpfile, scrapedurl)
headers['Referer'] = url
data = httptools.downloadpage(url, headers=headers).data
# ------------------------------------------------
cookies = ""
matches = re.compile('(.%s.*?)\n' % host.replace("http://", "").replace("www.", ""), re.DOTALL).findall(config.get_cookie_data())
for cookie in matches:
name = cookie.split('\t')[5]
value = cookie.split('\t')[6]
cookies += name + "=" + value + ";"
headers['Cookie'] = cookies[:-1]
# ------------------------------------------------
scrapedurl = scrapertools.find_single_match(data, r'<source src="([^"]+)"[^>]+>')
url = scrapedurl + '|' + urllib.urlencode(headers)
itemlist.append(
Item(channel=item.channel,
action="play",
text_color="azure",
title="[%s] %s" % (support.color("Diretto", "orange"), item.title),
fulltitle=item.fulltitle,
url=url,
thumbnail=item.thumbnail,
fanart=item.thumbnail,
plot=item.plot))
return itemlist
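The cookie block in `findvideos()` rebuilds a `Cookie` header from the Netscape-format jar returned by `config.get_cookie_data()`: each jar line is tab-separated, with the cookie name in field 5 and its value in field 6. A minimal sketch of that parsing, assuming well-formed jar lines:

```python
def build_cookie_header(jar_lines, domain):
    # Collect name=value pairs for cookies whose domain field matches
    cookies = []
    for line in jar_lines:
        fields = line.split('\t')
        if len(fields) >= 7 and domain in fields[0]:
            cookies.append(fields[5] + '=' + fields[6])
    # Join with ';' like the channel does (no trailing separator)
    return ';'.join(cookies)
```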


@@ -23,6 +23,7 @@ list_servers = ['animeworld', 'verystream', 'streamango', 'openload', 'directo']
list_quality = ['default', '480p', '720p', '1080p']
def mainlist(item):
log()
@@ -55,19 +56,24 @@ def generi(item):
def build_menu(item):
log()
itemlist = []
support.menuItem(itemlist, __channel__, 'Tutti bold', 'peliculas', item.url, 'tvshow' , args=item.args)
matches = support.match(item,r'<button class="btn btn-sm btn-default dropdown-toggle" data-toggle="dropdown"> (.*?) <span.[^>]+>(.*?)</ul>',r'<form class="filters.*?>(.*?)</form>')[0]
support.menu(itemlist, 'Tutti bold submenu', 'video', item.url+item.args[1])
matches, data = support.match(item,r'<button class="btn btn-sm btn-default dropdown-toggle" data-toggle="dropdown"> (.*?) <span.*?>(.*?)<\/ul>',r'<form class="filters.*?>(.*?)<\/form>')
log('ANIME DATA =' ,data)
for title, html in matches:
if title not in 'Lingua Ordine':
support.menuItem(itemlist, __channel__, title + ' submenu bold', 'build_sub_menu', html, 'tvshow', args=item.args)
support.menu(itemlist, title + ' submenu bold', 'build_sub_menu', html, args=item.args)
log('ARGS= ', item.args[0])
log('ARGS= ', html)
return itemlist
# Build filter submenu ======================================================
def build_sub_menu(item):
log()
itemlist = []
matches = support.re.compile(r'<input.*?name="([^"]+)" value="([^"]+)"\s*>[^>]+>([^<]+)<\/label>', re.DOTALL).findall(item.url)
matches = re.compile(r'<input.*?name="([^"]+)" value="([^"]+)"\s*>[^>]+>([^<]+)<\/label>', re.DOTALL).findall(item.url)
for name, value, title in matches:
support.menuItem(itemlist, __channel__, support.typo(title, 'bold'), 'peliculas', host + '/filter?&' + name + '=' + value + '&' + item.args + '&sort=2', 'tvshow', args='sub')
support.menu(itemlist, support.typo(title, 'bold'), 'video', host + '/filter?' + '&' + name + '=' + value + '&' + item.args[1])
return itemlist
# What's new ======================================================
@@ -78,9 +84,12 @@ def newest(categoria):
item = Item()
try:
if categoria == "anime":
item.url = host + '/updated'
item.args = "updated"
return peliculas(item)
item.url = host + '/newest'
item.action = "video"
itemlist = video(item)
if itemlist[-1].action == "video":
itemlist.pop()
# Continue the search if an error occurs
except:
import sys
@@ -97,7 +106,7 @@ def search(item, texto):
log(texto)
item.url = host + '/search?keyword=' + texto
try:
return peliculas(item)
return video(item)
# Continue the search if an error occurs
except:
import sys
@@ -105,6 +114,7 @@ def search(item, texto):
logger.error("%s" % line)
return []
# A-Z list ====================================================
def alfabetico(item):
@@ -238,10 +248,12 @@ def video(item):
context = autoplay.context,
number= number))
patronNext=r'href="([^"]+)" rel="next"'
type_content_dict={'movie':['movie']}
type_action_dict={'findvideos':['movie']}
return locals()
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
autorenumber.renumber(itemlist)
# Next page
support.nextPage(itemlist, item, data, r'href="([^"]+)" rel="next"', resub=['&amp;','&'])
return itemlist
def episodios(item):
@@ -277,18 +289,18 @@ def episodios(item):
def findvideos(item):
log(item)
itemlist = []
matches, data = support.match(item, r'class="tab.*?data-name="([0-9]+)">', headers=headers)
log()
itemlist = []
matches, data = support.match(item, r'class="tab.*?data-name="([0-9]+)">([^<]+)</span', headers=headers)
videoData = ''
for serverid in matches:
number = scrapertoolsV2.find_single_match(item.title,r'(\d+) -')
block = scrapertoolsV2.find_multiple_matches(data,'data-id="' + serverid + '">(.*?)<div class="server')
ID = scrapertoolsV2.find_single_match(str(block),r'<a data-id="([^"]+)" data-base="' + (number if number else '1') + '"')
log('ID= ',ID)
for serverid, servername in matches:
block = scrapertoolsV2.find_multiple_matches(data,'data-id="'+serverid+'">(.*?)<div class="server')
log('ITEM= ',item)
id = scrapertoolsV2.find_single_match(str(block),r'<a data-id="([^"]+)" data-base="'+item.number+'"')
if id:
dataJson = httptools.downloadpage('%s/ajax/episode/info?id=%s&server=%s&ts=%s' % (host, ID, serverid, int(time.time())), headers=[['x-requested-with', 'XMLHttpRequest']]).data
dataJson = httptools.downloadpage('%s/ajax/episode/info?id=%s&server=%s&ts=%s' % (host, id, serverid, int(time.time())), headers=[['x-requested-with', 'XMLHttpRequest']]).data
json = jsontools.load(dataJson)
videoData +='\n'+json['grabber']
@@ -301,7 +313,6 @@ def findvideos(item):
quality='',
url=json['grabber'],
server='directo',
fulltitle=item.fulltitle,
show=item.show,
contentType=item.contentType,
folder=False))


@@ -1,14 +0,0 @@
{
"id": "beeg",
"name": "Beeg",
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "beeg.png",
"banner": "beeg.png",
"categories": [
"adult"
],
"settings": [
]
}


@@ -1,122 +0,0 @@
# -*- coding: utf-8 -*-
import re
import urllib
from core import jsontools as json
from core import scrapertools
from core.item import Item
from platformcode import logger
from core import httptools
url_api = ""
Host = "https://beeg.com"
def get_api_url():
global url_api
data = httptools.downloadpage(Host).data
version = re.compile('var beeg_version = ([\d]+)').findall(data)[0]
url_api = Host + "/api/v6/" + version
get_api_url()
def mainlist(item):
logger.info()
get_api_url()
itemlist = []
itemlist.append(Item(channel=item.channel, action="videos", title="Últimos videos", url=url_api + "/index/main/0/pc",
viewmode="movie"))
itemlist.append(Item(channel=item.channel, action="canal", title="Canal",
url=url_api + "/channels"))
itemlist.append(Item(channel=item.channel, action="listcategorias", title="Categorias",
url=url_api + "/index/main/0/pc", extra="nonpopular"))
itemlist.append(
Item(channel=item.channel, action="search", title="Buscar"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = url_api + "/index/tag/0/pc?tag=%s" % (texto)
try:
return videos(item)
# Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def videos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
JSONData = json.load(data)
for Video in JSONData["videos"]:
thumbnail = "http://img.beeg.com/236x177/" + str(Video["id"]) + ".jpg"
url= '%s/video/%s?v=2&s=%s&e=%s' % (url_api, Video['svid'], Video['start'], Video['end'])
title = Video["title"]
itemlist.append(
Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, plot="", show="",
folder=True, contentType="movie"))
# Pager
Actual = int(scrapertools.find_single_match(item.url, url_api + '/index/[^/]+/([0-9]+)/pc'))
if JSONData["pages"] - 1 > Actual:
scrapedurl = item.url.replace("/" + str(Actual) + "/", "/" + str(Actual + 1) + "/")
itemlist.append(
Item(channel=item.channel, action="videos", title="Página Siguiente", url=scrapedurl, thumbnail="",
viewmode="movie"))
return itemlist
def listcategorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
JSONData = json.load(data)
for Tag in JSONData["tags"]:
url = url_api + "/index/tag/0/pc?tag=" + Tag["tag"]
url = url.replace("%20", "-")
title = '%s (%s)' % (str(Tag["tag"]), str(Tag["videos"]))
itemlist.append(
Item(channel=item.channel, action="videos", title=title, url=url, viewmode="movie", type="item"))
return itemlist
def canal(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
JSONData = json.load(data)
for Tag in JSONData["channels"]:
url = url_api + "/index/channel/0/pc?channel=" + Tag["channel"]
url = url.replace("%20", "-")
title = '%s (%s)' % (str(Tag["ps_name"]), str(Tag["videos"]))
itemlist.append(
Item(channel=item.channel, action="videos", title=title, url=url, viewmode="movie", type="item"))
return itemlist
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
JSONData = json.load(data)
for key in JSONData:
videourl = re.compile("([0-9]+p)", re.DOTALL).findall(key)
if videourl:
videourl = videourl[0]
if not JSONData[videourl] == None:
url = JSONData[videourl]
url = url.replace("{DATA_MARKERS}", "data=pc.ES")
if not url.startswith("https:"): url = "https:" + url
title = videourl
itemlist.append(["%s %s [directo]" % (title, url[-4:]), url])
itemlist.sort(key=lambda item: item[0])
return itemlist
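Note that `play()` above sorts its entries lexicographically on the title, which orders quality labels as strings ("1080p" before "480p"). If numeric ordering is wanted, a sort key that parses the leading resolution works; a sketch, not the channel's code:

```python
import re

def quality_key(entry):
    # entry starts with "<resolution>p ..."; parse the leading number
    m = re.match(r'(\d+)p', entry)
    return int(m.group(1)) if m else 0

entries = ['720p a', '1080p b', '480p c']
entries.sort(key=quality_key)
# entries is now ['480p c', '720p a', '1080p b']
```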


@@ -1,13 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://www.bravoporn.com'


@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.camwhoresbay.com'
@@ -65,7 +66,7 @@ def lista(item):
for scrapedurl,scrapedtitle,scrapedthumbnail,scrapedtime in matches:
url = urlparse.urljoin(item.url,scrapedurl)
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
thumbnail = "http:" + scrapedthumbnail + "|Referer=%s" % item.url
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, plot=plot,
contentTitle = scrapedtitle, fanart=thumbnail))
@@ -107,7 +108,7 @@ def play(item):
if scrapedurl == "" :
scrapedurl = scrapertools.find_single_match(data, 'video_url: \'([^\']+)\'')
itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, url=scrapedurl,
itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, fulltitle=item.title, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo"))
return itemlist


@@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
import urlparse,re
from core import httptools
from core import scrapertools
from platformcode import logger
from platformcode import config
host = "http://www.canalporno.com"
@@ -11,21 +11,20 @@ host = "http://www.canalporno.com"
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="lista", title="Últimos videos", url=host + "/ajax/homepage/?page=1"))
itemlist.append(item.clone(action="categorias", title="Canal", url=host + "/ajax/list_producers/?page=1"))
itemlist.append(item.clone(action="categorias", title="PornStar", url=host + "/ajax/list_pornstars/?page=1"))
itemlist.append(item.clone(action="categorias", title="Categorias",
itemlist.append(item.clone(action="findvideos", title="Últimos videos", url=host))
itemlist.append(item.clone(action="categorias", title="Listado Categorias",
url=host + "/categorias"))
itemlist.append(item.clone(action="search", title="Buscar"))
itemlist.append(item.clone(action="search", title="Buscar", url=host + "/search/?q=%s"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = host + "/ajax/new_search/?q=%s&page=1" % texto
try:
return lista(item)
item.url = item.url % texto
itemlist = findvideos(item)
return sorted(itemlist, key=lambda it: it.title)
except:
import sys
for line in sys.exc_info():
@@ -33,60 +32,57 @@ def search(item, texto):
return []
def categorias(item):
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
if "pornstars" in item.url:
patron = '<div class="muestra.*?href="([^"]+)".*?src=\'([^\']+)\'.*?alt="([^"]+)".*?'
else:
patron = '<div class="muestra.*?href="([^"]+)".*?src="([^"]+)".*?alt="([^"]+)".*?'
if "Categorias" in item.title:
patron += '<div class="numero">([^<]+)</div>'
else:
patron += '</span> (\d+) vídeos</div>'
patron = '<img src="([^"]+)".*?alt="([^"]+)".*?<h2><a href="([^"]+)">.*?' \
'<div class="duracion"><span class="ico-duracion sprite"></span> ([^"]+) min</div>'
matches = scrapertools.find_multiple_matches(data, patron)
for url, scrapedthumbnail, scrapedtitle, cantidad in matches:
title= "%s [COLOR yellow] %s [/COLOR]" % (scrapedtitle, cantidad)
url= url.replace("/videos-porno/", "/ajax/show_category/").replace("/sitio/", "/ajax/show_producer/").replace("/pornstar/", "/ajax/show_pornstar/")
url = host + url + "?page=1"
itemlist.append(item.clone(action="lista", title=title, url=url, thumbnail=scrapedthumbnail))
if "/?page=" in item.url:
next_page=item.url
num= int(scrapertools.find_single_match(item.url,".*?/?page=(\d+)"))
num += 1
next_page = "?page=" + str(num)
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append(item.clone(action="categorias", title="Página Siguiente >>", text_color="blue", url=next_page) )
for thumbnail, title, url, time in matches:
scrapedtitle = time + " - " + title
scrapedurl = host + url
scrapedthumbnail = thumbnail
itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail))
patron = '<div class="paginacion">.*?<span class="selected">.*?<a href="([^"]+)">([^"]+)</a>'
matches = scrapertools.find_multiple_matches(data, patron)
for url, title in matches:
url = host + url
title = "Página %s" % title
itemlist.append(item.clone(action="findvideos", title=title, url=url))
return itemlist
def lista(item):
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = 'data-src="([^"]+)" alt="([^"]+)".*?<h2><a href="([^"]+)">.*?' \
'<div class="duracion"><span class="ico-duracion sprite"></span> ([^"]+) min</div>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedthumbnail, scrapedtitle, scrapedurl, duration in matches:
title = "[COLOR yellow] %s [/COLOR] %s" % (duration, scrapedtitle)
url = host + scrapedurl
itemlist.append(item.clone(action="play", title=title, url=url, thumbnail=scrapedthumbnail))
last=scrapertools.find_single_match(item.url,'(.*?)page=\d+')
num= int(scrapertools.find_single_match(item.url,".*?/?page=(\d+)"))
num += 1
next_page = "page=" + str(num)
if next_page!="":
next_page = last + next_page
itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page) )
bloque = scrapertools.find_single_match(data, '<ul class="ordenar-por ordenar-por-categoria">'
'(.*?)<\/ul>')
#patron = '<div class="muestra-categorias">.*?<a class="thumb" href="([^"]+)".*?<img class="categorias" src="([^"]+)".*?<div class="nombre">([^"]+)</div>'
patron = "<li><a href='([^']+)'\s?title='([^']+)'>.*?<\/a><\/li>"
matches = scrapertools.find_multiple_matches(bloque, patron)
for url, title in matches:
url = host + url
#thumbnail = "http:" + thumbnail
itemlist.append(item.clone(action="findvideos", title=title, url=url))
return itemlist
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
url = scrapertools.find_single_match(data, '<source src="([^"]+)"')
itemlist.append(item.clone(url=url, server="directo"))
return itemlist


@@ -9,6 +9,6 @@
"banner": "https://i.imgur.com/bXUyk6m.png",
"categories": [
"movie",
"vos"
"vo"
]
}


@@ -5,8 +5,7 @@
import re
import urllib
import urlparse
from channelselector import get_thumb
from core import httptools
from core import scrapertools
@@ -84,7 +83,7 @@ def search(item, texto):
logger.info()
if texto != "":
texto = texto.replace(" ", "+")
item.url = host + "search?q=" + texto
item.url = host + "/search?q=" + texto
item.extra = "busqueda"
try:
return list_all(item)


@@ -9,8 +9,9 @@ from core import scrapertoolsV2, httptools, servertools, tmdb, support
from core.item import Item
from lib import unshortenit
from platformcode import logger, config
from specials import autoplay
# set dynamically by findhost()
# set dynamically by getUrl()
host = ""
headers = ""
@@ -43,18 +44,18 @@ blacklist = ['BENVENUTI', 'Richieste Serie TV', 'CB01.UNO &#x25b6; TROVA L&#8217
def mainlist(item):
findhost()
film = [
('HD', ['', 'menu', 'Film HD Streaming']),
('Generi', ['', 'menu', 'Film per Genere']),
('Anni', ['', 'menu', 'Film per Anno'])
]
tvshow = ['/serietv/',
('Per Lettera', ['/serietv/', 'menu', 'Serie-Tv per Lettera']),
('Per Genere', ['/serietv/', 'menu', 'Serie-Tv per Genere']),
('Per anno', ['/serietv/', 'menu', 'Serie-Tv per Anno'])
]
return locals()
autoplay.init(item.channel, list_servers, list_quality)
# Main options
itemlist = []
support.menu(itemlist, 'Ultimi 100 Film Aggiornati bold', 'last', host + '/lista-film-ultimi-100-film-aggiornati/')
support.menu(itemlist, 'Film bold', 'peliculas', host)
support.menu(itemlist, 'HD submenu', 'menu', host, args="Film HD Streaming")
support.menu(itemlist, 'Per genere submenu', 'menu', host, args="Film per Genere")
support.menu(itemlist, 'Per anno submenu', 'menu', host, args="Film per Anno")
support.menu(itemlist, 'Cerca film... submenu', 'search', host, args='film')
support.menu(itemlist, 'Serie TV bold', 'peliculas', host + '/serietv/', contentType='tvshow')
support.menu(itemlist, 'Aggiornamenti serie tv', 'last', host + '/serietv/aggiornamento-quotidiano-serie-tv/', contentType='tvshow')
@@ -65,9 +66,10 @@ def mainlist(item):
autoplay.show_option(item.channel, itemlist)
return locals()
return itemlist
def newest(categoria):
def menu(item):
findhost()
itemlist= []
data = httptools.downloadpage(item.url, headers=headers).data
@@ -92,21 +94,10 @@ def newest(categoria):
def search(item, text):
support.log(item.url, "search" ,text)
debug = True
item = Item()
item.contentType = 'movie'
item.url = host + '/lista-film-ultimi-100-film-aggiunti/'
patron = "<a href=(?P<url>[^>]+)>(?P<title>[^<([]+)(?:\[(?P<quality>[A-Z]+)\])?\s\((?P<year>[0-9]{4})\)<\/a>"
patronBlock = r'Ultimi 100 film aggiunti:.*?<\/td>'
return locals()
def search(item, text):
support.log(item.url, "search", text)
try:
item.url = item.url + "/?s=" + text.replace(' ', '+')
item.url = item.url + "/?s=" + text.replace(' ','+')
return peliculas(item)
# Continue the search if an error occurs
@@ -116,6 +107,7 @@ def search(item, text):
logger.error("%s" % line)
return []
def newest(categoria):
findhost()
itemlist = []
@@ -191,22 +183,54 @@ def peliculas(item):
listGroups = ['thumb', 'url', 'title', 'quality', 'year', 'genre', 'duration', 'plot']
action = 'findvideos'
else:
patron = r'div class="card-image">.*?<img src="(?P<thumb>[^ ]+)" alt.*?<a href="(?P<url>[^ >]+)">(?P<title>[^<[(]+)<\/a>.*?<strong><span style="[^"]+">(?P<genre>[^<>0-9(]+)\((?P<year>[0-9]{4}).*?</(?:p|div)>(?P<plot>.*?)</div'
patron = r'div class="card-image">.*?<img src="([^ ]+)" alt.*?<a href="([^ >]+)">([^<[(]+)<\/a>.*?<strong><span style="[^"]+">([^<>0-9(]+)\(([0-9]{4}).*?</(?:p|div)>(.*?)</div'
listGroups = ['thumb', 'url', 'title', 'genre', 'year', 'plot']
action = 'episodios'
# patronBlock=[r'<div class="?sequex-page-left"?>(?P<block>.*?)<aside class="?sequex-page-right"?>',
# '<div class="?card-image"?>.*?(?=<div class="?card-image"?>|<div class="?rating"?>)']
patronNext='<a class="?page-link"? href="?([^>]+)"?><i class="fa fa-angle-right">'
return locals()
return support.scrape(item, patron_block=[r'<div class="?sequex-page-left"?>(.*?)<aside class="?sequex-page-right"?>',
'<div class="?card-image"?>.*?(?=<div class="?card-image"?>|<div class="?rating"?>)'],
patron=patron, listGroups=listGroups,
patronNext='<a class="?page-link"? href="?([^>]+)"?><i class="fa fa-angle-right">', blacklist=blacklist, action=action)
@support.scrape
def episodios(item):
patronBlock = r'(?P<block><div class="sp-head[a-z ]*?" title="Espandi">\s*STAGIONE [0-9]+ - (?P<lang>[^\s]+)(?: - (?P<quality>[^-<]+))?.*?[^<>]*?</div>.*?)<div class="spdiv">\[riduci\]</div>'
patron = '(?:<p>)(?P<episode>[0-9]+(?:&#215;|×)[0-9]+)(?P<url>.*?)(?:</p>|<br)'
itemlist = []
return locals()
data = httptools.downloadpage(item.url).data
matches = scrapertoolsV2.find_multiple_matches(data,
r'(<div class="sp-head[a-z ]*?" title="Espandi">[^<>]*?</div>.*?)<div class="spdiv">\[riduci\]</div>')
for match in matches:
support.log(match)
blocks = scrapertoolsV2.find_multiple_matches(match, '(?:<p>)(.*?)(?:</p>|<br)')
season = scrapertoolsV2.find_single_match(match, r'title="Espandi">.*?STAGIONE\s+\d+([^<>]+)').strip()
for block in blocks:
episode = scrapertoolsV2.find_single_match(block, r'([0-9]+(?:&#215;|×)[0-9]+)').strip()
seasons_n = scrapertoolsV2.find_single_match(block, r'<strong>STAGIONE\s+\d+([^<>]+)').strip()
if seasons_n:
season = seasons_n
if not episode: continue
season = re.sub(r'&#8211;|', "-", season)
itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType='episode',
title="[B]" + episode + "[/B] " + season,
fulltitle=episode + " " + season,
show=episode + " " + season,
url=block,
extra=item.extra,
thumbnail=item.thumbnail,
infoLabels=item.infoLabels
))
support.videolibrary(itemlist, item)
return itemlist
def findvideos(item):


@@ -28,25 +28,46 @@ host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
@support.menu
def mainlist(item):
film = '/category/film/'
filmSub = [
('Generi', ['', 'genres']),
('Sport', ['/category/sport/', 'peliculas']),
]
tvshow = '/category/serie-tv/'
tvshowSub = [
('Anime ', ['/category/anime-giapponesi/', 'video'])
]
logger.info('[cinemalibero.py] mainlist')
autoplay.init(item.channel, list_servers, list_quality) # Required for Autoplay
# Main menu
itemlist = []
support.menu(itemlist, 'Film bold', 'video', host+'/category/film/')
support.menu(itemlist, 'Generi submenu', 'genres', host)
support.menu(itemlist, 'Cerca film submenu', 'search', host)
support.menu(itemlist, 'Serie TV bold', 'video', host+'/category/serie-tv/', contentType='episode')
support.menu(itemlist, 'Anime submenu', 'video', host+'/category/anime-giapponesi/', contentType='episode')
support.menu(itemlist, 'Cerca serie submenu', 'search', host, contentType='episode')
support.menu(itemlist, 'Sport bold', 'video', host+'/category/sport/')
autoplay.show_option(item.channel, itemlist) # Required for Autoplay (configuration menu)
support.channel_config(item, itemlist)
return itemlist
def search(item, texto):
logger.info("[cinemalibero.py] " + item.url + " search " + texto)
item.url = host + "/?s=" + texto
try:
return video(item)
# Continue the search if an error occurs
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
return locals()
def genres(item):
return support.scrape2(item, patronBlock=r'<div id="bordobar" class="dropdown-menu(?P<block>.*)</li>', patron=r'<a class="dropdown-item" href="([^"]+)" title="([A-z]+)"', listGroups=['url', 'title'], action='video')
return support.scrape(item, patron_block=r'<div id="bordobar" class="dropdown-menu(.*?)</li>', patron=r'<a class="dropdown-item" href="([^"]+)" title="([A-z]+)"', listGroups=['url', 'title'], action='video')
def peliculas(item):
def video(item):
logger.info('[cinemalibero.py] video')
itemlist = []


@@ -3,9 +3,8 @@
import os
import re
from core import scrapertools
from core import servertools
from core import httptools
from core import servertools
from core.item import Item
from platformcode import config, logger


@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.cliphunter.com'
@@ -84,8 +84,9 @@ def lista(item):
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
thumbnail = scrapedthumbnail
plot = ""
year = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, plot=plot,
fanart=thumbnail, contentTitle = title ))
fanart=thumbnail, contentTitle = title, infoLabels={'year':year} ))
next_page = scrapertools.find_single_match(data,'<li class="arrow"><a rel="next" href="([^"]+)">&raquo;</a>')
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
@@ -102,7 +103,7 @@ def play(item):
for scrapedurl in matches:
scrapedurl = scrapedurl.replace("\/", "/")
title = scrapedurl
itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo"))
return itemlist


@@ -1,15 +1,16 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
import re
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
host ='http://www.coomelonitas.com'
def mainlist(item):
logger.info()
itemlist = []
@@ -56,7 +57,7 @@ def lista(item):
url = scrapertools.find_single_match(match,'<a href="([^"]+)"')
plot = scrapertools.find_single_match(match,'<p class="summary">(.*?)</p>')
thumbnail = scrapertools.find_single_match(match,'<img src="([^"]+)"')
itemlist.append( Item(channel=item.channel, action="findvideos", title=title, url=url,
itemlist.append( Item(channel=item.channel, action="findvideos", title=title, fulltitle=title, url=url,
fanart=thumbnail, thumbnail=thumbnail, plot=plot, viewmode="movie") )
next_page = scrapertools.find_single_match(data,'<a href="([^"]+)" class="siguiente">')
if next_page!="":


@@ -2,6 +2,7 @@
import re
import urllib
import urlparse
from core import httptools
@@ -10,22 +11,23 @@ from core.item import Item
from platformcode import config, logger
host = 'https://www.cumlouder.com'
def mainlist(item):
logger.info()
itemlist = []
config.set_setting("url_error", False, "cumlouder")
itemlist.append(item.clone(title="Últimos videos", action="videos", url= host + "/porn/"))
itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url=host + "/girls/"))
itemlist.append(item.clone(title="Listas", action="series", url= host + "/series/"))
itemlist.append(item.clone(title="Categorias", action="categorias", url= host + "/categories/"))
itemlist.append(item.clone(title="Buscar", action="search", url= host + "/search?q=%s"))
itemlist.append(item.clone(title="Últimos videos", action="videos", url="https://www.cumlouder.com/"))
itemlist.append(item.clone(title="Categorias", action="categorias", url="https://www.cumlouder.com/categories/"))
itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url="https://www.cumlouder.com/girls/"))
itemlist.append(item.clone(title="Listas", action="series", url="https://www.cumlouder.com/series/"))
itemlist.append(item.clone(title="Buscar", action="search", url="https://www.cumlouder.com/search?q=%s"))
return itemlist
def search(item, texto):
logger.info()
item.url = item.url % texto
item.action = "videos"
try:
@@ -39,20 +41,21 @@ def search(item, texto):
def pornstars_list(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Mas Populares", action="pornstars", url=host + "/girls/1/"))
for letra in "abcdefghijklmnopqrstuvwxyz":
itemlist.append(item.clone(title=letra.upper(), url=urlparse.urljoin(item.url, letra), action="pornstars"))
return itemlist
def pornstars(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<a girl-url=.*?'
patron += 'href="([^"]+)" title="([^"]+)">.*?'
patron += 'data-lazy="([^"]+)".*?'
patron += '<span class="ico-videos sprite"></span>([^<]+)</span>'
data = get_data(item.url)
patron = '<a girl-url="[^"]+" class="[^"]+" href="([^"]+)" title="([^"]+)">[^<]+'
patron += '<img class="thumb" src="([^"]+)" [^<]+<h2[^<]+<span[^<]+</span[^<]+</h2[^<]+'
patron += '<span[^<]+<span[^<]+<span[^<]+</span>([^<]+)</span>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, count in matches:
if "go.php?" in url:
@@ -62,7 +65,8 @@ def pornstars(item):
url = urlparse.urljoin(item.url, url)
if not thumbnail.startswith("https"):
thumbnail = "https:%s" % thumbnail
itemlist.append(item.clone(title="%s (%s)" % (title, count), url=url, action="videos", fanart=thumbnail, thumbnail=thumbnail))
itemlist.append(item.clone(title="%s (%s)" % (title, count), url=url, action="videos", thumbnail=thumbnail))
# Pagination
matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
if matches:
@@ -70,19 +74,18 @@ def pornstars(item):
url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, matches[0])
itemlist.append(item.clone(title="Página Siguiente >>", url=url))
itemlist.append(item.clone(title="Pagina Siguiente", url=url))
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = get_data(item.url)
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
patron = '<a tag-url=.*?'
patron += 'href="([^"]+)" title="([^"]+)".*?'
patron += 'data-lazy="([^"]+)".*?'
patron += '<span class="cantidad">([^<]+)</span>'
patron = '<a tag-url=.*?href="([^"]+)" title="([^"]+)".*?<img class="thumb" src="([^"]+)".*?<span class="cantidad">([^<]+)</span>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, count in matches:
if "go.php?" in url:
@@ -93,7 +96,8 @@ def categorias(item):
if not thumbnail.startswith("https"):
thumbnail = "https:%s" % thumbnail
itemlist.append(
item.clone(title="%s (%s videos)" % (title, count), url=url, action="videos", fanart=thumbnail, thumbnail=thumbnail))
item.clone(title="%s (%s videos)" % (title, count), url=url, action="videos", thumbnail=thumbnail))
# Pagination
matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
if matches:
@@ -101,20 +105,22 @@ def categorias(item):
url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, matches[0])
itemlist.append(item.clone(title="Página Siguiente >>", url=url))
return itemlist
itemlist.append(item.clone(title="Pagina Siguiente", url=url))
return itemlist
def series(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = get_data(item.url)
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
patron = '<a onclick=.*?href="([^"]+)".*?\<img src="([^"]+)".*?h2 itemprop="name">([^<]+).*?p>([^<]+)</p>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, thumbnail, title, count in matches:
itemlist.append(
item.clone(title="%s (%s) " % (title, count), url=urlparse.urljoin(item.url, url), action="videos", fanart=thumbnail, thumbnail=thumbnail))
item.clone(title="%s (%s) " % (title, count), url=urlparse.urljoin(item.url, url), action="videos", thumbnail=thumbnail))
# Pagination
matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
if matches:
@@ -122,33 +128,29 @@ def series(item):
url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, matches[0])
itemlist.append(item.clone(title="Página Siguiente >>", url=url))
return itemlist
itemlist.append(item.clone(title="Pagina Siguiente", url=url))
return itemlist
def videos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<a class="muestra-escena" href="([^"]+)" title="([^"]+)".*?'
patron += 'data-lazy="([^"]+)".*?'
patron += '<span class="ico-minutos sprite"></span>([^<]+)</span>(.*?)</a>'
data = get_data(item.url)
patron = '<a class="muestra-escena" href="([^"]+)" title="([^"]+)"[^<]+<img class="thumb" src="([^"]+)".*?<span class="minutos"> <span class="ico-minutos sprite"></span> ([^<]+)</span>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, duration,calidad in matches:
if "hd sprite" in calidad:
title="[COLOR yellow] %s [/COLOR][COLOR red] HD [/COLOR] %s" % (duration, title)
else:
title="[COLOR yellow] %s [/COLOR] %s" % (duration, title)
for url, title, thumbnail, duration in matches:
if "go.php?" in url:
url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
thumbnail = urllib.unquote(thumbnail.split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(host, url)
url = urlparse.urljoin("https://www.cumlouder.com", url)
if not thumbnail.startswith("https"):
thumbnail = "https:%s" % thumbnail
itemlist.append(item.clone(title=title, url=url,
itemlist.append(item.clone(title="%s (%s)" % (title, duration), url=urlparse.urljoin(item.url, url),
action="play", thumbnail=thumbnail, contentThumbnail=thumbnail,
fanart=thumbnail, contentType="movie", contentTitle=title))
contentType="movie", contentTitle=title))
# Pagination
nextpage = scrapertools.find_single_match(data, '<ul class="paginador"(.*?)</ul>')
matches = re.compile('<a href="([^"]+)" rel="nofollow">Next »</a>', re.DOTALL).findall(nextpage)
@@ -159,22 +161,51 @@ def videos(item):
url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, matches[0])
itemlist.append(item.clone(title="Página Siguiente >>", url=url))
itemlist.append(item.clone(title="Pagina Siguiente", url=url))
return itemlist
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<source src="([^"]+)" type=\'video/([^\']+)\' label=\'[^\']+\' res=\'([^\']+)\''
data = get_data(item.url)
patron = '<source src="([^"]+)" type=\'video/([^\']+)\' label=\'[^\']+\' res=\'([^\']+)\' />'
url, type, res = re.compile(patron, re.DOTALL).findall(data)[0]
if "go.php?" in url:
url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
elif not url.startswith("http"):
url = "https:" + url.replace("&amp;", "&")
url = "http:" + url.replace("&amp;", "&")
itemlist.append(
Item(channel='cumlouder', action="play", title='Video' + res, contentTitle=type.upper() + ' ' + res, url=url,
Item(channel='cumlouder', action="play", title='Video' + res, fulltitle=type.upper() + ' ' + res, url=url,
server="directo", folder=False))
return itemlist
def get_data(url_orig):
try:
if config.get_setting("url_error", "cumlouder"):
raise Exception
response = httptools.downloadpage(url_orig)
if not response.data or "urlopen error [Errno 1]" in str(response.code):
raise Exception
except:
config.set_setting("url_error", True, "cumlouder")
import random
server_random = ['nl', 'de', 'us']
server = server_random[random.randint(0, 2)]
url = "https://%s.hideproxy.me/includes/process.php?action=update" % server
post = "u=%s&proxy_formdata_server=%s&allowCookies=1&encodeURL=0&encodePage=0&stripObjects=0&stripJS=0&go=" \
% (urllib.quote(url_orig), server)
while True:
response = httptools.downloadpage(url, post, follow_redirects=False)
if response.headers.get("location"):
url = response.headers["location"]
post = ""
else:
break
return response.data
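The `/go.php?u=` redirector unwrapping that recurs throughout this channel (`url.split("/go.php?u=")[1].split("&")[0]` plus unquoting) can be sketched in isolation. This is an illustrative helper, not part of the channel; it uses Python 3's `urllib.parse.unquote` where the channel itself targets Python 2's `urllib.unquote`:

```python
from urllib.parse import unquote

def unwrap_go_url(url):
    # Extract the real target from a /go.php?u=<percent-encoded-url> redirector,
    # mirroring the split("/go.php?u=")[1].split("&")[0] pattern in the channel.
    if "go.php?" not in url:
        return url
    return unquote(url.split("/go.php?u=")[1].split("&")[0])

wrapped = "https://example.com/go.php?u=https%3A%2F%2Fcdn.example.com%2Fvideo.mp4&h=abc"
print(unwrap_go_url(wrapped))  # https://cdn.example.com/video.mp4
```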

View File

@@ -1,7 +1,7 @@
{
"id": "czechvideo",
"name": "Czechvideo",
"active": false,
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "http://czechvideo.org/templates/Default/images/black75.png",

View File

@@ -1,13 +1,15 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
import re
import urlparse
from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://czechvideo.org'
@@ -80,7 +82,7 @@ def play(item):
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.title = item.title
videoitem.contentTitle = item.contentTitle
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
return itemlist

View File

@@ -4,7 +4,8 @@ import re
from core import httptools
from core import scrapertools
from platformcode import config, logger
from platformcode import logger
from platformcode import config
def mainlist(item):
@@ -25,39 +26,49 @@ def search(item, texto):
def lista(item):
logger.info()
itemlist = []
# Download the page
data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url).data)
patron = '<div class="videobox">\s*<a href="([^"]+)".*?'
patron += 'url\(\'([^\']+)\'.*?'
patron += '<span>(.*?)<\/span>.*?'
patron += 'class="title">(.*?)<\/a>'
# Extract the entries
patron = '<div class="videobox">\s*<a href="([^"]+)".*?url\(\'([^\']+)\'.*?<span>(.*?)<\/span><\/div><\/a>.*?class="title">(.*?)<\/a><span class="views">.*?<\/a><\/span><\/div> '
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, duration, scrapedtitle in matches:
if "/embed-" not in scrapedurl:
#scrapedurl = scrapedurl.replace("dato.porn/", "dato.porn/embed-") + ".html"
scrapedurl = scrapedurl.replace("datoporn.co/", "datoporn.co/embed-") + ".html"
if not config.get_setting('unify'):
scrapedtitle = '[COLOR yellow] %s [/COLOR] %s' % (duration , scrapedtitle)
else:
scrapedtitle += ' gb'
scrapedtitle = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
scrapedtitle = scrapedtitle.replace(":", "'")
# logger.debug(scrapedurl + ' / ' + scrapedthumbnail + ' / ' + duration + ' / ' + scrapedtitle)
itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail, server="datoporn",
fanart=scrapedthumbnail.replace("_t.jpg", ".jpg"), plot = ""))
if duration:
scrapedtitle = "%s - %s" % (duration, scrapedtitle)
scrapedtitle += ' gb'
scrapedtitle = scrapedtitle.replace(":", "'")
#logger.debug(scrapedurl + ' / ' + scrapedthumbnail + ' / ' + duration + ' / ' + scrapedtitle)
itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
server="datoporn", fanart=scrapedthumbnail.replace("_t.jpg", ".jpg")))
# Extract the next-page marker
#next_page = scrapertools.find_single_match(data, '<a href=["|\']([^["|\']+)["|\']>Next')
next_page = scrapertools.find_single_match(data, '<a class=["|\']page-link["|\'] href=["|\']([^["|\']+)["|\']>Next')
if next_page and itemlist:
itemlist.append(item.clone(action="lista", title=">> Página Siguiente", url=next_page))
return itemlist
def categorias(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url).data
# Extract the entries (folders)
patron = '<div class="vid_block">\s*<a href="([^"]+)".*?url\((.*?)\).*?<span>(.*?)</span>.*?<b>(.*?)</b>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, numero, scrapedtitle in matches:
if numero:
scrapedtitle = "%s (%s)" % (scrapedtitle, numero)
itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail))
return itemlist
itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail))
return itemlist
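The `scrapertools.find_single_match` / `find_multiple_matches` calls used in these channels are thin wrappers over `re`; a minimal stand-in (assumed behavior, matching how the channels consume the results) looks like this, with a made-up HTML snippet for illustration:

```python
import re

def find_single_match(data, patron):
    # First capture group of the first match, or "" when nothing matches.
    match = re.search(patron, data, re.DOTALL)
    return match.group(1) if match else ""

def find_multiple_matches(data, patron):
    # All matches; tuples when the pattern has several capture groups.
    return re.findall(patron, data, re.DOTALL)

html = '<a href="/v/1">One</a> <a href="/v/2">Two</a>'
print(find_single_match(html, '<a href="([^"]+)"'))               # /v/1
print(find_multiple_matches(html, '<a href="([^"]+)">([^<]+)'))   # [('/v/1', 'One'), ('/v/2', 'Two')]
```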

View File

@@ -1,37 +1,38 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import httptools
from core import scrapertools
from core import jsontools
from platformcode import logger
from platformcode import config
host = 'http://www.eporner.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Últimos videos", action="videos", url=host + "/0/"))
itemlist.append(item.clone(title="Más visto", action="videos", url=host + "/most-viewed/"))
itemlist.append(item.clone(title="Mejor valorado", action="videos", url=host + "/top-rated/"))
itemlist.append(item.clone(title="Categorias", action="categorias", url=host + "/categories/"))
itemlist.append(item.clone(title="Pornstars", action="pornstars", url=host + "/pornstars/"))
itemlist.append(item.clone(title=" Alfabetico", action="pornstars_list", url=host + "/pornstars/"))
itemlist.append(item.clone(title="Buscar", action="search"))
itemlist.append(item.clone(title="Últimos videos", action="videos", url="http://www.eporner.com/0/"))
itemlist.append(item.clone(title="Categorias", action="categorias", url="http://www.eporner.com/categories/"))
itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url="http://www.eporner.com/pornstars/"))
itemlist.append(item.clone(title="Buscar", action="search", url="http://www.eporner.com/search/%s/"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "-")
item.url = host + "/search/%s/" % texto
item.url = item.url % texto
item.action = "videos"
try:
return videos(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
import traceback
logger.error(traceback.format_exc())
return []
@@ -40,67 +41,71 @@ def pornstars_list(item):
itemlist = []
for letra in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
itemlist.append(item.clone(title=letra, url=urlparse.urljoin(item.url, letra), action="pornstars"))
return itemlist
def pornstars(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="mbprofile">.*?'
patron += '<a href="([^"]+)" title="([^"]+)">.*?'
patron += '<img src="([^"]+)".*?'
patron = '<div class="mbtit" itemprop="name"><a href="([^"]+)" title="([^"]+)">[^<]+</a></div> '
patron += '<a href="[^"]+" title="[^"]+"> <img src="([^"]+)" alt="[^"]+" style="width:190px;height:152px;" /> </a> '
patron += '<div class="mbtim"><span>Videos: </span>([^<]+)</div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, count in matches:
itemlist.append(
item.clone(title="%s (%s videos)" % (title, count), url=urlparse.urljoin(item.url, url), action="videos",
thumbnail=thumbnail))
# Pagination
next_page = scrapertools.find_single_match(data,"<a href='([^']+)' class='nmnext' title='Next page'>")
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append(item.clone(action="pornstars", title="Página Siguiente >>", text_color="blue", url=next_page) )
# Pagination
patron = "<span style='color:#FFCC00;'>[^<]+</span></a> <a href='([^']+)' title='[^']+'><span>[^<]+</span></a>"
matches = re.compile(patron, re.DOTALL).findall(data)
if matches:
itemlist.append(item.clone(title="Pagina siguiente", url=urlparse.urljoin(item.url, matches[0])))
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<span class="addrem-cat">.*?'
patron += '<a href="([^"]+)" title="([^"]+)">.*?'
patron +='<div class="cllnumber">([^<]+)</div>'
patron = '<div class="categoriesbox" id="[^"]+"> <div class="ctbinner"> <a href="([^"]+)" title="[^"]+"> <img src="([^"]+)" alt="[^"]+"> <h2>([^"]+)</h2> </a> </div> </div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, cantidad in matches:
url = urlparse.urljoin(item.url, url)
title = title + " " + cantidad
thumbnail = ""
if not thumbnail:
thumbnail = scrapertools.find_single_match(data,'<img src="([^"]+)" alt="%s"> % title')
itemlist.append(item.clone(title=title, url=url, action="videos", thumbnail=thumbnail))
for url, thumbnail, title in matches:
itemlist.append(
item.clone(title=title, url=urlparse.urljoin(item.url, url), action="videos", thumbnail=thumbnail))
return sorted(itemlist, key=lambda i: i.title)
def videos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="mvhdico"><span>([^<]+)</span>.*?'
patron += '<a href="([^"]+)" title="([^"]+)" id="[^"]+">.*?'
patron += 'src="([^"]+)"[^>]+>.*?'
patron += '<div class="mbtim">([^<]+)</div>'
patron = '<a href="([^"]+)" title="([^"]+)" id="[^"]+">.*?<img id="[^"]+" src="([^"]+)"[^>]+>.*?<div class="mbtim">([^<]+)</div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for quality, url, title, thumbnail, duration in matches:
title = "[COLOR yellow]" + duration + "[/COLOR] " + "[COLOR red]" + quality + "[/COLOR] " +title
itemlist.append(item.clone(title=title, url=urlparse.urljoin(item.url, url),
for url, title, thumbnail, duration in matches:
itemlist.append(item.clone(title="%s (%s)" % (title, duration), url=urlparse.urljoin(item.url, url),
action="play", thumbnail=thumbnail, contentThumbnail=thumbnail,
contentType="movie", contentTitle=title))
# Pagination
next_page = scrapertools.find_single_match(data,"<a href='([^']+)' class='nmnext' title='Next page'>")
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append(item.clone(action="videos", title="Página Siguiente >>", text_color="blue", url=next_page) )
patron = "<span style='color:#FFCC00;'>[^<]+</span></a> <a href='([^']+)' title='[^']+'><span>[^<]+</span></a>"
matches = re.compile(patron, re.DOTALL).findall(data)
if matches:
itemlist.append(item.clone(title="Página siguiente", url=urlparse.urljoin(item.url, matches[0])))
return itemlist
@@ -130,7 +135,8 @@ def play(item):
int(hash[16:24], 16)) + int_to_base36(int(hash[24:32], 16))
url = "https://www.eporner.com/xhr/video/%s?hash=%s" % (vid, hash)
jsondata = httptools.downloadpage(url).json
data = httptools.downloadpage(url).data
jsondata = jsontools.load(data)
for source in jsondata["sources"]["mp4"]:
url = jsondata["sources"]["mp4"][source]["src"]
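The `play()` hunk above slices a 32-hex-digit hash into four 8-digit chunks and base36-encodes each. `int_to_base36` is not shown in this diff, so the implementation below is an assumption (standard base-36 with lowercase digits), sketched together with the slicing pattern:

```python
def int_to_base36(number):
    # Assumed implementation (the helper is not shown in the diff):
    # standard base-36 encoding, lowercase alphabet.
    alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"
    if number == 0:
        return "0"
    digits = []
    while number:
        number, rem = divmod(number, 36)
        digits.append(alphabet[rem])
    return "".join(reversed(digits))

def encode_hash(hash_hex):
    # Mirror the play() slicing: four 8-hex-digit chunks, each base36-encoded.
    return "".join(int_to_base36(int(hash_hex[i:i + 8], 16)) for i in (0, 8, 16, 24))

print(int_to_base36(35))             # z
print(encode_hash("00000023" * 4))   # zzzz
```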

View File

@@ -1,12 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
import re
import urlparse
from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://www.eroticage.net'

View File

@@ -10,5 +10,13 @@
"adult"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}
}

View File

@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import httptools
@@ -8,6 +9,7 @@ from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = "https://www.youfreeporntube.net"
@@ -15,12 +17,11 @@ def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, action="lista", title="Últimos videos",
url= host + "/newvideos.html?&page=1"))
itemlist.append(Item(channel=item.channel, action="lista", title="Populares",
url=host + "/topvideos.html?page=1"))
url= host + "/new-clips.html?&page=1"))
itemlist.append(
Item(channel=item.channel, action="categorias", title="Categorias", url=host + "/browse.html"))
itemlist.append(Item(channel=item.channel, action="lista", title="Populares",
url=host + "/topvideo.html?page=1"))
itemlist.append(Item(channel=item.channel, action="search", title="Buscar",
url=host + "/search.php?keywords="))
return itemlist
@@ -48,7 +49,7 @@ def categorias(item):
patron = '<div class="pm-li-category"><a href="([^"]+)">.*?.<h3>(.*?)</h3></a>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, actriz in matches:
itemlist.append(Item(channel=item.channel, action="lista", title=actriz, url=url))
itemlist.append(Item(channel=item.channel, action="listacategoria", title=actriz, url=url))
return itemlist
@@ -57,9 +58,7 @@ def lista(item):
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}", "", data)
patron = '<li><div class=".*?'
patron += '<a href="([^"]+)".*?'
patron += '<img src="([^"]+)".*?alt="([^"]+)"'
patron = '<li><div class=".*?<a href="([^"]+)".*?>.*?.img src="([^"]+)".*?alt="([^"]+)".*?>'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
@@ -67,14 +66,36 @@ def lista(item):
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
title = scrapedtitle.strip()
itemlist.append(Item(channel=item.channel, action="play", thumbnail=thumbnail, fanart=thumbnail, title=title,
url=url,
fulltitle=title, url=url,
viewmode="movie", folder=True))
paginacion = scrapertools.find_single_match(data,
'<li class="active">.*?</li>.*?<a href="([^"]+)">')
'<li class="active"><a href="#" onclick="return false;">\d+</a></li><li class=""><a href="([^"]+)">')
if paginacion:
paginacion = urlparse.urljoin(item.url,paginacion)
itemlist.append(Item(channel=item.channel, action="lista", title=">> Página Siguiente",
url= paginacion))
url=host + "/" + paginacion))
return itemlist
def listacategoria(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}", "", data)
patron = '<li><div class=".*?<a href="([^"]+)".*?>.*?.img src="([^"]+)".*?alt="([^"]+)".*?>'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
title = scrapedtitle.strip()
itemlist.append(
Item(channel=item.channel, action="play", thumbnail=thumbnail, title=title, fulltitle=title, url=url,
viewmode="movie", folder=True))
paginacion = scrapertools.find_single_match(data,
'<li class="active"><a href="#" onclick="return false;">\d+</a></li><li class=""><a href="([^"]+)">')
if paginacion:
itemlist.append(
Item(channel=item.channel, action="listacategoria", title=">> Página Siguiente", url=paginacion))
return itemlist
@@ -82,9 +103,14 @@ def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
url = scrapertools.find_single_match(data, '<div id="video-wrapper">.*?<iframe.*?src="([^"]+)"')
itemlist.append(item.clone(action="play", title=url, url=url ))
item.url = scrapertools.find_single_match(data, '(?i)Playerholder.*?src="([^"]+)"')
if "tubst.net" in item.url:
url = scrapertools.find_single_match(data, 'itemprop="embedURL" content="([^"]+)')
data = httptools.downloadpage(url).data
url = scrapertools.find_single_match(data, '<iframe.*?src="([^"]+)"')
data = httptools.downloadpage(url).data
url = scrapertools.find_single_match(data, '<source src="([^"]+)"')
item.url = httptools.downloadpage(url, follow_redirects=False, only_headers=True).headers.get("location", "")
itemlist.append(item.clone())
itemlist = servertools.get_servers_itemlist(itemlist)
return itemlist
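The final-URL resolution in `play()` above issues a request with redirects disabled and reads the `Location` header (`httptools.downloadpage(url, follow_redirects=False, only_headers=True).headers.get("location", "")`). The logic can be sketched with the HTTP call abstracted behind a callable so it is testable offline; `httptools` itself is the channel framework's wrapper, and the stub below is purely illustrative:

```python
def resolve_location(fetch_headers, url):
    # fetch_headers: callable returning the response-header dict for `url`
    # without following redirects (stand-in for httptools.downloadpage(...,
    # follow_redirects=False, only_headers=True).headers).
    headers = fetch_headers(url)
    return headers.get("location", "")

# Stub fetcher simulating a 302 pointing at the real media URL.
fake_fetch = lambda url: {"location": "https://cdn.example.com/final.mp4"}
print(resolve_location(fake_fetch, "https://tubst.net/clip"))  # https://cdn.example.com/final.mp4
```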

View File

@@ -4,10 +4,18 @@
"active": true,
"adult": false,
"language": ["ita"],
"thumbnail": "eurostreaming.png",
"banner": "eurostreaming.png",
"thumbnail": "https://eurostreaming.cafe/wp-content/uploads/2017/08/logocafe.png",
"bannermenu": "https://eurostreaming.cafe/wp-content/uploads/2017/08/logocafe.png",
"categories": ["tvshow","anime","vosi"],
"settings": [
"settings": [
{
"id": "channel_host",
"type": "text",
"label": "Host del canale",
"default": "https://eurostreaming.cafe",
"enabled": true,
"visible": true
},
{
"id": "include_in_global_search",
"type": "bool",

View File

@@ -4,91 +4,146 @@
# by Greko
# ------------------------------------------------------------
"""
Rewritten to be able to use the support module.
Known issues:
Some anime/cartoon sections do not work; some only have the episode list but no links,
The regexes do not capture everything...
verystream server: 'http://vcrypt.net/very/' # VeryS does not decode the link: http://vcrypt.net/fastshield/
some servers, among them nowvideo.club, are not implemented in the servers folder
Some anime/cartoon sections do not work; some only have the episode list but no links
others change their structure
The news section does not show episode titles
In episodios the option to configure the video library was added
"""
import re
from core import scrapertoolsV2, httptools, support
import channelselector
from specials import autoplay, filtertools
from core import scrapertoolsV2, httptools, servertools, tmdb, support
from core.item import Item
from platformcode import logger, config
__channel__ = "eurostreaming"
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
headers = ['Referer', host]
list_servers = ['verystream', 'wstream', 'speedvideo', 'flashx', 'nowvideo', 'streamango', 'deltabit', 'openload']
list_quality = ['default']
__comprueba_enlaces__ = config.get_setting('comprueba_enlaces', 'eurostreaming')
__comprueba_enlaces_num__ = config.get_setting('comprueba_enlaces_num', 'eurostreaming')
IDIOMAS = {'Italiano': 'ITA', 'Sub-ITA':'vosi'}
list_language = IDIOMAS.values()
@support.menu
def mainlist(item):
support.log()
tvshow = [
('Archivio ', ['/category/serie-tv-archive/', 'peliculas', '', 'tvshow']),
('Aggiornamenti ', ['/aggiornamento-episodi/', 'peliculas', True, 'tvshow'])
]
anime = ['/category/anime-cartoni-animati/']
return locals()
#import web_pdb; web_pdb.set_trace()
support.log()
itemlist = []
support.menu(itemlist, 'Serie TV', 'serietv', host, contentType = 'tvshow') # always use episode for serietv, anime!!
support.menu(itemlist, 'Serie TV Archivio submenu', 'serietv', host + "/category/serie-tv-archive/", contentType = 'tvshow')
support.menu(itemlist, 'Ultimi Aggiornamenti submenu', 'serietv', host + '/aggiornamento-episodi/', args='True', contentType = 'tvshow')
support.menu(itemlist, 'Anime / Cartoni', 'serietv', host + '/category/anime-cartoni-animati/', contentType = 'tvshow')
support.menu(itemlist, 'Cerca...', 'search', host, contentType = 'tvshow')
## itemlist = filtertools.show_option(itemlist, item.channel, list_language, list_quality)
# required for autoplay
autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)
@support.scrape
def peliculas(item):
support.channel_config(item, itemlist)
return itemlist
def serietv(item):
#import web_pdb; web_pdb.set_trace()
# TV series list
support.log()
action = 'episodios'
if item.args == True:
patron = r'<span class="serieTitle" style="font-size:20px">(?P<title>.*?).[^]<a href="(?P<url>[^"]+)"'\
'\s+target="_blank">(?P<episode>\d+x\d+) (?P<title2>.*?)</a>'
# shows episode and title + title2 in the news section
def itemHook(item):
item.show = item.episode + item.title
return item
itemlist = []
if item.args:
# episode titles get merged into episode but are not visible in newest!!!
patron = r'<span class="serieTitle" style="font-size:20px">(.*?).[^]<a href="([^"]+)"\s+target="_blank">(.*?)<\/a>'
listGroups = ['title', 'url', 'title2']
patronNext = ''
else:
patron = r'<div class="post-thumb">.*?\s<img src="(?P<thumb>[^"]+)".*?>'\
'<a href="(?P<url>[^"]+)".*?>(?P<title>.*?(?:\((?P<year>\d{4})\)|(\4\d{4}))?)<\/a><\/h2>'
patron = r'<div class="post-thumb">.*?\s<img src="([^"]+)".*?><a href="([^"]+)".*?>(.*?(?:\((\d{4})\)|(\d{4}))?)<\/a><\/h2>'
listGroups = ['thumb', 'url', 'title', 'year', 'year']
patronNext='a class="next page-numbers" href="?([^>"]+)">Avanti &raquo;</a>'
return locals()
@support.scrape
itemlist = support.scrape(item, patron_block='', patron=patron, listGroups=listGroups,
patronNext=patronNext, action='episodios')
return itemlist
def episodios(item):
support.log("episodios: %s" % item)
action = 'findvideos'
item.contentType = 'episode'
## import web_pdb; web_pdb.set_trace()
support.log("episodios")
itemlist = []
# Load the page
data = httptools.downloadpage(item.url, headers=headers).data.replace("'", '"')
data = httptools.downloadpage(item.url).data
#========
if 'clicca qui per aprire' in data.lower():
item.url = scrapertoolsV2.find_single_match(data, '"go_to":"([^"]+)"')
item.url = item.url.replace("\\","")
# Load the page
data = httptools.downloadpage(item.url, headers=headers).data.replace("'", '"')
data = httptools.downloadpage(item.url).data
elif 'clicca qui</span>' in data.lower():
item.url = scrapertoolsV2.find_single_match(data, '<h2 style="text-align: center;"><a href="([^"]+)">')
# Load the page
data = httptools.downloadpage(item.url, headers=headers).data.replace("'", '"')
data = httptools.downloadpage(item.url).data
#=========
data = re.sub('\n|\t', ' ', data)
patronBlock = r'(?P<block>STAGIONE\s\d+ (?:\()?(?P<lang>ITA|SUB ITA)(?:\))?<\/div>.*?)</div></div>'
patron = r'(?:\s|\Wn)?(?:|<strong>)?(?P<episode>\d+&#\d+;\d+)(?:|</strong>) (?P<title>.*?)(?:|)?<a\s(?P<url>.*?)<\/a><br\s\/>'
patron = r'(?:<\/span>\w+ STAGIONE\s\d+ (?:\()?(ITA|SUB ITA)(?:\))?<\/div>'\
'<div class="su-spoiler-content su-clearfix" style="display:none">|'\
'(?:\s|\Wn)?(?:<strong>)?(\d+&#.*?)(?:|)?<a\s(.*?)<\/a><br\s\/>)'
## '(?:<\/span>\w+ STAGIONE\s\d+ (?:\()?(ITA|SUB ITA)(?:\))?'\
## '<\/div><div class="su-spoiler-content su-clearfix" style="display:none">|'\
## '(?:\s|\Wn)?(?:<strong>)?(\d[&#].*?)(?:|\W)?<a\s(.*?)<\/a><br\s\/>)'
## '(?:<\/span>\w+ STAGIONE\s\d+ (?:\()?(ITA|SUB ITA)(?:\))?<\/div>'\
## '<div class="su-spoiler-content su-clearfix" style="display:none">|'\
## '\s(?:<strong>)?(\d[&#].*?)<a\s(.*?)<\/a><br\s\/>)'
listGroups = ['lang', 'title', 'url']
itemlist = support.scrape(item, data=data, patron=patron,
listGroups=listGroups, action='findvideos')
return locals()
# Allows configuring the video library without going to its dedicated menu,
# so its settings can be enabled/disabled directly from
# the episodes page
itemlist.append(
Item(channel='setting',
action="channel_config",
title=support.typo("Configurazione Videoteca color lime"),
plot = 'Filtra per lingua utilizzando la configurazione della videoteca.\
Escludi i video in sub attivando "Escludi streams... " e aggiungendo sub in Parole',
config='videolibrary', #item.channel,
folder=False,
thumbnail=channelselector.get_thumb('setting_0.png')
))
itemlist = filtertools.get_links(itemlist, item, list_language)
return itemlist
# =========== def findvideos =============
def findvideos(item):
support.log('findvideos', item)
return support.server(item, item.url)
support.log()
itemlist =[]
# Required for FilterTools
## itemlist = filtertools.get_links(itemlist, item, list_language)
itemlist = support.server(item, item.url)
## support.videolibrary(itemlist, item)
return itemlist
# =========== def search =============
def search(item, texto):
support.log()
item.url = "%s/?s=%s" % (host, texto)
item.contentType = 'tvshow'
try:
return peliculas(item)
return serietv(item)
# Continue searching on error
except:
import sys
@@ -101,14 +156,14 @@ def newest(categoria):
support.log()
itemlist = []
item = Item()
item.contentType = 'tvshow'
item.args = True
item.contentType= 'episode'
item.args= 'True'
try:
item.url = "%s/aggiornamento-episodi/" % host
item.action = "peliculas"
itemlist = peliculas(item)
item.action = "serietv"
itemlist = serietv(item)
if itemlist[-1].action == "peliculas":
if itemlist[-1].action == "serietv":
itemlist.pop()
# Continue the search on error
@@ -119,3 +174,6 @@ def newest(categoria):
return []
return itemlist
def paginator(item):
pass
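The `search` wrappers in these channels all follow the same pattern: format the query into the site URL, delegate to the listing function, and swallow any exception after logging it so the global search can continue across channels. A minimal Python 3 sketch of that pattern (`HOST`, `list_fn` and the function name are illustrative, not names from the addon; the real channels use their own host and `peliculas`/`serietv` listers):

```python
import sys
import urllib.parse

HOST = "https://example.org"  # hypothetical host; each channel defines its own

def search(list_fn, texto):
    # Build the site's search URL, then delegate to the lister.
    url = "%s/?s=%s" % (HOST, urllib.parse.quote_plus(texto))
    try:
        return list_fn(url)
    except Exception:
        # Log and return an empty list so the global search keeps going.
        for line in sys.exc_info():
            print("search error: %s" % (line,))
        return []
```

The bare `except` mirrors the channels' behaviour: one broken site must not abort a search that spans every channel.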

@@ -4,7 +4,7 @@
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "https://i.imgur.com/Orguh85.png",
"thumbnail": "",
"banner": "",
"categories": [
"adult"

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://fapality.com'
@@ -92,6 +93,6 @@ def play(item):
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl in matches:
url = scrapedurl
itemlist.append(item.clone(action="play", title=url, contentTitle = item.title, url=url))
itemlist.append(item.clone(action="play", title=url, fulltitle = item.title, url=url))
return itemlist

@@ -38,8 +38,8 @@ def mainlist(item):
support.menu(itemlist, 'Novità bold', 'pelicuals_tv', host, 'tvshow')
support.menu(itemlist, 'Serie TV bold', 'lista_serie', host, 'tvshow')
('Archivio A-Z ', [, 'list_az', ]), 'tvshow', args=['serie'])
support.menu(itemlist, 'Archivio A-Z submenu', 'list_az', host, 'tvshow', args=['serie'])
support.menu(itemlist, 'Cerca', 'search', host, 'tvshow')
support.aplay(item, itemlist, list_servers, list_quality)
support.channel_config(item, itemlist)
@@ -208,13 +208,13 @@ def findvideos(item):
itemlist = []
# data = httptools.downloadpage(item.url, headers=headers).data
patronBlock = '<div class="entry-content">(?P<block>.*)<footer class="entry-footer">'
# bloque = scrapertools.find_single_match(data, patronBlock)
patron_block = '<div class="entry-content">(.*?)<footer class="entry-footer">'
# bloque = scrapertools.find_single_match(data, patron_block)
patron = r'<a href="([^"]+)">'
# matches = re.compile(patron, re.DOTALL).findall(bloque)
matches, data = support.match(item, patron, patronBlock, headers)
matches, data = support.match(item, patron, patron_block, headers)
for scrapedurl in matches:
if 'is.gd' in scrapedurl:

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.fetishshrine.com'

@@ -31,8 +31,8 @@ def mainlist(item):
support.menu(itemlist, 'Film alta definizione bold', 'peliculas', host, contentType='movie', args='film')
support.menu(itemlist, 'Categorie Film bold', 'categorias_film', host , contentType='movie', args='film')
support.menu(itemlist, 'Categorie Serie bold', 'categorias_serie', host, contentType='tvshow', args='serie')
support.menu(itemlist, '[COLOR blue]Cerca Film...[/COLOR] bold', 'search', host, contentType='movie', args='film')
support.menu(itemlist, '[COLOR blue]Cerca Serie...[/COLOR] bold', 'search', host, contentType='tvshow', args='serie')
autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)

@@ -1,12 +1,15 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
import re
import urlparse
from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
from core import httptools
from platformcode import logger
from platformcode import config
# BLOCKED BY ESET INTERNET SECURITY
def mainlist(item):
@@ -40,6 +43,7 @@ def play(item):
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.title = item.title
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
return itemlist

@@ -4,8 +4,8 @@
"active": true,
"adult": false,
"language": ["ita"],
"thumbnail": "",
"banner": "",
"thumbnail": "https://www.filmpertutti.club/wp-content/themes/blunge/assets/logo.png",
"banner": "https://www.filmpertutti.club/wp-content/themes/blunge/assets/logo.png",
"categories": ["tvshow","movie"],
"settings": [
{

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://es.foxtube.com'
@@ -14,9 +15,7 @@ def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Ultimos" , action="lista", url=host))
itemlist.append( Item(channel=item.channel, title="PornStar" , action="catalogo", url=host + '/actrices/'))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
@@ -34,31 +33,6 @@ def search(item, texto):
return []
def catalogo(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
patron = '<a class="tco5" href="([^"]+)">.*?'
patron += 'data-origen="([^"]+)" alt="([^"]+)"'
matches = re.compile(patron,re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for scrapedurl,scrapedthumbnail,scrapedtitle in matches:
scrapedplot = ""
scrapedurl = urlparse.urljoin(item.url,scrapedurl)
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot) )
# <a class="bgco2 tco3" rel="next" href="/actrices/2/">&gt</a>
next_page = scrapertools.find_single_match(data,'<a class="bgco2 tco3" rel="next" href="([^"]+)">&gt</a>')
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append(item.clone(action="lista" , title="Página Siguiente >>", text_color="blue", url=next_page) )
return itemlist
return itemlist
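The `catalogo`/`lista` pagination step above is the same in every channel: scrape the `rel="next"` anchor and resolve it against the current page URL so relative links work. A sketch under that assumption (`next_page_url` is an illustrative name, not a function from the addon; Python 3's `urllib.parse.urljoin` stands in for the Python 2 `urlparse.urljoin` used in the diff):

```python
import re
from urllib.parse import urljoin

def next_page_url(base_url, html):
    # Scrape the '<a class="bgco2 tco3" rel="next" ...>' link used above
    # and resolve it against the page we just downloaded.
    m = re.search(r'<a class="bgco2 tco3" rel="next" href="([^"]+)">&gt</a>', html)
    return urljoin(base_url, m.group(1)) if m else ""
```

Returning `""` on no match reproduces the `if next_page!=""` guard the channels use before appending the "Página Siguiente >>" item.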
def categorias(item):
logger.info()
itemlist = []
@@ -80,8 +54,6 @@ def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
if "/actrices/" in item.url:
data=scrapertools.find_single_match(data,'<section class="container">(.*?)>Actrices similares</h3>')
patron = '<a class="thumb tco1" href="([^"]+)">.*?'
patron += 'src="([^"]+)".*?'
patron += 'alt="([^"]+)".*?'
@@ -99,7 +71,7 @@ def lista(item):
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle = contentTitle))
next_page = scrapertools.find_single_match(data,'<a class="bgco2 tco3" rel="next" href="([^"]+)">&gt</a>')
next_page = scrapertools.find_single_match(data,'<a class="bgco2 tco3" rel="next" href="([^"]+)">&gt</a>')
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append(item.clone(action="lista" , title="Página Siguiente >>", text_color="blue", url=next_page) )
@@ -110,14 +82,13 @@ def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
url = scrapertools.find_single_match(data,'<iframe title="video" src="([^"]+)"')
url = url.replace("https://flashservice.xvideos.com/embedframe/", "https://www.xvideos.com/video") + "/"
url = scrapertools.find_single_match(data,'<iframe src="([^"]+)"')
data = httptools.downloadpage(url).data
patron = 'html5player.setVideoHLS\\(\'([^\']+)\''
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl in matches:
scrapedurl = scrapedurl.replace("\/", "/")
itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
return itemlist
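`play()` above pulls the HLS manifest out of the embedded player script and undoes the JSON-style `\/` escaping before handing the URL to the `directo` server. A self-contained sketch of just that extraction (`extract_hls` is an illustrative name; the CDN URL in the test is hypothetical):

```python
import re

def extract_hls(player_js):
    # Match html5player.setVideoHLS('...') and unescape the '\/' sequences,
    # exactly as the channel does before building the play Item.
    urls = re.findall(r"html5player\.setVideoHLS\('([^']+)'", player_js)
    return [u.replace("\\/", "/") for u in urls]
```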

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://frprn.com'
@@ -96,6 +97,6 @@ def play(item):
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl in matches:
title = scrapedurl
itemlist.append(item.clone(action="play", title=title, contentTitle = scrapedurl, url=scrapedurl))
itemlist.append(item.clone(action="play", title=title, fulltitle = scrapedurl, url=scrapedurl))
return itemlist

@@ -1,14 +1,16 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
import re
import urlparse
from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://freepornstreams.org' #es http://xxxstreams.org
host = 'http://freepornstreams.org'
def mainlist(item):
@@ -16,8 +18,8 @@ def mainlist(item):
itemlist = []
itemlist.append( Item(channel=item.channel, title="Peliculas" , action="lista", url=host + "/free-full-porn-movies/"))
itemlist.append( Item(channel=item.channel, title="Videos" , action="lista", url=host + "/free-stream-porn/"))
itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))
itemlist.append( Item(channel=item.channel, title="Canal" , action="catalogo", url=host))
itemlist.append( Item(channel=item.channel, title="Categoria" , action="categorias", url=host))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
@@ -35,24 +37,35 @@ def search(item, texto):
return []
def catalogo(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = scrapertools.find_single_match(data,'>Top Sites</a>(.*?)</aside>')
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
patron = '<li id="menu-item-\d+".*?<a href="([^"]+)">([^"]+)</a></li>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle in matches:
scrapedplot = ""
scrapedthumbnail = ""
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot) )
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = scrapertools.find_single_match(data,'Top Tags(.*?)</ul>')
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
if item.title == "Categorias" :
data = scrapertools.find_single_match(data,'>Top Tags(.*?)</ul>')
else:
data = scrapertools.find_single_match(data,'>Top Sites</a>(.*?)</aside>')
patron = '<a href="([^"]+)">(.*?)</a>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle in matches:
if not "Featured" in scrapedtitle:
scrapedplot = ""
scrapedthumbnail = ""
scrapedurl = scrapedurl.replace ("http://freepornstreams.org/freepornst/stout.php?s=100,75,65:*&#038;u=" , "")
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot) )
scrapedplot = ""
scrapedthumbnail = ""
scrapedurl = scrapedurl.replace ("http://freepornstreams.org/freepornst/stout.php?s=100,75,65:*&#038;u=" , "")
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot) )
return itemlist
@@ -66,15 +79,12 @@ def lista(item):
patron += '<img src="([^"]+)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,scrapedthumbnail in matches:
if '/HD' in scrapedtitle : title= "[COLOR red]" + "HD" + "[/COLOR] " + scrapedtitle
elif 'SD' in scrapedtitle : title= "[COLOR red]" + "SD" + "[/COLOR] " + scrapedtitle
elif 'FullHD' in scrapedtitle : title= "[COLOR red]" + "FullHD" + "[/COLOR] " + scrapedtitle
elif '1080' in scrapedtitle : title= "[COLOR red]" + "1080p" + "[/COLOR] " + scrapedtitle
else: title = scrapedtitle
calidad = scrapertools.find_single_match(scrapedtitle, '(\(.*?\))')
title = "[COLOR yellow]" + calidad + "[/COLOR] " + scrapedtitle.replace( "%s" % calidad, "")
thumbnail = scrapedthumbnail.replace("jpg#", "jpg")
plot = ""
itemlist.append( Item(channel=item.channel, action="findvideos", title=title, url=scrapedurl, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle=title) )
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, fulltitle=title) )
next_page = scrapertools.find_single_match(data, '<div class="nav-previous"><a href="([^"]+)"')
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
@@ -82,14 +92,14 @@ def lista(item):
return itemlist
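`lista()` above lifts the parenthesised quality tag out of the scraped title and moves it to a coloured prefix using Kodi's `[COLOR]` markup. A sketch of that title step (`tag_quality` is an illustrative name; like the original, it does not strip the whitespace left behind by the removed tag):

```python
import re

def tag_quality(scrapedtitle):
    # Find a '(...)' quality tag, prefix it in yellow, and drop it
    # from the body of the title (leftover whitespace is kept, as above).
    m = re.search(r'(\(.*?\))', scrapedtitle)
    tag = m.group(1) if m else ""
    return "[COLOR yellow]" + tag + "[/COLOR] " + scrapedtitle.replace(tag, "")
```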
def findvideos(item):
itemlist = []
def play(item):
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|amp;|\s{2}|&nbsp;", "", data)
patron = '<a href="([^"]+)" rel="nofollow"[^<]+>(?:Streaming|Download)'
matches = scrapertools.find_multiple_matches(data, patron)
for url in matches:
if not "ubiqfile" in url:
itemlist.append(item.clone(action='play',title="%s", url=url))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.title = item.fulltitle
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videochannel=item.channel
return itemlist

@@ -1,15 +0,0 @@
{
"id": "gotporn",
"name": "gotporn",
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "https://cdn2-static-cf.gotporn.com/desktop/img/gotporn-logo.png",
"banner": "",
"categories": [
"adult"
],
"settings": [
]
}

@@ -1,129 +0,0 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
host = 'https://www.gotporn.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/?page=1"))
itemlist.append( Item(channel=item.channel, title="Mejor valorados" , action="lista", url=host + "/top-rated?page=1"))
itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/most-viewed?page=1"))
itemlist.append( Item(channel=item.channel, title="Longitud" , action="lista", url=host + "/longest?page=1"))
itemlist.append( Item(channel=item.channel, title="Canal" , action="catalogo", url=host + "/channels?page=1"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories"))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = host + "/results?search_query=%s" % texto
try:
return lista(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<a href="([^"]+)">'
patron += '<span class="text">([^<]+)</span>'
patron += '<span class="num">([^<]+)</span>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,cantidad in matches:
scrapedplot = ""
scrapedtitle = "%s %s" % (scrapedtitle,cantidad)
scrapedurl = scrapedurl + "?page=1"
thumbnail = ""
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=thumbnail , plot=scrapedplot) )
return itemlist
def catalogo(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
logger.debug(data)
patron = '<header class="clearfix" itemscope>.*?'
patron += '<a href="([^"]+)".*?'
patron += '<img src="([^"]+)" alt="([^"]+)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,scrapedtitle in matches:
scrapedplot = ""
scrapedurl = scrapedurl + "?page=1"
thumbnail = "https:" + scrapedthumbnail
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=thumbnail , plot=scrapedplot) )
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="btn btn-secondary"><span class="text">Next')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="catalogo", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<li class="video-item poptrigger".*?'
patron += 'href="([^"]+)" data-title="([^"]+)".*?'
patron += '<span class="duration">(.*?)</span>.*?'
patron += 'src=\'([^\']+)\'.*?'
patron += '<h3 class="video-thumb-title(.*?)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,scrapedtime,scrapedthumbnail,quality in matches:
scrapedtime = scrapedtime.strip()
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
if quality:
title = "[COLOR yellow]%s[/COLOR] [COLOR red]HD[/COLOR] %s" % (scrapedtime,scrapedtitle)
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
fanart=thumbnail, plot=plot,))
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="btn btn-secondary')
if "categories" in item.url:
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="btn btn-secondary paginate-show-more')
if "search_query" in item.url:
next_page = scrapertools.find_single_match(data, '<link rel=\'next\' href="([^"]+)">')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
patron = '<source src="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for url in matches:
url += "|Referer=%s" % host
itemlist.append(item.clone(action="play", title = item.title, url=url ))
return itemlist
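`play()` above appends `|Referer=<host>` to each stream URL: the pipe-separated `|Header=value` suffix convention that Kodi's player and the addon's downloaders understand. A one-line sketch (`with_referer` is an illustrative name):

```python
def with_referer(url, host):
    # Kodi-style '|Header=value' suffix: the player sends this Referer
    # header when it requests the stream.
    return url + "|Referer=%s" % host
```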

@@ -4,18 +4,15 @@
# Thanks to Icarus crew & Alfa addon & 4l3x87
# ------------------------------------------------------------
"""
Known issues:
- tmdb results appear in some entries on the categories page
"""
import re
from core import scrapertoolsV2, httptools, support
from core import httptools, scrapertools, support
from core import tmdb
from core.item import Item
from platformcode import logger, config
from core.support import log
from platformcode import logger, config
__channel__ = 'guardaserieclick'
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
@@ -33,154 +30,34 @@ def mainlist(item):
itemlist = []
support.menu(itemlist, 'Serie', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['news'])
support.menu(itemlist, 'Ultimi Aggiornamenti submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args= ['update'])
support.menu(itemlist, 'Categorie', 'categorie', host, 'tvshow', args=['cat'])
support.menu(itemlist, 'Serie inedite Sub-ITA submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['inedite'])
support.menu(itemlist, 'Da non perdere bold submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'da non perdere'])
support.menu(itemlist, 'Classiche bold submenu', 'serietv', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'classiche'])
support.menu(itemlist, 'Disegni che si muovono sullo schermo per magia bold', 'tvserie', "%s/category/animazione/" % host, 'tvshow', args= ['anime'])
# autoplay
support.menu(itemlist, 'Novità bold', 'serietvaggiornate', "%s/lista-serie-tv" % host, 'tvshow')
support.menu(itemlist, 'Nuove serie', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow')
support.menu(itemlist, 'Serie inedite Sub-ITA', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow', args=['inedite'])
support.menu(itemlist, 'Da non perdere bold', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'da non perdere'])
support.menu(itemlist, 'Classiche bold', 'nuoveserie', "%s/lista-serie-tv" % host, 'tvshow', args=['tv', 'classiche'])
support.menu(itemlist, 'Anime', 'lista_serie', "%s/category/animazione/" % host, 'tvshow')
support.menu(itemlist, 'Categorie', 'categorie', host, 'tvshow', args=['serie'])
support.menu(itemlist, 'Cerca', 'search', host, 'tvshow', args=['serie'])
support.aplay(item, itemlist, list_servers, list_quality)
# channel configuration
support.channel_config(item, itemlist)
return itemlist
@support.scrape
def serietv(item):
## import web_pdb; web_pdb.set_trace()
log('serietv ->\n')
##<<<<<<< HEAD
##
## action = 'episodios'
## listGroups = ['url', 'thumb', 'title']
## patron = r'<a href="([^"]+)".*?> <img\s.*?src="([^"]+)" \/>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<\/p>'
## if 'news' in item.args:
## patronBlock = r'<div class="container container-title-serie-new container-scheda" meta-slug="new">(.*?)</div></div><div'
## elif 'inedite' in item.args:
## patronBlock = r'<div class="container container-title-serie-ined container-scheda" meta-slug="ined">(.*?)</div></div><div'
## elif 'da non perdere' in item.args:
## patronBlock = r'<div class="container container-title-serie-danonperd container-scheda" meta-slug="danonperd">(.*?)</div></div><div'
## elif 'classiche' in item.args:
## patronBlock = r'<div class="container container-title-serie-classiche container-scheda" meta-slug="classiche">(.*?)</div></div><div'
## elif 'update' in item.args:
## listGroups = ['url', 'thumb', 'episode', 'lang', 'title']
## patron = r'rel="nofollow" href="([^"]+)"[^>]+> <img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(\d+.\d+) \((.+?)\).<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<[^>]+>'
## patronBlock = r'meta-slug="lastep">(.*?)</div></div><div'
## # permette di vedere episodio + titolo + titolo2 in novità
## def itemHook(item):
## item.show = item.episode + item.title
## return item
## return locals()
##
##@support.scrape
##def tvserie(item):
##
## action = 'episodios'
## listGroups = ['url', 'thumb', 'title']
## patron = r'<a\shref="([^"]+)".*?>\s<img\s.*?src="([^"]+)" />[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)</p></div>'
## patronBlock = r'<div\sclass="col-xs-\d+ col-sm-\d+-\d+">(.*?)<div\sclass="container-fluid whitebg" style="">'
## patronNext = r'<link\s.*?rel="next"\shref="([^"]+)"'
##
## return locals()
##
##@support.scrape
##def episodios(item):
## log('episodios ->\n')
## item.contentType = 'episode'
##
## action = 'findvideos'
## listGroups = ['episode', 'lang', 'title2', 'plot', 'title', 'url']
## patron = r'class="number-episodes-on-img"> (\d+.\d+)(?:|[ ]\((.*?)\))<[^>]+>'\
## '[^>]+>[^>]+>[^>]+>[^>]+>(.*?)<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>'\
## '(.*?)<[^>]+></div></div>.<span\s.+?meta-serie="(.*?)" meta-stag=(.*?)</span>'
##
## return locals()
##
##=======
action = 'episodios'
listGroups = ['url', 'thumb', 'title']
patron = r'<a href="([^"]+)".*?> <img\s.*?src="([^"]+)" \/>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<\/p>'
if 'news' in item.args:
patron_block = r'<div class="container container-title-serie-new container-scheda" meta-slug="new">(.*?)</div></div><div'
elif 'inedite' in item.args:
patron_block = r'<div class="container container-title-serie-ined container-scheda" meta-slug="ined">(.*?)</div></div><div'
elif 'da non perdere' in item.args:
patron_block = r'<div class="container container-title-serie-danonperd container-scheda" meta-slug="danonperd">(.*?)</div></div><div'
elif 'classiche' in item.args:
patron_block = r'<div class="container container-title-serie-classiche container-scheda" meta-slug="classiche">(.*?)</div></div><div'
elif 'update' in item.args:
listGroups = ['url', 'thumb', 'episode', 'lang', 'title']
patron = r'rel="nofollow" href="([^"]+)"[^>]+> <img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(\d+.\d+) \((.+?)\).<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<[^>]+>'
patron_block = r'meta-slug="lastep">(.*?)</div></div><div'
# allows showing episode + title + title2 in Novità
def itemHook(item):
item.show = item.episode + item.title
return item
return locals()
@support.scrape
def tvserie(item):
action = 'episodios'
listGroups = ['url', 'thumb', 'title']
patron = r'<a\shref="([^"]+)".*?>\s<img\s.*?src="([^"]+)" />[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)</p></div>'
patron_block = r'<div\sclass="col-xs-\d+ col-sm-\d+-\d+">(.*?)<div\sclass="container-fluid whitebg" style="">'
patronNext = r'<link\s.*?rel="next"\shref="([^"]+)"'
return locals()
@support.scrape
def episodios(item):
log('episodios ->\n')
item.contentType = 'episode'
action = 'findvideos'
listGroups = ['episode', 'lang', 'title2', 'plot', 'title', 'url']
patron = r'class="number-episodes-on-img"> (\d+.\d+)(?:|[ ]\((.*?)\))<[^>]+>'\
'[^>]+>[^>]+>[^>]+>[^>]+>(.*?)<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>'\
'(.*?)<[^>]+></div></div>.<span\s.+?meta-serie="(.*?)" meta-stag=(.*?)</span>'
return locals()
##>>>>>>> a72130e0324ae485ae5f39d3d8f1df46c365fa5b
def findvideos(item):
log()
return support.server(item, item.url)
@support.scrape
def categorie(item):
log()
action = 'tvserie'
listGroups = ['url', 'title']
patron = r'<li>\s<a\shref="([^"]+)"[^>]+>([^<]+)</a></li>'
patron_block = r'<ul\sclass="dropdown-menu category">(.*?)</ul>'
return locals()
# ================================================================================================================
##
### ----------------------------------------------------------------------------------------------------------------
# ----------------------------------------------------------------------------------------------------------------
def newest(categoria):
log()
itemlist = []
item = Item()
item.contentType= 'episode'
item.args = 'update'
try:
if categoria == "series":
item.url = "%s/lista-serie-tv" % host
item.action = "serietv"
itemlist = serietv(item)
item.action = "serietvaggiornate"
itemlist = serietvaggiornate(item)
if itemlist[-1].action == "serietv":
if itemlist[-1].action == "serietvaggiornate":
itemlist.pop()
# Continue the search on error
@@ -192,18 +69,207 @@ def newest(categoria):
return itemlist
### ================================================================================================================
### ----------------------------------------------------------------------------------------------------------------
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def search(item, texto):
log(texto)
item.url = host + "/?s=" + texto
item.args = 'cerca'
try:
return tvserie(item)
return lista_serie(item)
# Continue the search on error
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def cleantitle(scrapedtitle):
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.strip()).replace('"', "'")
return scrapedtitle.strip()
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def nuoveserie(item):
log()
itemlist = []
patron_block = ''
if 'inedite' in item.args:
patron_block = r'<div class="container container-title-serie-ined container-scheda" meta-slug="ined">(.*?)</div></div><div'
elif 'da non perdere' in item.args:
patron_block = r'<div class="container container-title-serie-danonperd container-scheda" meta-slug="danonperd">(.*?)</div></div><div'
elif 'classiche' in item.args:
patron_block = r'<div class="container container-title-serie-classiche container-scheda" meta-slug="classiche">(.*?)</div></div><div'
else:
patron_block = r'<div class="container container-title-serie-new container-scheda" meta-slug="new">(.*?)</div></div><div'
patron = r'<a href="([^"]+)".*?><img\s.*?src="([^"]+)" \/>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<\/p>'
matches = support.match(item, patron, patron_block, headers)[0]
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
scrapedtitle = cleantitle(scrapedtitle)
itemlist.append(
Item(channel=item.channel,
action="episodios",
contentType="tvshow",
title=scrapedtitle,
fulltitle=scrapedtitle,
url=scrapedurl,
show=scrapedtitle,
thumbnail=scrapedthumbnail,
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def serietvaggiornate(item):
log()
itemlist = []
patron_block = r'<div class="container\s*container-title-serie-lastep\s*container-scheda" meta-slug="lastep">(.*?)<\/div><\/div><div'
patron = r'<a rel="nofollow"\s*href="([^"]+)"[^>]+><img.*?src="([^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<[^>]+>'
matches = support.match(item, patron, patron_block, headers)[0]
for scrapedurl, scrapedthumbnail, scrapedep, scrapedtitle in matches:
episode = re.compile(r'^(\d+)x(\d+)', re.DOTALL).findall(scrapedep)  # grab season and episode
scrapedtitle = cleantitle(scrapedtitle)
contentlanguage = ""
if 'sub-ita' in scrapedep.strip().lower():
contentlanguage = 'Sub-ITA'
extra = r'<span\s.*?meta-stag="%s" meta-ep="%s" meta-embed="([^"]+)"\s.*?embed2="([^"]+)?"\s.*?embed3="([^"]+)?"[^>]*>' % (
episode[0][0], episode[0][1].lstrip("0"))
infoLabels = {}
infoLabels['episode'] = episode[0][1].zfill(2)
infoLabels['season'] = episode[0][0]
title = str(
"%s - %sx%s %s" % (scrapedtitle, infoLabels['season'], infoLabels['episode'], contentlanguage)).strip()
itemlist.append(
Item(channel=item.channel,
action="findepvideos",
contentType="tvshow",
title=title,
show=scrapedtitle,
fulltitle=scrapedtitle,
url=scrapedurl,
extra=extra,
thumbnail=scrapedthumbnail,
contentLanguage=contentlanguage,
infoLabels=infoLabels,
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
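`serietvaggiornate()` above derives the `infoLabels` from the `3x07`-style episode tag: season and episode come from the `^(\d+)x(\d+)` match, the episode number is zero-padded with `zfill(2)`, and a `sub-ita` substring sets the language. A sketch of that parsing (`parse_episode` is an illustrative name, not a function from the addon):

```python
import re

def parse_episode(scrapedep):
    # '^(\d+)x(\d+)' is the same pattern the channel compiles.
    season, episode = re.match(r'^(\d+)x(\d+)', scrapedep).groups()
    language = 'Sub-ITA' if 'sub-ita' in scrapedep.strip().lower() else ''
    return {'season': season, 'episode': episode.zfill(2), 'language': language}
```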
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def categorie(item):
log()
return support.scrape(item, r'<li>\s<a\shref="([^"]+)"[^>]+>([^<]+)</a></li>', ['url', 'title'], patron_block=r'<ul\sclass="dropdown-menu category">(.*?)</ul>', headers=headers, action="lista_serie")
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def lista_serie(item):
log()
patron_block = r'<div\sclass="col-xs-\d+ col-sm-\d+-\d+">(.*?)<div\sclass="container-fluid whitebg" style="">'
patron = r'<a\shref="([^"]+)".*?>\s<img\s.*?src="([^"]+)" />[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)</p></div>'
return support.scrape(item, patron, ['url', 'thumb', 'title'], patron_block=patron_block, patronNext=r"<link\s.*?rel='next'\shref='([^']*)'", action='episodios')
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def episodios(item):
log()
itemlist = []
patron = r'<div\sclass="[^"]+">\s([^<]+)<\/div>[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)?[^>]+>[^>]+>[^>]+>[^>]+>[^>]+><p[^>]+>([^<]+)<[^>]+>[^>]+>[^>]+>'
patron += r'[^"]+".*?serie="([^"]+)".*?stag="([0-9]*)".*?ep="([0-9]*)"\s'
patron += r'.*?embed="([^"]+)"\s.*?embed2="([^"]+)?"\s.*?embed3="([^"]+)?"?[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>\s?'
patron += r'(?:<img\sclass="[^"]+" meta-src="([^"]+)"[^>]+>|<img\sclass="[^"]+" src="" data-original="([^"]+)"[^>]+>)?'
matches = support.match(item, patron, headers=headers)[0]
for scrapedtitle, scrapedepisodetitle, scrapedplot, scrapedserie, scrapedseason, scrapedepisode, scrapedurl, scrapedurl2, scrapedurl3, scrapedthumbnail, scrapedthumbnail2 in matches:
scrapedtitle = cleantitle(scrapedtitle)
scrapedepisode = scrapedepisode.zfill(2)
scrapedepisodetitle = cleantitle(scrapedepisodetitle)
title = str("%sx%s %s" % (scrapedseason, scrapedepisode, scrapedepisodetitle)).strip()
if 'SUB-ITA' in scrapedtitle:
title += " "+support.typo("Sub-ITA", '_ [] color kod')
infoLabels = {}
infoLabels['season'] = scrapedseason
infoLabels['episode'] = scrapedepisode
itemlist.append(
Item(channel=item.channel,
action="findvideos",
title=support.typo(title, 'bold'),
fulltitle=scrapedtitle,
url=scrapedurl + "\r\n" + scrapedurl2 + "\r\n" + scrapedurl3,
contentType="episode",
plot=scrapedplot,
contentSerieName=scrapedserie,
contentLanguage='Sub-ITA' if 'Sub-ITA' in title else '',
infoLabels=infoLabels,
thumbnail=scrapedthumbnail2 if scrapedthumbnail2 != '' else scrapedthumbnail,
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
support.videolibrary(itemlist, item)
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findepvideos(item):
log()
data = httptools.downloadpage(item.url, headers=headers, ignore_response_code=True).data
matches = scrapertools.find_multiple_matches(data, item.extra)
data = "\r\n".join(matches[0])
item.contentType = 'movie'
return support.server(item, data=data)
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findvideos(item):
log()
if item.contentType == 'tvshow':
data = httptools.downloadpage(item.url, headers=headers).data
matches = scrapertools.find_multiple_matches(data, item.extra)
data = "\r\n".join(matches[0])
else:
log(item.url)
data = item.url
return support.server(item, data)
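episodios() stores up to three embed URLs in item.url joined with "\r\n", and findvideos()/findepvideos() hand that blob to support.server(). Unpacking that convention can be sketched as follows; `split_embeds` is a hypothetical helper, the real code passes the joined string through unchanged:

```python
def split_embeds(blob):
    # episodios() joins embed, embed2 and embed3 with "\r\n"; embed2 and
    # embed3 are optional in the page markup, so empty slots are dropped.
    return [url for url in blob.split("\r\n") if url]
```

For example, a blob with an empty middle slot yields only the two non-empty URLs.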

View File

@@ -4,8 +4,8 @@
"active": true,
"adult": false,
"language": ["ita"],
"thumbnail": "",
"bannermenu": "",
"thumbnail": "https:\/\/guardogratis.com\/wp-content\/uploads\/2018\/01\/Logo-4.png",
"bannermenu": "https:\/\/guardogratis.com\/wp-content\/uploads\/2018\/01\/Logo-4.png",
"categories": ["movie","tvshow"],
"settings": [
{

View File

@@ -1,12 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urllib
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://www.hclips.com'

View File

@@ -1,12 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urllib
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://www.hdzog.com'

View File

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://hellporno.com'
@@ -60,7 +61,7 @@ def lista(item):
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
patron = '<div class="video-thumb"><a href="([^"]+)" class="title".*?>([^"]+)</a>.*?'
patron += '<span class="time">([^<]+)</span>.*?'
patron += '<video muted poster="([^"]+)"'
patron += '<video poster="([^"]+)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,duracion,scrapedthumbnail in matches:
url = scrapedurl
@@ -84,6 +85,6 @@ def play(item):
scrapedurl = scrapertools.find_single_match(data,'<source data-fluid-hd src="([^"]+)/?br=\d+"')
if scrapedurl=="":
scrapedurl = scrapertools.find_single_match(data,'<source src="([^"]+)/?br=\d+"')
itemlist.append(item.clone(action="play", title=scrapedurl, url=scrapedurl))
itemlist.append(item.clone(action="play", title=scrapedurl, fulltitle = item.title, url=scrapedurl))
return itemlist

View File

@@ -4,9 +4,9 @@
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "http://www.hentai-id.tv/wp-content/themes/moviescript/assets/img/logo.png",
"banner": "http://www.hentai-id.tv/wp-content/themes/moviescript/assets/img/background.jpg",
"thumbnail": "https://dl.dropboxusercontent.com/u/30248079/hentai_id.png",
"banner": "https://dl.dropboxusercontent.com/u/30248079/hentai_id2.png",
"categories": [
"adult"
]
}
}

View File

@@ -1,12 +1,14 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
CHANNEL_HOST = "http://hentai-id.tv/"
@@ -68,11 +70,11 @@ def series(item):
action = "episodios"
for url, thumbnail, title in matches:
contentTitle = title
fulltitle = title
show = title
# logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(title, url, thumbnail))
itemlist.append(Item(channel=item.channel, action=action, title=title, url=url, thumbnail=thumbnail,
show=show, fanart=thumbnail, folder=True))
show=show, fulltitle=fulltitle, fanart=thumbnail, folder=True))
if pagination:
page = scrapertools.find_single_match(pagination, '>(?:Page|Página)\s*(\d+)\s*(?:of|de)\s*\d+<')
@@ -104,7 +106,7 @@ def episodios(item):
# logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(title, url, thumbnail))
itemlist.append(Item(channel=item.channel, action="findvideos", title=title, url=url,
thumbnail=thumbnail, plot=plot,
thumbnail=thumbnail, plot=plot, show=item.show, fulltitle="%s %s" % (item.show, title),
fanart=thumbnail))
return itemlist
@@ -114,33 +116,20 @@ def findvideos(item):
logger.info()
data = httptools.downloadpage(item.url).data
video_urls = []
down_urls = []
patron = '<(?:iframe)?(?:IFRAME)?\s*(?:src)?(?:SRC)?="([^"]+)"'
matches = re.compile(patron, re.DOTALL).findall(data)
for url in matches:
if 'goo.gl' in url or 'tinyurl' in url:
if 'goo.gl' in url:
video = httptools.downloadpage(url, follow_redirects=False, only_headers=True).headers["location"]
video_urls.append(video)
else:
video_urls.append(url)
paste = scrapertools.find_single_match(data, 'https://gpaste.us/([a-zA-Z0-9]+)')
if paste:
try:
new_data = httptools.downloadpage('https://gpaste.us/'+paste).data
matches.remove(url)
matches.append(video)
bloq = scrapertools.find_single_match(new_data, 'id="input_text">(.*?)</div>')
matches = bloq.split('<br>')
for url in matches:
down_urls.append(url)
except:
pass
video_urls.extend(down_urls)
from core import servertools
itemlist = servertools.find_video_items(data=",".join(video_urls))
itemlist = servertools.find_video_items(data=",".join(matches))
for videoitem in itemlist:
videoitem.contentTitle = item.contentTitle
videoitem.fulltitle = item.fulltitle
videoitem.channel = item.channel
videoitem.thumbnail = item.thumbnail
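The findvideos() hunk above pulls a `<br>`-separated link list out of a gpaste `input_text` block. That extraction step can be sketched on its own (`extract_paste_links` is an illustrative name; the real code inlines this with scrapertools.find_single_match):

```python
import re

def extract_paste_links(html):
    # Mirror of the findvideos() logic above: grab the id="input_text"
    # block and split its contents on <br>, discarding empty entries.
    match = re.search(r'id="input_text">(.*?)</div>', html, re.DOTALL)
    if match is None:
        return []
    return [u.strip() for u in match.group(1).split('<br>') if u.strip()]
```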

View File

@@ -1,12 +1,14 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urllib
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://hotmovs.com'

View File

@@ -28,7 +28,7 @@ def mainlist(item):
support.menu(itemlist, 'Ultime Uscite', 'peliculas', host + "/category/serie-tv/", "episode")
support.menu(itemlist, 'Ultimi Episodi', 'peliculas', host + "/ultimi-episodi/", "episode", 'latest')
support.menu(itemlist, 'Categorie', 'menu', host, "episode", args="Serie-Tv per Genere")
support.menu(itemlist, 'Cerca...', 'search', host, 'episode', args='serie')
autoplay.init(item.channel, list_servers, [])
autoplay.show_option(item.channel, itemlist)

View File

@@ -109,7 +109,8 @@ def menu_info(item):
itemlist = []
video_urls, data = play(item.clone(extra="play_menu"))
itemlist.append(item.clone(action="play", title="Ver -- %s" % item.title, video_urls=video_urls))
matches = scrapertools.find_multiple_matches(data, '<a href="([^"]+)" class="item" rel="screenshots"')
bloque = scrapertools.find_single_match(data, '<div class="carousel-inner"(.*?)<div class="container">')
matches = scrapertools.find_multiple_matches(bloque, 'src="([^"]+)"')
for i, img in enumerate(matches):
if i == 0:
continue

View File

@@ -10,5 +10,13 @@
"adult"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": false
}
]
}

View File

@@ -6,6 +6,7 @@ from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = 'http://javus.net/'

View File

@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.javwhores.com/'
@@ -74,13 +74,9 @@ def lista(item):
itemlist.append( Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail,
plot=plot, contentTitle = title))
next_page = scrapertools.find_single_match(data, '<li class="next"><a href="([^"]+)"')
if "#videos" in next_page:
next_page = scrapertools.find_single_match(data, 'data-parameters="sort_by:post_date;from:(\d+)">Next')
next = scrapertools.find_single_match(item.url, '(.*?/)\d+')
next_page = next + "%s/" % next_page
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append(item.clone(action="lista", title= next_page, text_color="blue", url=next_page ) )
itemlist.append(item.clone(action="lista", title="Página Siguiente >>" , text_color="blue", url=next_page ) )
return itemlist
@@ -96,7 +92,7 @@ def play(item):
if scrapedurl == "" :
scrapedurl = scrapertools.find_single_match(data, 'video_url: \'([^\']+)\'')
itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, url=scrapedurl,
itemlist.append(Item(channel=item.channel, action="play", title=scrapedurl, fulltitle=item.title, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
return itemlist

View File

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://jizzbunker.com/es'
@@ -86,7 +87,7 @@ def play(item):
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl in matches:
scrapedurl = scrapedurl.replace("https", "http")
itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
itemlist.append(Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
return itemlist

View File

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = 'http://xxx.justporno.tv'
@@ -92,6 +93,10 @@ def lista(item):
next_page = "%s?mode=async&function=get_block&block_id=list_videos_common_videos_list" \
"&sort_by=post_date&from=%s" % (item.url, next_page)
itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page))
return itemlist
@@ -104,6 +109,6 @@ def play(item):
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl in matches:
scrapedplot = ""
itemlist.append(item.clone(channel=item.channel, action="play", title=item.title , url=scrapedurl , plot="" , folder=True) )
itemlist.append(item.clone(channel=item.channel, action="play", title=scrapedurl , url=scrapedurl , plot="" , folder=True) )
return itemlist

View File

@@ -1,15 +0,0 @@
{
"id": "kingsizetits",
"name": "Kingsizetits",
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "http://cdn.images.kingsizetits.com/resources/kingsizetits.com/rwd_5/default/images/logo.png",
"banner": "",
"categories": [
"adult"
],
"settings": [
]
}

View File

@@ -1,95 +0,0 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
host = 'http://kingsizetits.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/most-recent/"))
itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/most-viewed-week/"))
itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/top-rated/"))
itemlist.append( Item(channel=item.channel, title="Mas largos" , action="lista", url=host + "/longest/"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/"))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = host + "/search/videos/%s/" % texto
try:
return lista(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<a href="([^"]+)" class="video-box.*?'
patron += 'src=\'([^\']+)\' alt=\'([^\']+)\'.*?'
patron += 'data-video-count="(\d+)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,scrapedtitle,cantidad in matches:
scrapedplot = ""
scrapedurl = urlparse.urljoin(item.url,scrapedurl)
title = scrapedtitle + " (" + cantidad + ")"
itemlist.append( Item(channel=item.channel, action="lista", title=title, url=scrapedurl,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot) )
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<script>stat.*?'
patron += '<a href="([^"]+)".*?'
patron += 'src="([^"]+)".*?'
patron += '<span class="video-length">([^<]+)</span>.*?'
patron += '<span class="pic-name">([^<]+)</span>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,scrapedtime,scrapedtitle in matches:
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
scrapedurl = urlparse.urljoin(item.url,scrapedurl)
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
fanart=thumbnail, thumbnail=thumbnail, plot=plot, contentTitle = scrapedtitle))
next_page = scrapertools.find_single_match(data, '<a class="btn default-btn page-next page-nav" href="([^"]+)"')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
logger.debug(data)
url = scrapertools.find_single_match(data,'label:"\d+", file\:"([^"]+)"')
itemlist.append(item.clone(action="play", server="directo", url=url ))
return itemlist

View File

@@ -1,15 +0,0 @@
{
"id": "mangovideo",
"name": "mangovideo",
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "https://mangovideo.pw/images/logo.png",
"banner": "",
"categories": [
"adult"
],
"settings": [
]
}

View File

@@ -1,109 +0,0 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
server = {'1': 'https://www.mangovideo.pw/contents/videos', '7' : 'https://server9.mangovideo.pw/contents/videos/',
'8' : 'https://s10.mangovideo.pw/contents/videos/', '9' : 'https://server2.mangovideo.pw/contents/videos/',
'10' : 'https://server217.mangovideo.pw/contents/videos/', '11' : 'https://234.mangovideo.pw/contents/videos/'
}
host = 'http://mangovideo.pw'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/latest-updates/"))
itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/most-popular/"))
itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/top-rated/"))
itemlist.append( Item(channel=item.channel, title="Sitios" , action="categorias", url=host + "/sites/"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/"))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = host + "/search/%s/" % texto
try:
return lista(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<a class="item" href="([^"]+)" title="([^"]+)".*?'
patron += '<div class="videos">(\d+) videos</div>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,cantidad in matches:
scrapedplot = ""
scrapedthumbnail = ""
title = scrapedtitle + " (" + cantidad + ")"
itemlist.append( Item(channel=item.channel, action="lista", title=title, url=scrapedurl,
thumbnail=scrapedthumbnail , plot=scrapedplot) )
next_page = scrapertools.find_single_match(data, '<li class="next"><a href="([^"]+)"')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="categorias", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<div class="item\s+">.*?'
patron += '<a href="([^"]+)" title="([^"]+)".*?'
patron += 'data-original="([^"]+)".*?'
patron += '<div class="duration">([^<]+)</div>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,scrapedthumbnail,scrapedtime in matches:
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
thumbnail=thumbnail, fanart=thumbnail, plot=plot, contentTitle = scrapedtitle))
next_page = scrapertools.find_single_match(data, '<li class="next"><a href="([^"]+)"')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist
def play(item):
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|amp;|\s{2}|&nbsp;", "", data)
scrapedtitle = ""
patron = 'video_url: \'function/0/https://mangovideo.pw/get_file/(\d+)/\w+/(.*?)/\''
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedtitle,url in matches:
scrapedtitle = server.get(scrapedtitle, scrapedtitle)
url = scrapedtitle + url
if not scrapedtitle:
url = scrapertools.find_single_match(data, '<div class="embed-wrap".*?<iframe src="([^"]+)\?ref=')
itemlist.append(item.clone(action="play", title="%s", url=url))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
return itemlist
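play() in the removed mangovideo channel maps the numeric id captured from `get_file/<n>/` to a CDN base URL via `server.get(id, id)`. The lookup-with-fallback can be sketched as below; `build_video_url` and the two-entry table are illustrative, the full table is the `server` dict at the top of the file:

```python
# Illustrative subset of the `server` table defined at the top of the file.
SERVER_BASES = {
    '1': 'https://www.mangovideo.pw/contents/videos',
    '7': 'https://server9.mangovideo.pw/contents/videos/',
}

def build_video_url(server_id, path):
    # dict.get(key, key) falls back to the raw id when it is unknown,
    # matching server.get(scrapedtitle, scrapedtitle) in play().
    return SERVER_BASES.get(server_id, server_id) + path
```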

View File

@@ -30,11 +30,11 @@ def mainlist(item):
support.menu(itemlist, 'Sub ITA bold', 'carousel_subita', host, contentType='movie', args='movies')
support.menu(itemlist, 'Ultime Richieste Inserite bold', 'carousel_request', host, contentType='movie', args='movies')
support.menu(itemlist, 'Film Nelle Sale bold', 'carousel_cinema', host, contentType='movie', args='movies')
('Film Ultimi Inseriti ', [, 'carousel_last', 'movies'])
('Film Top ImDb ', ['/top-imdb/', 'top_imdb', 'movies'])
support.menu(itemlist, 'Serie TV', 'carousel_episodes', host, contentTyp='episode', args='tvshows')
('Serie TV Top ImDb ', ['/top-imdb/', 'top_serie', 'tvshows'])
support.menu(itemlist, 'Film Ultimi Inseriti submenu', 'carousel_last', host, contentType='movie', args='movies')
support.menu(itemlist, 'Film Top ImDb submenu', 'top_imdb', host + '/top-imdb/', contentType='movie', args='movies')
support.menu(itemlist, 'Serie TV', 'carousel_episodes', host, contentType='episode', args='tvshows')
support.menu(itemlist, 'Serie TV Top ImDb submenu', 'top_serie', host + '/top-imdb/', contentType='episode', args='tvshows')
support.menu(itemlist, '[COLOR blue]Cerca...[/COLOR] bold', 'search', host)
autoplay.init(item.channel, list_servers, list_quality)
autoplay.show_option(item.channel, itemlist)

View File

@@ -1,22 +1,23 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = 'http://mporno.tv'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Novedades" , action="lista", url=host + "/most-recent/"))
itemlist.append( Item(channel=item.channel, title="Mejor valoradas" , action="lista", url=host + "/top-rated/"))
itemlist.append( Item(channel=item.channel, title="Mas vistas" , action="lista", url=host + "/most-viewed/"))
itemlist.append( Item(channel=item.channel, title="Longitud" , action="lista", url=host + "/longest/"))
itemlist.append( Item(channel=item.channel, title="Novedades" , action="peliculas", url=host + "/most-recent/"))
itemlist.append( Item(channel=item.channel, title="Mejor valoradas" , action="peliculas", url=host + "/top-rated/"))
itemlist.append( Item(channel=item.channel, title="Mas vistas" , action="peliculas", url=host + "/most-viewed/"))
itemlist.append( Item(channel=item.channel, title="Longitud" , action="peliculas", url=host + "/longest/"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/channels/"))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
@@ -27,7 +28,7 @@ def search(item, texto):
texto = texto.replace(" ", "+")
item.url = host + "/search/videos/%s/page1.html" % texto
try:
return lista(item)
return peliculas(item)
except:
import sys
for line in sys.exc_info():
@@ -45,12 +46,12 @@ def categorias(item):
scrapedplot = ""
scrapedthumbnail = ""
scrapedtitle = scrapedtitle + " " + cantidad
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
itemlist.append( Item(channel=item.channel, action="peliculas", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail , plot=scrapedplot) )
return itemlist
def lista(item):
def peliculas(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
@@ -60,21 +61,15 @@ def lista(item):
for scrapedurl,scrapedtitle,scrapedthumbnail in matches:
contentTitle = scrapedtitle
title = scrapedtitle
scrapedurl = scrapedurl.replace("/thumbs/", "/videos/") + ".mp4"
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, server= "directo", contentTitle=contentTitle))
fanart=thumbnail, plot=plot, contentTitle=contentTitle))
next_page_url = scrapertools.find_single_match(data,'<a href=\'([^\']+)\' class="next">Next &gt;&gt;</a>')
if next_page_url!="":
next_page_url = urlparse.urljoin(item.url,next_page_url)
itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page_url) )
itemlist.append(item.clone(action="peliculas", title="Página Siguiente >>", text_color="blue", url=next_page_url) )
return itemlist
def play(item):
logger.info()
itemlist = []
url = item.url.replace("/thumbs/", "/videos/") + ".mp4"
itemlist.append( Item(channel=item.channel, action="play", title= item.title, server= "directo", url=url))
return itemlist

View File

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.pornburst.xxx'
@@ -42,21 +43,20 @@ def categorias(item):
if "/sites/" in item.url:
patron = '<div class="muestra-escena muestra-canales">.*?'
patron += 'href="([^"]+)">.*?'
patron += 'data-src="([^"]+)".*?'
patron += 'src="([^"]+)".*?'
patron += '<a title="([^"]+)".*?'
patron += '</span> (\d+) videos</span>'
if "/pornstars/" in item.url:
patron = '<a class="muestra-escena muestra-pornostar" href="([^"]+)">.*?'
patron += 'data-src="([^"]+)".*?'
patron += 'src="([^"]+)".*?'
patron += 'alt="([^"]+)".*?'
patron += '</span> (\d+) videos</span>'
else:
patron = '<a class="muestra-escena muestra-categoria" href="([^"]+)" title="[^"]+">.*?'
patron += 'data-src="([^"]+)".*?'
patron += 'src="([^"]+)".*?'
patron += '</span> ([^"]+) </h2>(.*?)>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,scrapedtitle,cantidad in matches:
logger.debug(scrapedurl + ' / ' + scrapedthumbnail + ' / ' + cantidad + ' / ' + scrapedtitle)
scrapedplot = ""
cantidad = " (" + cantidad + ")"
if "</a" in cantidad:
@@ -107,6 +107,6 @@ def play(item):
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl in matches:
title = scrapedurl
itemlist.append(item.clone(action="play", title=title, url=scrapedurl))
itemlist.append(item.clone(action="play", title=title, fulltitle = item.title, url=scrapedurl))
return itemlist

View File

@@ -2,11 +2,13 @@
import base64
import hashlib
import urlparse
from core import httptools
from core import scrapertools
from platformcode import logger
from platformcode import config
host = "https://www.nuvid.com"
@@ -51,7 +53,7 @@ def lista(item):
data = httptools.downloadpage(item.url, headers=header, cookies=False).data
# Extract the entries
patron = '<div class="box-tumb related_vid.*?href="([^"]+)" title="([^"]+)".*?src="([^"]+)"(.*?)<i class="time">([^<]+)<'
patron = '<div class="box-tumb related_vid">.*?href="([^"]+)" title="([^"]+)".*?src="([^"]+)"(.*?)<i class="time">([^<]+)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, quality, duration in matches:
scrapedurl = urlparse.urljoin(host, scrapedurl)
@@ -83,7 +85,7 @@ def categorias(item):
for cat, b in bloques:
cat = cat.replace("Straight", "Hetero")
itemlist.append(item.clone(action="", title=cat, text_color="gold"))
matches = scrapertools.find_multiple_matches(b, '<li>.*?href="([^"]+)" >(.*?)</span>')
matches = scrapertools.find_multiple_matches(b, '<li.*?href="([^"]+)">(.*?)</span>')
for scrapedurl, scrapedtitle in matches:
scrapedtitle = " " + scrapedtitle.replace("<span>", "")
scrapedurl = urlparse.urljoin(host, scrapedurl)

View File

@@ -1,17 +1,18 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
import urlparse
import re
import base64
from platformcode import config, logger
from core import scrapertools
from core import servertools
from core.item import Item
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = 'https://pandamovies.pw'
def mainlist(item):
logger.info()
itemlist = []
@@ -61,7 +62,7 @@ def lista(item):
data = httptools.downloadpage(item.url).data
patron = '<div data-movie-id="\d+".*?'
patron += '<a href="([^"]+)".*?oldtitle="([^"]+)".*?'
patron += '<img data-original="([^"]+)".*?'
patron += '<img src="([^"]+)".*?'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle, scrapedthumbnail in matches:
url = urlparse.urljoin(item.url, scrapedurl)
@@ -70,6 +71,7 @@ def lista(item):
plot = ""
itemlist.append(Item(channel=item.channel, action="findvideos", title=title, url=url, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle=title))
# <li class='active'><a class=''>1</a></li><li><a rel='nofollow' class='page larger' href='https://pandamovies.pw/movies/page/2'>
next_page = scrapertools.find_single_match(data, '<li class=\'active\'>.*?href=\'([^\']+)\'>')
if next_page == "":
next_page = scrapertools.find_single_match(data, '<a.*?href="([^"]+)" >Next &raquo;</a>')
@@ -77,34 +79,3 @@ def lista(item):
next_page = urlparse.urljoin(item.url, next_page)
itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page))
return itemlist
def findvideos(item):
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|amp;|\s{2}|&nbsp;", "", data)
patron = '- on ([^"]+)" href="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedtitle,url in matches:
if 'aHR0' in url:
n = 3
while n > 0:
url= url.replace("https://vshares.tk/goto/", "").replace("https://waaws.tk/goto/", "").replace("https://openloads.tk/goto/", "")
logger.debug (url)
url = base64.b64decode(url)
n -= 1
if "mangovideo" in url: #Aparece como directo
data = httptools.downloadpage(url).data
patron = 'video_url: \'function/0/https://mangovideo.pw/get_file/(\d+)/\w+/(.*?)/\?embed=true\''
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedtitle,url in matches:
if scrapedtitle =="1": scrapedtitle= "https://www.mangovideo.pw/contents/videos/"
if scrapedtitle =="7": scrapedtitle= "https://server9.mangovideo.pw/contents/videos/"
if scrapedtitle =="8": scrapedtitle= "https://s10.mangovideo.pw/contents/videos/"
if scrapedtitle =="10": scrapedtitle= "https://server217.mangovideo.pw/contents/videos/"
if scrapedtitle =="11": scrapedtitle= "https://234.mangovideo.pw/contents/videos/"
url = scrapedtitle + url
itemlist.append( Item(channel=item.channel, action="play", title = "%s", url=url ))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
return itemlist
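The `findvideos()` loop above strips known `*.tk/goto/` prefixes and base64-decodes the remainder up to three times. A standalone Python 3 sketch of that unwrapping (the prefix list is taken from the code above; the depth of 3 matches its `n = 3` counter):

```python
import base64

def unwrap_goto(url, max_depth=3):
    # The goto links wrap the real URL in up to three layers of
    # base64; strip known redirect prefixes, then decode while the
    # payload still looks like base64-encoded "http..." ("aHR0").
    prefixes = ("https://vshares.tk/goto/", "https://waaws.tk/goto/",
                "https://openloads.tk/goto/")
    for _ in range(max_depth):
        for p in prefixes:
            url = url.replace(p, "")
        if not url.startswith("aHR0"):  # base64 of "http"
            break
        url = base64.b64decode(url).decode("utf-8")
    return url

wrapped = "https://vshares.tk/goto/" + base64.b64encode(b"https://example.com/v/1").decode()
# unwrap_goto(wrapped) -> "https://example.com/v/1"
```

Checking the `aHR0` marker before decoding (as the original's `if 'aHR0' in url` does) avoids feeding an already-plain URL to the decoder.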

View File

@@ -10,5 +10,13 @@
"adult"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
}
]
}

View File

@@ -1,85 +1,63 @@
# -*- coding: utf-8 -*-
import urlparse
import re
from platformcode import config, logger
from core import httptools
from core import scrapertools
from core import servertools
from platformcode import logger
from platformcode import config
host = 'http://www.pelisxporno.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="lista", title="Novedades", url= host + "/?order=date"))
itemlist.append(item.clone(action="categorias", title="Categorías", url=host + "/categorias/"))
itemlist.append(item.clone(action="search", title="Buscar"))
itemlist.append(item.clone(action="lista", title="Novedades", url="http://www.pelisxporno.com/?order=date"))
itemlist.append(item.clone(action="categorias", title="Categorías", url="http://www.pelisxporno.com/categorias/"))
itemlist.append(item.clone(action="search", title="Buscar", url="http://www.pelisxporno.com/?s=%s"))
return itemlist
def search(item, texto):
logger.info("")
texto = texto.replace(" ", "+")
item.url = host + "/?s=%s" % texto
try:
return lista(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def search(item, texto):
logger.info()
item.url = item.url % texto
return lista(item)
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<li class="cat-item cat-item-.*?"><a href="(.*?)".*?>(.*?)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle in matches:
itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl))
return itemlist
def lista(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url).data
# Extract the entries (folders)
patron = '<div class="video.".*?<a href="(.*?)" title="(.*?)">.*?<img src="(.*?)".*?\/>.*?duration.*?>(.*?)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, duration in matches:
if duration:
scrapedtitle = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
scrapedtitle += " (%s)" % duration
itemlist.append(item.clone(action="findvideos", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
fanart=scrapedthumbnail))
# Extract the next-page marker
next_page = scrapertools.find_single_match(data, '<a class="nextpostslink" rel="next" href="([^"]+)"')
if next_page:
itemlist.append(item.clone(action="lista", title=">> Página Siguiente", url=next_page))
return itemlist
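The `lista()` above appends a "Página Siguiente" item only when a WordPress `nextpostslink` anchor is present. A small Python 3 sketch of that next-page probe (returning `""` when absent, as `find_single_match` does):

```python
import re

def find_next_page(data):
    # Mirrors the lista() pattern above: grab the href of the
    # WordPress "nextpostslink" anchor, or "" when there is none.
    m = re.search(r'<a class="nextpostslink" rel="next" href="([^"]+)"', data)
    return m.group(1) if m else ""

page = '<a class="nextpostslink" rel="next" href="https://example.com/page/2">&raquo;</a>'
# find_next_page(page) -> "https://example.com/page/2"
```

Returning the empty string rather than `None` lets callers keep the idiomatic `if next_page:` guard seen throughout these channels.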
def findvideos(item):
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = scrapertools.find_single_match(data, '<div class="video_code">(.*?)<h3')
patron = '(?:src|SRC)="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl in matches:
if not 'mixdrop' in scrapedurl: # the base64 one is netu.tv
url = "https://hqq.tv/player/embed_player.php?vid=RODE5Z2Hx3hO&autoplay=none"
else:
url = "https:" + scrapedurl
headers = {'Referer': item.url}
data = httptools.downloadpage(url, headers=headers).data
url = scrapertools.find_single_match(data, 'vsrc = "([^"]+)"')
url= "https:" + url
itemlist.append(item.clone(action="play", title = "%s", url=url ))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
return itemlist
# Download the page
data = httptools.downloadpage(item.url).data
# Extract the entries (folders)
patron = '<li class="cat-item cat-item-.*?"><a href="(.*?)".*?>(.*?)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle in matches:
itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl))
return itemlist

View File

@@ -1,12 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://www.perfectgirls.net'

View File

@@ -10,5 +10,13 @@
"adult"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": false
}
]
}

View File

@@ -1,17 +1,18 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import httptools
from core import servertools
from core import scrapertools
from core.item import Item
from platformcode import logger
import base64
from platformcode import config
host = "https://watchfreexxx.net/"
def mainlist(item):
itemlist = []
@@ -21,17 +22,47 @@ def mainlist(item):
itemlist.append(Item(channel=item.channel, title="Escenas", action="lista",
url = urlparse.urljoin(host, "category/xxx-scenes/")))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=host+'?s=',
itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=host + '/?s=',
thumbnail='https://s30.postimg.cc/pei7txpa9/buscar.png',
fanart='https://s30.postimg.cc/pei7txpa9/buscar.png'))
return itemlist
def lista(item):
logger.info()
itemlist = []
if item.url == '': item.url = host
data = httptools.downloadpage(item.url).data
data = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)
patron = '<article id=.*?<a href="([^"]+)".*?<img data-src="([^"]+)" alt="([^"]+)"'
matches = re.compile(patron, re.DOTALL).findall(data)
for data_1, data_2, data_3 in matches:
url = data_1
thumbnail = data_2
title = data_3
itemlist.append(Item(channel=item.channel, action='findvideos', title=title, url=url, thumbnail=thumbnail))
# Pagination
if itemlist != []:
actual_page_url = item.url
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)">Next</a>')
if next_page != '':
itemlist.append(Item(channel=item.channel, action="lista", title='Siguiente >>>', url=next_page,
thumbnail='https://s16.postimg.cc/9okdu7hhx/siguiente.png', extra=item.extra))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = item.url + texto
try:
if texto != '':
item.extra = 'Buscar'
@@ -43,58 +74,3 @@ def search(item, texto):
for line in sys.exc_info():
logger.error("%s" % line)
return []
def lista(item):
logger.info()
itemlist = []
if item.url == '': item.url = host
data = httptools.downloadpage(item.url).data
data = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)
patron = '<article id=.*?<a href="([^"]+)".*?<img data-src="([^"]+)" alt="([^"]+)"'
matches = re.compile(patron, re.DOTALL).findall(data)
for data_1, data_2, data_3 in matches:
url = data_1
thumbnail = data_2
title = data_3
itemlist.append(Item(channel=item.channel, action='findvideos', title=title, url=url, thumbnail=thumbnail))
# Pagination
if itemlist != []:
actual_page_url = item.url
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)">Next</a>')
if next_page != '':
itemlist.append(Item(channel=item.channel, action="lista", title='Siguiente >>>', url=next_page,
thumbnail='https://s16.postimg.cc/9okdu7hhx/siguiente.png', extra=item.extra))
return itemlist
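Several hunks join scraped hrefs against the page URL with `urlparse.urljoin` (Python 2). On Python 3 the same function lives in `urllib.parse`; a short sketch of the two cases these channels rely on:

```python
from urllib.parse import urljoin

# Absolute URLs pass through unchanged; root-relative paths are
# resolved against the scheme and host of the base page.
base = "https://example.com/videos/page/1"
relative = urljoin(base, "/tags/new/")            # resolved against base
absolute = urljoin(base, "https://other.com/x")   # left untouched
```

This is why the code can call `urljoin` unconditionally on `scrapedurl` without first checking whether the site emitted absolute or relative links.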
def findvideos(item):
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|amp;|\s{2}|&nbsp;", "", data)
patron = '- on ([^"]+)" href="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedtitle,url in matches:
if "tk/goto/" in url:
n = 3
while n > 0:
url= url.replace("https://vshares.tk/goto/", "").replace("https://waaws.tk/goto/", "").replace("https://openloads.tk/goto/", "")
logger.debug (url)
url = base64.b64decode(url)
n -= 1
if "mangovideo" in url: #Aparece como directo
data = httptools.downloadpage(url).data
patron = 'video_url: \'function/0/https://mangovideo.pw/get_file/(\d+)/\w+/(.*?)/\?embed=true\''
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedtitle,url in matches:
if scrapedtitle =="1": scrapedtitle= "https://www.mangovideo.pw/contents/videos/"
if scrapedtitle =="7": scrapedtitle= "https://server9.mangovideo.pw/contents/videos/"
if scrapedtitle =="8": scrapedtitle= "https://s10.mangovideo.pw/contents/videos/"
if scrapedtitle =="10": scrapedtitle= "https://server217.mangovideo.pw/contents/videos/"
if scrapedtitle =="11": scrapedtitle= "https://234.mangovideo.pw/contents/videos/"
url = scrapedtitle + url
itemlist.append(item.clone(action="play", title = "%s", url=url ))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
return itemlist
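The chain of `if scrapedtitle == "1": ...` statements above maps a mangovideo file-server id to its CDN host. A dict lookup expresses the same table (hostnames copied from the code above; the fallback to the main host is an assumption, the original simply leaves unknown ids unmapped):

```python
# Server-id -> CDN host table from findvideos() above.
MANGO_SERVERS = {
    "1": "https://www.mangovideo.pw/contents/videos/",
    "7": "https://server9.mangovideo.pw/contents/videos/",
    "8": "https://s10.mangovideo.pw/contents/videos/",
    "10": "https://server217.mangovideo.pw/contents/videos/",
    "11": "https://234.mangovideo.pw/contents/videos/",
}

def build_video_url(server_id, path):
    # Assumed fallback: unknown ids default to the main host.
    base = MANGO_SERVERS.get(server_id, MANGO_SERVERS["1"])
    return base + path
```

A dict keeps the mapping in one place, so adding server id "12" is a one-line change instead of another `if` branch duplicated in two channels.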

View File

@@ -1,31 +1,35 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.porn300.com'
# ANTIVIRUS BLOCK ON STREAMCLOUD
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Nuevas" , action="lista", url=host + "/en_US/ajax/page/list_videos/?page=1"))
itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host + "/channels/?page=1"))
itemlist.append( Item(channel=item.channel, title="Pornstars" , action="categorias", url=host + "/pornstars/?page=1"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/?page=1"))
itemlist.append( Item(channel=item.channel, title="Nuevas" , action="lista", url=host + "/es/videos/"))
itemlist.append( Item(channel=item.channel, title="Mas Vistas" , action="lista", url=host + "/es/mas-vistos/"))
itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/es/mas-votados/"))
itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host + "/es/canales/?page=1"))
itemlist.append( Item(channel=item.channel, title="Pornstars" , action="categorias", url=host + "/es/pornostars/?page=1"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/es/categorias/"))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
# view-source:https://www.porn300.com/en_US/ajax/page/show_search?q=big+tit&page=1
# https://www.porn300.com/en_US/ajax/page/show_search?page=2
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = host + "/en_US/ajax/page/show_search?q=%s&?page=1" % texto
item.url = host + "/es/buscar/?q=%s" % texto
try:
return lista(item)
except:
@@ -40,18 +44,20 @@ def categorias(item):
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
patron = '<a itemprop="url" href="/([^"]+)".*?'
patron += 'data-src="([^"]+)" alt=.*?'
patron += 'itemprop="name">([^<]+)</h3>.*?'
patron += '</svg>([^<]+)<'
patron = '<a itemprop="url" href="([^"]+)".*?'
patron += 'title="([^"]+)">.*?'
if "/pornostars/" in item.url:
patron += '<img itemprop="image" src=([^"]+) alt=.*?'
patron += '</svg>([^<]+)<'
else:
patron += '<img itemprop="image" src="([^"]+)" alt=.*?'
patron += '</svg>([^<]+)<'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,scrapedtitle,cantidad in matches:
for scrapedurl,scrapedtitle,scrapedthumbnail,cantidad in matches:
scrapedplot = ""
cantidad = re.compile("\s+", re.DOTALL).sub(" ", cantidad)
scrapedtitle = scrapedtitle + " (" + cantidad +")"
scrapedurl = scrapedurl.replace("channel/", "producer/")
scrapedurl = "/en_US/ajax/page/show_" + scrapedurl + "?page=1"
scrapedurl = urlparse.urljoin(item.url,scrapedurl)
scrapedurl = urlparse.urljoin(item.url,scrapedurl) + "/?sort=latest"
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot) )
next_page = scrapertools.find_single_match(data,'<link rel="next" href="([^"]+)" />')
@@ -69,29 +75,22 @@ def lista(item):
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
patron = '<a itemprop="url" href="([^"]+)".*?'
patron += 'data-src="([^"]+)".*?'
patron += 'itemprop="name">([^<]+)<.*?'
patron = '<a itemprop="url" href="([^"]+)" data-video-id="\d+" title="([^"]+)">.*?'
patron += '<img itemprop="thumbnailUrl" src="([^"]+)".*?'
patron += '</svg>([^<]+)<'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,scrapedtitle,scrapedtime in matches:
for scrapedurl,scrapedtitle,scrapedthumbnail,cantidad in matches:
url = urlparse.urljoin(item.url,scrapedurl)
scrapedtime = scrapedtime.strip()
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
cantidad = re.compile("\s+", re.DOTALL).sub(" ", cantidad)
title = "[COLOR yellow]" + cantidad + "[/COLOR] " + scrapedtitle
contentTitle = title
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play" , title=title , url=url, thumbnail=thumbnail,
fanart=thumbnail, plot=plot, contentTitle = contentTitle) )
prev_page = scrapertools.find_single_match(item.url,"(.*?)page=\d+")
num= int(scrapertools.find_single_match(item.url,".*?page=(\d+)"))
num += 1
num_page = "?page=" + str(num)
if num_page!="":
next_page = urlparse.urljoin(item.url,num_page)
if "show_search" in next_page:
next_page = prev_page + num_page
next_page = next_page.replace("&?", "&")
next_page = scrapertools.find_single_match(data,'<link rel="next" href="([^"]+)" />')
if next_page!="":
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page) )
return itemlist
@@ -102,6 +101,6 @@ def play(item):
patron = '<source src="([^"]+)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for url in matches:
itemlist.append(item.clone(action="play", title=url, url=url))
itemlist.append(item.clone(action="play", title=url, fulltitle = item.title, url=url))
return itemlist

View File

@@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from core import jsontools as json
import re
from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://pornboss.org'
@@ -15,11 +15,13 @@ def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Peliculas" , action="lista", url=host + "/category/movies/"))
itemlist.append( Item(channel=item.channel, title=" categorias" , action="categorias", url=host + "/category/movies/"))
itemlist.append( Item(channel=item.channel, title="Videos" , action="lista", url=host + "/category/clips/"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
itemlist.append( Item(channel=item.channel, title=" categorias" , action="categorias", url=host + "/category/clips/"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
@@ -38,9 +40,13 @@ def categorias(item):
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
data = scrapertools.find_single_match(data,'<div class="uk-panel uk-panel-box widget_nav_menu">(.*?)</ul>')
patron = '<li><a href=(.*?) class>([^<]+)</a>'
if "/category/movies/" in item.url:
data = scrapertools.find_single_match(data,'>Movies</a>(.*?)</ul>')
else:
data = scrapertools.find_single_match(data,'>Clips</a>(.*?)</ul>')
patron = '<a href=([^"]+)>([^"]+)</a>'
matches = re.compile(patron,re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for scrapedurl,scrapedtitle in matches:
scrapedplot = ""
scrapedthumbnail = ""
@@ -53,39 +59,29 @@ def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<article id=item-\d+.*?'
patron += '<img class=.*?src=(.*?) alt="([^"]+)".*?'
patron += 'Duration:</strong>(.*?) / <strong>.*?'
patron += '>SHOW<.*?href=([^"]+) target='
patron = '<article id=post-\d+.*?'
patron += '<img class="center cover" src=([^"]+) alt="([^"]+)".*?'
patron += '<blockquote><p> <a href=(.*?) target=_blank'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedthumbnail,scrapedtitle,duration,scrapedurl in matches:
scrapertools.printMatches(matches)
for scrapedthumbnail,scrapedtitle,scrapedurl in matches:
scrapedplot = ""
title = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
itemlist.append( Item(channel=item.channel, action="play", title=scrapedtitle, url=scrapedurl,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot) )
next_page = scrapertools.find_single_match(data,'<li><a href=([^<]+)><i class=uk-icon-angle-double-right>')
next_page = next_page.replace('"', '')
next_page = scrapertools.find_single_match(data,'<a class=nextpostslink rel=next href=(.*?)>')
if next_page!="":
itemlist.append(item.clone(action="lista", title="Página Siguiente >>", text_color="blue", url=next_page) )
return itemlist
def play(item):
logger.info()
itemlist = []
if "streamcloud" in item.url:
itemlist.append(item.clone(action="play", url=item.url ))
else:
data = httptools.downloadpage(item.url).data
url=scrapertools.find_single_match(data,'<span class="bottext">Streamcloud.eu</span>.*?href="([^"]+)"')
url= "https://tolink.to" + url
data = httptools.downloadpage(url).data
patron = '<input type="hidden" name="id" value="([^"]+)">.*?'
patron += '<input type="hidden" name="fname" value="([^"]+)">'
matches = re.compile(patron,re.DOTALL).findall(data)
for id, url in matches:
url= "http://streamcloud.eu/" + id
itemlist.append(item.clone(action="play", url=url ))
itemlist = servertools.get_servers_itemlist(itemlist)
data = httptools.downloadpage(item.url).data
itemlist = servertools.find_video_items(data=item.url)
for videoitem in itemlist:
videoitem.title = item.title
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
return itemlist

View File

@@ -1,15 +0,0 @@
{
"id": "porndish",
"name": "porndish",
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "https://www.porndish.com/wp-content/uploads/2015/09/logo.png",
"banner": "",
"categories": [
"adult"
],
"settings": [
]
}

View File

@@ -1,78 +0,0 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
host = 'https://www.porndish.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host))
itemlist.append( Item(channel=item.channel, title="Canal" , action="categorias", url=host))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = host + "/?s=%s" % texto
try:
return lista(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<li id="menu-item-\d+".*?'
patron += '<a href="([^"]+)">([^<]+)<'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle in matches:
scrapedplot = ""
scrapedurl = urlparse.urljoin(item.url,scrapedurl)
scrapedthumbnail = ""
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail , plot=scrapedplot) )
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
data = scrapertools.find_single_match(data, 'archive-body">(.*?)<div class="g1-row g1-row-layout-page g1-prefooter">')
patron = '<article class=.*?'
patron += 'src="([^"]+)".*?'
patron += 'title="([^"]+)".*?'
patron += '<a href="([^"]+)" rel="bookmark">'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedthumbnail,scrapedtitle,scrapedurl in matches:
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="findvideos", title=scrapedtitle, url=scrapedurl,
fanart=thumbnail, thumbnail=thumbnail, plot=plot, contentTitle = scrapedtitle))
next_page = scrapertools.find_single_match(data, '<a class="g1-delta g1-delta-1st next" href="([^"]+)">Next</a>')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist

View File

@@ -1,16 +1,12 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
import urlparse
import urllib2
import urllib
import re
import os
import sys
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
host = 'http://porneq.com'
@@ -21,7 +17,7 @@ def mainlist(item):
itemlist.append(Item(channel=item.channel, title="Ultimos", action="lista", url=host + "/videos/browse/"))
itemlist.append(Item(channel=item.channel, title="Mas Vistos", action="lista", url=host + "/videos/most-viewed/"))
itemlist.append(Item(channel=item.channel, title="Mas Votado", action="lista", url=host + "/videos/most-liked/"))
itemlist.append(Item(channel=item.channel, title="Big Tits", action="lista", url=host + "/show/big+tit"))
itemlist.append(Item(channel=item.channel, title="Big Tits", action="lista", url=host + "/show/big+tits&sort=w"))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
@@ -50,7 +46,6 @@ def lista(item):
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedtitle, scrapedurl, scrapedthumbnail, scrapedtime in matches:
scrapedplot = ""
scrapedthumbnail = scrapedthumbnail.replace("https:", "http:")
scrapedtitle = "[COLOR yellow]" + (scrapedtime) + "[/COLOR] " + scrapedtitle
itemlist.append(Item(channel=item.channel, action="play", title=scrapedtitle, url=scrapedurl,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail, plot=scrapedplot))
@@ -65,8 +60,7 @@ def play(item):
itemlist = []
data = httptools.downloadpage(item.url).data
scrapedurl = scrapertools.find_single_match(data, '<source src="([^"]+)"')
scrapedurl = scrapedurl.replace("X20", "-")
itemlist.append(
Item(channel=item.channel, action="play", title=item.title, url=scrapedurl,
Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.title, server="directo", folder=False))
return itemlist

View File

@@ -1,18 +1,17 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
import base64
import re
from core import httptools
from core import scrapertools
from core import servertools
from core.item import Item
from platformcode import config, logger
from core import httptools
from platformcode import logger
from platformcode import config
host = 'http://www.pornhive.tv/en'
# Some links are down
def mainlist(item):
logger.info()

View File

@@ -11,5 +11,13 @@
"adult"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

View File

@@ -1,18 +1,20 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import httptools
from core import servertools
from core import scrapertools
from core.item import Item
from platformcode import logger
from platformcode import config
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, action="lista", title="Novedades", fanart=item.fanart,
itemlist.append(Item(channel=item.channel, action="peliculas", title="Novedades", fanart=item.fanart,
url="http://es.pornhub.com/video?o=cm"))
itemlist.append(Item(channel=item.channel, action="categorias", title="Categorias", fanart=item.fanart,
url="http://es.pornhub.com/categories"))
@@ -26,7 +28,8 @@ def search(item, texto):
item.url = item.url % texto
try:
return lista(item)
return peliculas(item)
# Catch the exception so the global search is not interrupted when one channel fails
except:
import sys
for line in sys.exc_info():
@@ -50,13 +53,13 @@ def categorias(item):
else:
url = urlparse.urljoin(item.url, scrapedurl + "?o=cm")
scrapedtitle = scrapedtitle + " (" + cantidad + ")"
itemlist.append(Item(channel=item.channel, action="lista", title=scrapedtitle, url=url,
itemlist.append(Item(channel=item.channel, action="peliculas", title=scrapedtitle, url=url,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail))
itemlist.sort(key=lambda x: x.title)
return itemlist
def lista(item):
def peliculas(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
@@ -67,11 +70,10 @@ def lista(item):
patron += '<var class="duration">([^<]+)</var>(.*?)</div>'
matches = re.compile(patron, re.DOTALL).findall(videodata)
for url, scrapedtitle, thumbnail, duration, scrapedhd in matches:
title = "(" + duration + ") " + scrapedtitle.replace("&amp;amp;", "&amp;")
scrapedhd = scrapertools.find_single_match(scrapedhd, '<span class="hd-thumbnail">(.*?)</span>')
if scrapedhd == 'HD':
title = "[COLOR yellow]" +duration+ "[/COLOR] " + "[COLOR red]" +scrapedhd+ "[/COLOR] "+scrapedtitle
else:
title = "[COLOR yellow]" + duration + "[/COLOR] " + scrapedtitle
if scrapedhd == 'HD':
title += ' [HD]'
url = urlparse.urljoin(item.url, url)
itemlist.append(
Item(channel=item.channel, action="play", title=title, url=url, fanart=thumbnail, thumbnail=thumbnail))
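The title built just above combines the duration and an HD flag using Kodi's `[COLOR]` label markup. A small helper sketch of that formatting (names are illustrative, not part of the original code):

```python
def format_title(title, duration="", hd=False):
    # Mirrors the Kodi label markup used by the channels above:
    # yellow duration, red HD badge, then the plain title.
    parts = []
    if duration:
        parts.append("[COLOR yellow]%s[/COLOR]" % duration.strip())
    if hd:
        parts.append("[COLOR red]HD[/COLOR]")
    parts.append(title)
    return " ".join(parts)

# format_title("Some clip", " 12:34 ", hd=True)
# -> "[COLOR yellow]12:34[/COLOR] [COLOR red]HD[/COLOR] Some clip"
```

Stripping the duration first matches the `scrapedtime.strip()` call in the removed branch of this hunk.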
@@ -82,12 +84,19 @@ def lista(item):
if matches:
url = urlparse.urljoin(item.url, matches[0].replace('&amp;', '&'))
itemlist.append(
Item(channel=item.channel, action="lista", title=">> Página siguiente", fanart=item.fanart,
Item(channel=item.channel, action="peliculas", title=">> Página siguiente", fanart=item.fanart,
url=url))
return itemlist
def play(item):
logger.info(item)
itemlist = servertools.find_video_items(item.clone(url = item.url))
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '"defaultQuality":true,"format":"mp4","quality":"\d+","videoUrl":"(.*?)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl in matches:
url = scrapedurl.replace("\/", "/")
itemlist.append(item.clone(action="play", title=url, fulltitle = item.title, url=url))
return itemlist
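The new `play()` above pulls `videoUrl` out of the page's inline JSON and undoes the `\/` escaping with a string replace. A Python 3 sketch using the `json` module instead, which reverses every JSON escape, not just the slashes (the regex and sample payload are illustrative):

```python
import json
import re

def extract_video_url(data):
    # Capture the JSON string literal after "videoUrl": and let
    # json.loads handle \/ , \" and unicode escapes uniformly.
    m = re.search(r'"videoUrl":("(?:[^"\\]|\\.)*")', data)
    return json.loads(m.group(1)) if m else ""

page = '{"defaultQuality":true,"format":"mp4","videoUrl":"https:\\/\\/cdn.example.com\\/v.mp4"}'
# extract_video_url(page) -> "https://cdn.example.com/v.mp4"
```

This avoids the subtle bug class where `replace("\\/", "/")` works for slashes but silently leaves other escapes (such as `\"` or `\uXXXX`) in the URL.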

View File

@@ -1,15 +0,0 @@
{
"id": "pornohdmega",
"name": "pornohdmega",
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "https://www.pornohdmega.com/wp-content/uploads/2018/11/dftyu.png",
"banner": "",
"categories": [
"adult"
],
"settings": [
]
}

View File

@@ -1,108 +0,0 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
host = 'https://www.pornohdmega.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/?order=recent"))
itemlist.append( Item(channel=item.channel, title="Mejor valorados" , action="lista", url=host + "/?order=top-rated"))
itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/?order=most-viewed"))
itemlist.append( Item(channel=item.channel, title="Canal" , action="catalogo", url=host + "/categories/"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = host + "/?s=%s" % texto
try:
return lista(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<li><a href=\'([^\']+)\' title=\'([^\']+) Tag\'>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle in matches:
scrapedplot = ""
if not "tag" in scrapedurl:
scrapedurl = ""
thumbnail = ""
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=thumbnail , plot=scrapedplot) )
return itemlist
def catalogo(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<h2><a href="([^"]+)">([^<]+)</a></h2>.*?'
patron += '<strong>(\d+) Videos</strong>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,cantidad in matches:
scrapedplot = ""
scrapedtitle = "%s (%s)" % (scrapedtitle,cantidad)
thumbnail = ""
itemlist.append( Item(channel=item.channel, action="lista", title=scrapedtitle, url=scrapedurl,
thumbnail=thumbnail , plot=scrapedplot) )
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<figure class="video-preview"><a href="([^"]+)".*?'
patron += '<img src="([^"]+)".*?'
patron += 'title="([^"]+)"'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,scrapedtitle in matches:
title = scrapedtitle
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl, thumbnail=thumbnail,
fanart=thumbnail, plot=plot,))
next_page = scrapertools.find_single_match(data, '<a class="nextpostslink" rel="next" href="([^"]+)"')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist
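The pagination step above resolves the scraped `nextpostslink` href against the current page URL with `urlparse.urljoin`. A minimal standalone sketch (the base URL and href values here are hypothetical):

```python
# urlparse.urljoin (Python 2) moved to urllib.parse.urljoin in Python 3
try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin      # Python 2, as the channel code uses

base = "https://www.pornohdmega.com/?order=recent"  # current listing page (hypothetical)
next_href = "/page/2/?order=recent"                 # href scraped from the "nextpostslink" anchor
next_page = urljoin(base, next_href)
# next_page -> "https://www.pornohdmega.com/page/2/?order=recent"
```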
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
patron = '<iframe src="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for url in matches:
itemlist.append(item.clone(action="play", title= "%s", url=url))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
return itemlist
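The play() above boils down to a single iframe regex over the downloaded page, with each embed URL then handed to servertools. As a sketch, with hypothetical markup standing in for `httptools.downloadpage()`:

```python
import re

# Hypothetical page fragment; the real HTML comes from httptools.downloadpage(item.url).data
data = '<div class="player"><iframe src="https://embed.example.com/v/abc123" frameborder="0"></iframe></div>'

# Same pattern the channel uses: capture every iframe source on the page
patron = '<iframe src="([^"]+)"'
embed_urls = re.findall(patron, data)
# embed_urls -> ['https://embed.example.com/v/abc123']
```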


@@ -1,17 +1,16 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
import re
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
from platformcode import logger
from platformcode import config
host = 'https://www.pornrewind.com'
# TODO: get the KT Player connector working
def mainlist(item):
logger.info()
itemlist = []


@@ -2,12 +2,14 @@
import re
import urllib
import urlparse
from core import httptools
from core import scrapertools
from core.item import Item
from platformcode import config, logger
from platformcode import config
host = "https://www.porntrex.com"
perpage = 20
@@ -25,7 +27,6 @@ def mainlist(item):
itemlist.append(item.clone(action="categorias", title="Modelos",
url=host + "/models/?mode=async&function=get_block&block_id=list_models_models" \
"_list&sort_by=total_videos"))
itemlist.append(item.clone(action="categorias", title="Canal", url=host + "/channels/"))
itemlist.append(item.clone(action="playlists", title="Listas", url=host + "/playlists/"))
itemlist.append(item.clone(action="tags", title="Tags", url=host + "/tags/"))
itemlist.append(item.clone(title="Buscar...", action="search"))
@@ -58,14 +59,15 @@ def search(item, texto):
def lista(item):
logger.info()
itemlist = []
# Download the page
data = get_data(item.url)
action = "play"
if config.get_setting("menu_info", "porntrex"):
action = "menu_info"
# Skip private entries (<div class="video-preview-screen video-item thumb-item private ")
patron = '<div class="video-preview-screen video-item thumb-item ".*?'
patron += '<a href="([^"]+)".*?'
# Skip private entries
patron = '<div class="video-preview-screen video-item thumb-item ".*?<a href="([^"]+)".*?'
patron += 'data-src="([^"]+)".*?'
patron += 'alt="([^"]+)".*?'
patron += '<span class="quality">(.*?)<.*?'
@@ -118,25 +120,21 @@ def lista(item):
def categorias(item):
logger.info()
itemlist = []
# Download the page
data = get_data(item.url)
# Extract the entries
if "/channels/" in item.url:
patron = '<div class="video-item ">.*?<a href="([^"]+)" title="([^"]+)".*?src="([^"]+)".*?<li>([^<]+)<'
else:
patron = '<a class="item" href="([^"]+)" title="([^"]+)".*?src="([^"]+)".*?<div class="videos">([^<]+)<'
patron = '<a class="item" href="([^"]+)" title="([^"]+)".*?src="([^"]+)".*?<div class="videos">([^<]+)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, videos in matches:
if "go.php?" in scrapedurl:
scrapedurl = urllib.unquote(scrapedurl.split("/go.php?u=")[1].split("&")[0])
scrapedthumbnail = urllib.unquote(scrapedthumbnail.split("/go.php?u=")[1].split("&")[0])
scrapedthumbnail += "|Referer=https://www.porntrex.com/"
else:
scrapedurl = urlparse.urljoin(host, scrapedurl)
if not scrapedthumbnail.startswith("https"):
scrapedthumbnail = "https:%s" % scrapedthumbnail
scrapedthumbnail += "|Referer=https://www.porntrex.com/"
scrapedthumbnail = scrapedthumbnail.replace(" " , "%20")
if videos:
scrapedtitle = "%s (%s)" % (scrapedtitle, videos)
itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
@@ -160,10 +158,7 @@ def playlists(item):
# Download the page
data = get_data(item.url)
# Extract the entries
patron = '<div class="item.*?'
patron += 'href="([^"]+)" title="([^"]+)".*?'
patron += 'data-original="([^"]+)".*?'
patron += '<div class="totalplaylist">([^<]+)<'
patron = '<div class="item.*?href="([^"]+)" title="([^"]+)".*?data-original="([^"]+)".*?<div class="totalplaylist">([^<]+)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, videos in matches:
if "go.php?" in scrapedurl:
@@ -173,8 +168,6 @@ def playlists(item):
scrapedurl = urlparse.urljoin(host, scrapedurl)
if not scrapedthumbnail.startswith("https"):
scrapedthumbnail = "https:%s" % scrapedthumbnail
scrapedthumbnail += "|Referer=https://www.porntrex.com/"
scrapedthumbnail = scrapedthumbnail.replace(" " , "%20")
if videos:
scrapedtitle = "%s [COLOR red](%s)[/COLOR]" % (scrapedtitle, videos)
itemlist.append(item.clone(action="videos", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
@@ -202,12 +195,7 @@ def videos(item):
if config.get_setting("menu_info", "porntrex"):
action = "menu_info"
# Extract the entries
# Skip private entries (<div class="video-item private ">)
patron = '<div class="video-item ".*?'
patron += 'href="([^"]+)".*?'
patron += 'title="([^"]+)".*?'
patron += 'src="([^"]+)"(.*?)<div class="durations">.*?'
patron += '</i>([^<]+)</div>'
patron = '<div class="video-item.*?href="([^"]+)".*?title="([^"]+)".*?src="([^"]+)"(.*?)<div class="durations">.*?</i>([^<]+)</div>'
matches = scrapertools.find_multiple_matches(data, patron)
count = 0
for scrapedurl, scrapedtitle, scrapedthumbnail, quality, duration in matches:
@@ -221,51 +209,60 @@ def videos(item):
scrapedurl = urlparse.urljoin(host, scrapedurl)
if not scrapedthumbnail.startswith("https"):
scrapedthumbnail = "https:%s" % scrapedthumbnail
scrapedthumbnail += "|Referer=https://www.porntrex.com/"
scrapedthumbnail = scrapedthumbnail.replace(" " , "%20")
if 'k4"' in quality:
quality = "4K"
scrapedtitle = "%s - [COLOR yellow]%s[/COLOR] %s" % (duration, quality, scrapedtitle)
else:
quality = scrapertools.find_single_match(quality, '<span class="quality">(.*?)<.*?')
scrapedtitle = "%s - [COLOR red]%s[/COLOR] %s" % (duration, quality, scrapedtitle)
if duration:
scrapedtitle = "%s - %s" % (duration, scrapedtitle)
if '>HD<' in quality:
scrapedtitle += " [COLOR red][HD][/COLOR]"
if len(itemlist) >= perpage:
break
itemlist.append(item.clone(action=action, title=scrapedtitle, url=scrapedurl, contentThumbnail=scrapedthumbnail,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail))
itemlist.append(item.clone(action=action, title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail, contentThumbnail=scrapedthumbnail,
fanart=scrapedthumbnail))
# Extract the next-page marker
if item.channel and len(itemlist) >= perpage:
itemlist.append( item.clone(title = "Página siguiente >>>", indexp = count + 1) )
return itemlist
def play(item):
logger.info()
itemlist = []
data = get_data(item.url)
patron = '(?:video_url|video_alt_url[0-9]*):\s*\'([^\']+)\'.*?'
patron += '(?:video_url_text|video_alt_url[0-9]*_text):\s*\'([^\']+)\''
patron = '(?:video_url|video_alt_url[0-9]*)\s*:\s*\'([^\']+)\'.*?(?:video_url_text|video_alt_url[0-9]*_text)\s*:\s*\'([^\']+)\''
matches = scrapertools.find_multiple_matches(data, patron)
scrapertools.printMatches(matches)
if not matches:
patron = '<iframe.*?height="(\d+)".*?video_url\s*:\s*\'([^\']+)\''
matches = scrapertools.find_multiple_matches(data, patron)
for url, quality in matches:
quality = quality.replace(" HD" , "").replace(" 4k", "")
if "https" in quality:
calidad = url
url = quality
quality = calidad + "p"
itemlist.append(['.mp4 %s [directo]' % quality, url])
if item.extra == "play_menu":
return itemlist, data
return itemlist
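The porntrex play() regex pairs each KT Player `video_url`/`video_alt_urlN` entry with its matching `_text` quality label. A self-contained sketch of that pairing, run against a hypothetical config string:

```python
import re

# Hypothetical KT Player flashvars block; the real one sits in a <script> tag on the video page
data = ("video_url: 'https://cdn.example.com/v_480.mp4', video_url_text: '480p HD', "
        "video_alt_url: 'https://cdn.example.com/v_1080.mp4', video_alt_url_text: '1080p HD'")

# Same alternation the channel uses: URL first, then the nearest quality label
patron = (r"(?:video_url|video_alt_url[0-9]*)\s*:\s*'([^']+)'"
          r".*?(?:video_url_text|video_alt_url[0-9]*_text)\s*:\s*'([^']+)'")
matches = re.findall(patron, data, re.DOTALL)
# Strip the " HD" suffix the way the channel does before building '.mp4 %s [directo]' titles
streams = [(label.replace(" HD", ""), url) for url, label in matches]
# streams -> [('480p', 'https://cdn.example.com/v_480.mp4'),
#             ('1080p', 'https://cdn.example.com/v_1080.mp4')]
```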
def menu_info(item):
logger.info()
itemlist = []
video_urls, data = play(item.clone(extra="play_menu"))
itemlist.append(item.clone(action="play", title="Ver -- %s" % item.title, video_urls=video_urls))
matches = scrapertools.find_multiple_matches(data, '<img class="thumb lazy-load" src="([^"]+)"')
matches = scrapertools.find_multiple_matches(data, '<img class="thumb lazy-load".*?data-original="([^"]+)"')
for i, img in enumerate(matches):
if i == 0:
continue
img = "https:" + img + "|Referer=https://www.porntrex.com/"
img = urlparse.urljoin(host, img)
img += "|Referer=https://www.porntrex.com/"
title = "Imagen %s" % (str(i))
itemlist.append(item.clone(action="", title=title, thumbnail=img, fanart=img))
return itemlist
@@ -324,4 +321,5 @@ def get_data(url_orig):
post = ""
else:
break
return response.data


@@ -1,15 +0,0 @@
{
"id": "porntv",
"name": "porntv",
"active": true,
"adult": true,
"language": ["*"],
"thumbnail": "https://www.porntv.com/images/dart/logo.png",
"banner": "",
"categories": [
"adult"
],
"settings": [
]
}


@@ -1,104 +0,0 @@
# -*- coding: utf-8 -*-
#------------------------------------------------------------
import urlparse,urllib2,urllib,re
import os, sys
from platformcode import config, logger
from core import scrapertools
from core.item import Item
from core import servertools
from core import httptools
host = 'https://www.porntv.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append( Item(channel=item.channel, title="Nuevos" , action="lista", url=host + "/videos/straight/all-recent.html"))
itemlist.append( Item(channel=item.channel, title="Mas vistos" , action="lista", url=host + "/videos/straight/all-view.html"))
itemlist.append( Item(channel=item.channel, title="Mejor valorada" , action="lista", url=host + "/videos/straight/all-rate.html"))
itemlist.append( Item(channel=item.channel, title="Mas popular" , action="lista", url=host + "/videos/straight/all-popular.html"))
itemlist.append( Item(channel=item.channel, title="Mas largos" , action="lista", url=host + "/videos/straight/all-length.html"))
itemlist.append( Item(channel=item.channel, title="Categorias" , action="categorias", url=host + "/categories/"))
itemlist.append( Item(channel=item.channel, title="Buscar", action="search"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "")
item.url = host + "/videos/straight/%s-recent.html" % texto
try:
return lista(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
data = scrapertools.find_single_match(data, '<h1>Popular Categories</h1>(.*?)<h1>Community</h1>')
patron = '<h2><a href="([^"]+)">([^<]+)</a>.*?'
patron += 'src="([^"]+)".*?'
patron += '<span class="contentquantity">([^<]+)</span>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedtitle,scrapedthumbnail,cantidad in matches:
scrapedplot = ""
scrapedurl = urlparse.urljoin(item.url,scrapedurl)
title = scrapedtitle + " " + cantidad
itemlist.append( Item(channel=item.channel, action="lista", title=title, url=scrapedurl,
fanart=scrapedthumbnail, thumbnail=scrapedthumbnail , plot=scrapedplot) )
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>|<br/>", "", data)
patron = '<div class="item" style="width: 320px">.*?'
patron += '<a href="([^"]+)".*?'
patron += '<img src="([^"]+)".*?'
patron += '>(.*?)<div class="trailer".*?'
patron += 'title="([^"]+)".*?'
patron += 'clock"></use></svg>([^<]+)</span>'
matches = re.compile(patron,re.DOTALL).findall(data)
for scrapedurl,scrapedthumbnail,quality,scrapedtitle,scrapedtime in matches:
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + scrapedtitle
if "flag-hd" in quality:
title = "[COLOR yellow]" + scrapedtime + "[/COLOR] " + "[COLOR red]" + "HD" + "[/COLOR] " + scrapedtitle
scrapedurl = urlparse.urljoin(item.url,scrapedurl)
thumbnail = scrapedthumbnail
plot = ""
itemlist.append( Item(channel=item.channel, action="play", title=title, url=scrapedurl,
fanart=thumbnail, thumbnail=thumbnail, plot=plot, contentTitle = scrapedtitle))
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" class="next"')
if next_page:
next_page = urlparse.urljoin(item.url,next_page)
itemlist.append( Item(channel=item.channel, action="lista", title="Página Siguiente >>", text_color="blue",
url=next_page) )
return itemlist
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
data = scrapertools.find_single_match(data, 'sources: \[(.*?)\]')
patron = 'file: "([^"]+)",.*?label: "([^"]+)",'
matches = re.compile(patron,re.DOTALL).findall(data)
for url,quality in matches:
itemlist.append(["%s %s [directo]" % (quality, url), url])
return itemlist
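porntv's play() first narrows the page to the JW Player `sources: [...]` array, then pairs each `file` with its `label`. A standalone sketch over a hypothetical sources block:

```python
import re

# Hypothetical JW Player setup fragment; the real one is embedded in the video page
data = ('jwplayer("player").setup({ sources: [{file: "https://cdn.example.com/v_360.mp4", label: "360p",}, '
        '{file: "https://cdn.example.com/v_720.mp4", label: "720p",}] });')

# Narrow to the sources array, then extract each file/label pair, as the channel does
block = re.search(r'sources: \[(.*?)\]', data, re.DOTALL).group(1)
pairs = re.findall(r'file: "([^"]+)",.*?label: "([^"]+)",', block, re.DOTALL)
# pairs -> [('https://cdn.example.com/v_360.mp4', '360p'),
#           ('https://cdn.example.com/v_720.mp4', '720p')]
```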
