Whiplash
2019-11-07 21:56:58 +01:00
56 changed files with 1308 additions and 2118 deletions


@@ -0,0 +1,26 @@
---
name: Report a Channel Problem
about: Submit a report for a channel that is not working
title: 'Enter the name of the channel'
labels: Problema Canale
assignees: ''
---
**To write or attach files on this page you must:**
- click the [ ... ] at the top right of the tab
- choose Edit. From that moment on you can write and/or send files.

Enter the name of the channel
- The type of problem you ran into; be as thorough as possible. What action led to the error?
E.g. I cannot add movies to the library, either from the context menu or from the entry at the bottom of the server list.
If I run a global search, I cannot add movies/series/anime to the library or download the video.
- Attach the log file in full. Do not delete any part of it.


@@ -0,0 +1,19 @@
---
name: Report a Server Problem
about: Submit a report for a server that is not working
title: 'Enter the name of the server'
labels: Problema Server
assignees: ''
---
**To write or attach files on this page you must:**
- click the [ ... ] at the top right of the tab
- choose Edit. From that moment on you can write and/or send files.

Enter the name of the server showing problems and, if the problem is limited to a single channel, say which one
- Attach the log file in full. Do not delete any part of it.


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<addon id="plugin.video.kod" name="Kodi on Demand BETA" version="0.4.2" provider-name="KOD Team">
+<addon id="plugin.video.kod" name="Kodi on Demand BETA" version="0.5" provider-name="KOD Team">
<requires>
<import addon="xbmc.python" version="2.1.0"/>
<import addon="script.module.libtorrent" optional="true"/>
@@ -20,7 +20,12 @@
<screenshot>resources/media/themes/ss/2.png</screenshot>
<screenshot>resources/media/themes/ss/3.png</screenshot>
</assets>
<news>Welcome to KOD!</news>
<news>-Redefined the way channels are written, to ensure better stability, debuggability and consistency
-Rewrote many channels accordingly, in fact fixing a great many of the problems you reported
-When adding to the library from sources in multiple languages (ITA/Sub-ITA) or multiple qualities, you are now asked which kind of source you want.
-For anime lovers, added VVVVID (no account needed!)
-Added the supervideo and hdload servers, fixed wstream
-assorted improvements</news>
<description lang="it">Naviga velocemente sul web e guarda i contenuti presenti</description>
<disclaimer>[COLOR red]The owners and submitters to this addon do not host or distribute any of the content displayed by these addons nor do they have any affiliation with the content providers.[/COLOR]
[COLOR yellow]Kodi © is a registered trademark of the XBMC Foundation. We are not connected to or in any other way affiliated with Kodi, Team Kodi, or the XBMC Foundation. Furthermore, any software, addons, or products offered by us will receive no support in official Kodi channels, including the Kodi forums and various social networks.[/COLOR]</disclaimer>


@@ -1,55 +1,52 @@
{
"altadefinizione01": "https://www.altadefinizione01.cc",
"altadefinizione01_club": "https://www.altadefinizione01.cc",
"altadefinizione01_link": "http://altadefinizione01.gift",
"altadefinizioneclick": "https://altadefinizione.cloud",
-"altadefinizionehd": "https://altadefinizione.wtf",
"animeforce": "https://ww1.animeforce.org",
"animeleggendari": "https://animepertutti.com",
"animespace": "http://www.animespace.tv",
"animestream": "https://www.animeworld.it",
"animesubita": "http://www.animesubita.org",
"animetubeita": "http://www.animetubeita.com",
"animeworld": "https://www.animeworld.tv",
"casacinema": "https://www.casacinema.uno",
"casacinemainfo": "https://www.casacinema.info",
"cb01anime": "https://www.cineblog01.ink",
"cinemalibero": "https://www.cinemalibero.best",
"documentaristreamingda": "https://documentari-streaming-da.com",
"dreamsub": "https://www.dreamsub.stream",
"eurostreaming": "https://eurostreaming.pink",
"fastsubita": "http://fastsubita.com",
"filmgratis": "https://www.filmaltadefinizione.net",
"filmigratis": "https://filmigratis.org",
"filmpertutti": "https://www.filmpertutti.link",
"filmsenzalimiti": "https://filmsenzalimiti.best",
"filmsenzalimiticc": "https://www.filmsenzalimiti.press",
"filmstreaming01": "https://filmstreaming01.com",
-"filmstreamingita": "http://filmstreamingita.live",
"guardafilm": "http://www.guardafilm.top",
"guardarefilm": "https://www.guardarefilm.red",
"guardaserie_stream": "https://guardaserie.co",
"guardaseriecc": "https://guardaserie.site",
"guardaserieclick": "https://www.guardaserie.media",
"guardogratis": "https://guardogratis.net",
"ilgeniodellostreaming": "https://igds.red",
"italiafilm": "https://www.italia-film.pw",
-"italiafilmhd": "https://italiafilm.info",
"italiaserie": "https://italiaserie.org",
"itastreaming": "https://itastreaming.film",
"mondolunatico": "http://mondolunatico.org",
"mondolunatico2": "https://mondolunatico.org:443/stream",
"mondoserietv": "https://mondoserietv.com",
"piratestreaming": "https://www.piratestreaming.media",
"polpotv": "https://polpo.tv",
"seriehd": "https://www.seriehd.moda",
"serietvonline": "https://serietvonline.best",
"serietvsubita": "http://serietvsubita.xyz",
"serietvu": "https://www.serietvu.club",
"streamingaltadefinizione": "https://www.popcornstream.best",
"streamtime": "https://t.me/s/StreamTime",
"tantifilm": "https://www.tantifilm.eu",
"toonitalia": "https://toonitalia.org",
"vedohd": "https://vedohd.video",
"vvvvid": "https://www.vvvvid.it"
}
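The map above is what `config.get_channel_url(channel_id)` resolves hosts from in the channel modules of this commit. A minimal sketch of that lookup with an illustrative two-entry subset (the loader shown here is a stand-in, not KOD's actual `platformcode.config`):

```python
import json

# Illustrative subset of the channel-id -> base-URL map above
CHANNEL_URLS = json.loads("""{
"cinemalibero": "https://www.cinemalibero.best",
"vvvvid": "https://www.vvvvid.it"
}""")

def get_channel_url(channel_id):
    # Channels dropped from the map (like altadefinizionehd in this commit)
    # simply resolve to None instead of a host.
    return CHANNEL_URLS.get(channel_id)

host = get_channel_url('cinemalibero')
print(host)  # https://www.cinemalibero.best
```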

channels/.cinemalibero.py Normal file

@@ -0,0 +1,267 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Channel for 'cinemalibero'
"""
These are notes for the beta testers.
On this channel, in the categories:
- 'Global Search'
- Novità (the "new items" entry inside the channel)
- the anime list
the following entries will not be present:
- 'Add to Library'
- 'Download Movie'/'Download Series'
Moreover, the anime list has no renumbering entry!
Their absence must therefore NOT be reported as an ERROR during the test.
Novità (global), movies only:
- movies (20 titles) from the page https://www.cinemalibero.best/category/film/
Notices:
- any notices for the testers
Further info:
"""
import re
# for the decorators, the logging helpers, and functions for tricky sites
from core import support
# only needed when findhost() is not used
from platformcode import config
# imported in case they are needed
from core import scrapertoolsV2, httptools  # , servertools
from core.item import Item  # for newest
#from lib import unshortenit
# only needed if the __channel__ variable is used
# delete if unused
__channel__ = "cinemalibero"
# delete if findhost() is used
host = config.get_channel_url('cinemalibero')
headers = [['Referer', host]]
list_servers = ['akstream', 'wstream', 'openload', 'streamango']
list_quality = ['default']
### end of the variables
#### Start of the main defs ###
@support.menu
def mainlist(item):
support.log(item)
film = ['/category/film/',
('Generi', ['', 'genres', 'genres']),
]
# SERIE entry; you can only set the url
tvshow = ['/category/serie-tv/',
('Novità', ['/aggiornamenti-serie-tv/', 'peliculas', 'update'])
]
# ANIME entry; you can only set the url
Anime = [('Anime', ['/category/anime-giapponesi/', 'peliculas', 'anime', 'tvshow']),  # url for the Anime entry, if possible a page listing anime titles
## #Menu entry, ['url', 'action', 'args', contentType]
## ('Novità', ['', '', '']),
## ('In Corso', ['', '', '', '']),
## ('Ultimi Episodi', ['', '', '', '']),
## ('Ultime Serie', ['', '', '', ''])
]
search = ''
return locals()
@support.scrape
def peliculas(item):
support.log(item)
#support.dbg()  # uncomment to enable web_pdb
if item.args == 'search':
patron = r'href="(?P<url>[^"]+)".+?url\((?P<thumb>[^\)]+)\)">.+?class="titolo">(?P<title>[^<]+)<'
patronBlock = r'style="color: #2C3549 !important;" class="fon my-3"><small>.+?</small></h1>(?P<block>.*?)<div class="bg-dark ">'
action = 'select'
else:
if item.contentType == 'tvshow':
if item.args == 'update':
patron = r'<div class="card-body p-0">\s<a href="(?P<url>[^"]+)".+?url\((?P<thumb>.+?)\)">\s<div class="titolo">(?P<title>.+?)(?: &#8211; Serie TV)?(?:\([sSuUbBiItTaA\-]+\))?[ ]?(?P<year>\d{4})?</div>[ ]<div class="genere">(?:[\w]+?\.?\s?[\s|S]?[\dx\-S]+?\s\(?(?P<lang>[iItTaA]+|[sSuUbBiItTaA\-]+)\)?\s?(?P<quality>[HD]+)?|.+?\(?(?P<lang2>[sSuUbBiItTaA\-]+)?\)?</div>)'
action = 'select'
def itemHook(item):
if item.lang2:
if len(item.lang2) <3:
item.lang2 = 'ITA'
item.contentLanguage = item.lang2
item.title += support.typo(item.lang2, '_ [] color kod')
return item
elif item.args == 'anime':# or 'anime' in item.url:
patron = r'<div class="card-body p-0"> <a href="(?P<url>[^"]+)".+?url\((?P<thumb>.+?)\)">[^>]+>[^>]+>[^>]+>(?:[ ](?P<rating>\d+.\d+))?[^>]+>[^>]+>(?P<title>.+?)(?:\([sSuUbBiItTaA\-]+\))?\s?(?:(?P<year>\d{4}|\(\d{4}\)|)?)?<[^>]+>(?:<div class="genere">.+?(?:\()?(?P<lang>ITA|iTA|Sub)(?:\))?)?'
action = 'select'
else:
patron = r'<div class="card-body p-0">\s?<a href="(?P<url>[^"]+)".+?url\((?P<thumb>.+?)\)">[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)</div>(?:<div class="genere">(?:[ |\w]+?(?:[\dx\-]+)?[ ](?:\()?(?P<lang>[sSuUbB]+|[iItTaA]+)(?:\))?\s?(?P<quality>[\w]+)?\s?|[\s|S]?[\dx\-]+\s[|]?\s?(?:[\w]+)?\s?\(?(\4[sSuUbB]+)?\)?)?.+?</div>)?'
action = 'episodios'
elif item.contentType == 'movie':
action = 'findvideos'
patron = r'href="(?P<url>[^"]+)".+?url\((?P<thumb>.+?)\)">[^>]+>[^>]+>[^>]+>(?:[ ](?P<rating>\d+.\d+))?[^>]+>[^>]+>(?P<title>.+?)(?:\[(?P<lang>Sub-iTA|Sub-ITA|Sub)\])?[ ]\((?P<year>\d+)\)</div>(?:<div class="genere">(?P<quality>[^<]+)<)?'
patronBlock = r'<h1(?: style="color: #2C3549 !important; text-transform: uppercase;"| style="text-transform: uppercase; color: #2C3549 !important;"| style="color: #2C3549 !important; text-transform: uppercase;" style="text-shadow: 1px 1px 1px #FF8C00; color:#FF8C00;"| style="text-shadow: 1px 1px 1px #0f0f0f;" class="darkorange"| style="color:#2C3549 !important;")>.+?</h1>(?P<block>.*?)<div class=(?:"container"|"bg-dark ")>'
patronNext = '<a class="next page-numbers".*?href="([^"]+)">'
## debug = True  # set True to test the regexes against the site
return locals()
@support.scrape
def episodios(item):
support.log(item)
#support.dbg()
data = item.data1
if item.args == 'anime':
item.contentType = 'tvshow'
blacklist = ['Clipwatching', 'Verystream', 'Easybytez']
patron = r'(?:href="[ ]?(?P<url>[^"]+)"[^>]+>(?P<title>[^<]+)<|(?P<episode>\d+(?:&#215;|×)?\d+\-\d+|\d+(?:&#215;|×)\d+)[;]?(?:(\4[^<]+)(\2.*?)|(\2[ ])(?:<(\3.*?)))(?:</a><br />|</a></p>))'
#patron = r'<a target=.+?href="(?P<url>[^"]+)"[^>]+>(?P<title>(Epis|).+?(?P<episode>\d+)?)(?:\((?P<lang>Sub ITA)\))?</a>(?:<br />)?'
patronBlock = r'(?:class="txt_dow">Streaming:(?P<block>.*?)at-below-post)'
else:
patron = r'(?P<episode>\d+(?:&#215;|×)?\d+\-\d+|\d+(?:&#215;|×)\d+)[;]?[ ]?(?:(?P<title>[^<]+)(?P<url>.*?)|(\2[ ])(?:<(\3.*?)))(?:</a><br />|</a></p>)'
## patron = r'<a target=.+?href="(?P<url>[^"]+)"[^>]+>(?P<title>Epis.+?(\d+)?)(?:\((?P<lang>Sub ITA)\))?</a><br />'
patronBlock = r'<p><strong>(?P<block>(?:.+?[Ss]tagione.+?(?P<lang>iTA|ITA|Sub-ITA|Sub-iTA))?(?:|.+?|</strong>)(/?:</span>)?</p>.*?</p>)'
item.contentType = 'tvshow'
action = 'findvideos'
debug = True  # regex testing left enabled for the beta; set False to disable
return locals()
@support.scrape
def genres(item):
support.log(item)
#support.dbg()
action = 'peliculas'
#blacklist = ['']
patron = r'<a class="dropdown-item" href="(?P<url>[^"]+)" title="(?P<title>[A-z]+)"'
return locals()
############## End of the mandatory order
## Additional defs
def select(item):
support.log('select --->', item)
#support.dbg()
data = httptools.downloadpage(item.url, headers=headers).data
block = scrapertoolsV2.find_single_match(data, r'<div class="col-md-8 bg-white rounded-left p-5"><div>(.*?)<div style="margin-left: 0.5%; color: #FFF;">')
if re.findall('rel="category tag">serie', data, re.IGNORECASE):
support.log('select = ### it is a series ###')
return episodios(Item(channel=item.channel,
title=item.title,
fulltitle=item.fulltitle,
url=item.url,
args='serie',
contentType='tvshow',
data1 = data
))
elif re.findall('rel="category tag">anime', data, re.IGNORECASE):
if re.findall('episodio', block, re.IGNORECASE) or re.findall('stagione', data, re.IGNORECASE) or re.findall('numero stagioni', data, re.IGNORECASE):
support.log('select = ### it is an anime ###')
return episodios(Item(channel=item.channel,
title=item.title,
fulltitle=item.fulltitle,
url=item.url,
args='anime',
contentType='tvshow',
data1 = data
))
else:
support.log('select = ### it is a movie ###')
return findvideos(Item(channel=item.channel,
title=item.title,
fulltitle=item.fulltitle,
url=item.url,
args = '',
contentType='movie',
#data = data
))
else:
support.log('select = ### it is a movie ###')
return findvideos(Item(channel=item.channel,
title=item.title,
fulltitle=item.fulltitle,
url=item.url,
contentType='movie',
#data = data
))
############## Page bottom
# adapt to the channel
def search(item, text):
support.log('search', item)
itemlist = []
text = text.replace(' ', '+')
item.url = host + "/?s=" + text
try:
item.args = 'search'
item.contentType = 'episode'  # keeps these entries out of the context menu
return peliculas(item)
# Catch the exception so a failing channel does not interrupt the global search
except:
import sys
for line in sys.exc_info():
support.log('search log:', line)
return []
# adapt to the channel
# include newest only if the site has a page with the latest additions
# otherwise do NOT include it
def newest(categoria):
support.log('newest ->', categoria)
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = host+'/category/film/'
item.contentType = 'movie'
## item.action = 'peliculas'
## itemlist = peliculas(item)
elif categoria == 'series':
item.contentType = 'tvshow'
item.args = 'update'
item.url = host+'/aggiornamenti-serie-tv/'
item.action = 'peliculas'
itemlist = peliculas(item)
if itemlist[-1].action == 'peliculas':
itemlist.pop()
# Keep searching even if an error occurs
except:
import sys
for line in sys.exc_info():
support.log('newest log: {0}'.format(line))
return []
return itemlist
#support.server(item, data='', itemlist=[], headers='', AutoPlay=True, CheckLinks=True)
def findvideos(item):
support.log('findvideos ->', item)
#return support.server(item, headers=headers)
support.log(item)
if item.contentType == 'movie':
return support.server(item)
else:
return support.server(item, data= item.url)
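The new channel template above relies on `@support.scrape` (and `@support.menu`) consuming the channel function's `return locals()`. A minimal stand-in for that pattern (this `scrape` is an illustration only, not KOD's real `support.scrape`, which also handles downloads, `patronBlock`, and `itemHook`):

```python
import re

def scrape(func):
    # Read the channel function's local variables and apply its `patron`
    # (a regex with named groups) to the page data.
    def wrapper(item, data):
        params = func(item)
        found = re.finditer(params['patron'], data)
        return [dict(m.groupdict(), action=params['action']) for m in found]
    return wrapper

@scrape
def peliculas(item):
    patron = r'href="(?P<url>[^"]+)">(?P<title>[^<]+)<'
    action = 'findvideos'
    return locals()

items = peliculas(None, '<a href="/film-1">Film 1</a>')
print(items)  # [{'url': '/film-1', 'title': 'Film 1', 'action': 'findvideos'}]
```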


@@ -27,14 +27,12 @@ def findhost():
host = scrapertoolsV2.find_single_match(data, '<div class="elementor-button-wrapper"> <a href="([^"]+)"')
headers = [['Referer', host]]
findhost()
list_servers = ['verystream','openload','rapidvideo','streamango']
list_quality = ['default']
@support.menu
def mainlist(item):
findhost()
film = [
('Al Cinema', ['/cinema/', 'peliculas', 'pellicola']),
('Ultimi Aggiornati-Aggiunti', ['','peliculas', 'update']),
@@ -49,6 +47,7 @@ def mainlist(item):
@support.scrape
def peliculas(item):
support.log('peliculas',item)
findhost()
## deflang = 'ITA'
action="findvideos"
@@ -73,7 +72,7 @@ def peliculas(item):
@support.scrape
def genres(item):
support.log('genres',item)
if item.args != 'orderalf': action = "peliculas"
else: action = 'orderalf'
@@ -94,7 +93,7 @@ def genres(item):
@support.scrape
def orderalf(item):
support.log('orderalf',item)
action= 'findvideos'
patron = r'<td class="mlnh-thumb"><a href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"'\
'.+?[^>]+>[^>]+ [^>]+[^>]+ [^>]+>(?P<title>[^<]+).*?[^>]+>(?P<year>\d{4})<'\
@@ -106,7 +105,8 @@ def orderalf(item):
def search(item, text):
support.log(item, text)
findhost()
itemlist = []
text = text.replace(" ", "+")
item.url = host + "/index.php?do=search&story=%s&subaction=search" % (text)
@@ -122,6 +122,7 @@ def search(item, text):
def newest(categoria):
support.log(categoria)
findhost()
itemlist = []
item = Item()
try:


@@ -1,70 +0,0 @@
{
"id": "altadefinizionehd",
"name": "AltadefinizioneHD",
"active": false,
"adult": false,
"language": ["ita"],
"thumbnail": "https://altadefinizione.doctor/wp-content/uploads/2019/02/logo.png",
"bannermenu": "https://altadefinizione.doctor/wp-content/uploads/2019/02/logo.png",
"categories": ["tvshow","movie"],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Includi in Ricerca Globale",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Includi in Novità - Film",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_series",
"type": "bool",
"label": "Includi in Novità - Serie TV",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_italiano",
"type": "bool",
"label": "Includi in Novità - Italiano",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "checklinks",
"type": "bool",
"label": "Verifica se i link esistono",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "checklinks_number",
"type": "list",
"label": "Numero de link da verificare",
"default": 1,
"enabled": true,
"visible": "eq(-1,true)",
"lvalues": [ "1", "3", "5", "10" ]
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostra link in lingua...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": ["Non filtrare","IT"]
}
]
}


@@ -1,264 +0,0 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Channel for Altadefinizione HD
# ----------------------------------------------------------
import re
from channelselector import thumb
from core import httptools, scrapertools, servertools, tmdb
from core.item import Item
from platformcode import logger, config
from specials import autoplay
__channel__ = 'altadefinizionehd'
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
list_servers = ['openload']
list_quality = ['default']
def mainlist(item):
logger.info("[altadefinizionehd.py] mainlist")
autoplay.init(item.channel, list_servers, list_quality)
itemlist = [Item(channel=item.channel,
action="video",
title="[B]Film[/B]",
url=host + '/movies/',
thumbnail=NovitaThumbnail,
fanart=FilmFanart),
Item(channel=item.channel,
action="menu",
title="[B] > Film per Genere[/B]",
url=host,
extra='GENERE',
thumbnail=NovitaThumbnail,
fanart=FilmFanart),
Item(channel=item.channel,
action="menu",
title="[B] > Film per Anno[/B]",
url=host,
extra='ANNO',
thumbnail=NovitaThumbnail,
fanart=FilmFanart),
Item(channel=item.channel,
action="video",
title="Film Sub-Ita",
url=host + "/genre/sub-ita/",
thumbnail=NovitaThumbnail,
fanart=FilmFanart),
Item(channel=item.channel,
action="video",
title="Film Rip",
url=host + "/genre/dvdrip-bdrip-brrip/",
thumbnail=NovitaThumbnail,
fanart=FilmFanart),
Item(channel=item.channel,
action="video",
title="Film al Cinema",
url=host + "/genre/cinema/",
thumbnail=NovitaThumbnail,
fanart=FilmFanart),
Item(channel=item.channel,
action="search",
extra="movie",
title="[COLOR blue]Cerca Film...[/COLOR]",
thumbnail=CercaThumbnail,
fanart=FilmFanart)]
autoplay.show_option(item.channel, itemlist)
itemlist = thumb(itemlist)
return itemlist
def menu(item):
logger.info("[altadefinizionehd.py] menu")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
logger.info("[altadefinizionehd.py] DATA"+data)
patron = r'<li id="menu.*?><a href="#">FILM PER ' + item.extra + r'<\/a><ul class="sub-menu">(.*?)<\/ul>'
logger.info("[altadefinizionehd.py] BLOCK"+patron)
block = scrapertools.find_single_match(data, patron)
logger.info("[altadefinizionehd.py] BLOCK"+block)
patron = r'<li id=[^>]+><a href="(.*?)">(.*?)<\/a><\/li>'
matches = re.compile(patron, re.DOTALL).findall(block)
for url, title in matches:
itemlist.append(
Item(channel=item.channel,
action='video',
title=title,
url=url))
return itemlist
def newest(categoria):
logger.info("[altadefinizionehd.py] newest" + categoria)
itemlist = []
item = Item()
try:
if categoria == "peliculas":
item.url = host
item.action = "video"
itemlist = video(item)
if itemlist[-1].action == "video":
itemlist.pop()
# Keep searching even if an error occurs
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def video(item):
logger.info("[altadefinizionehd.py] video")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
logger.info("[altadefinizionehd.py] Data" +data)
if 'archive-content' in data:
regex = r'<div id="archive-content".*?>(.*?)<div class="pagination'
else:
regex = r'<div class="items".*?>(.*?)<div class="pagination'
block = scrapertools.find_single_match(data, regex)
logger.info("[altadefinizionehd.py] Block" +block)
patron = r'<article .*?class="item movies">.*?<img src="([^"]+)".*?<span class="quality">(.*?)<\/span>.*?<a href="([^"]+)">.*?<h4>([^<]+)<\/h4>(.*?)<\/article>'
matches = re.compile(patron, re.DOTALL).findall(block)
for scrapedthumb, scrapedquality, scrapedurl, scrapedtitle, scrapedinfo in matches:
title = scrapedtitle + " [" + scrapedquality + "]"
patron = r'IMDb: (.*?)<\/span> <span>(.*?)<\/span>.*?"texto">(.*?)<\/div>'
matches = re.compile(patron, re.DOTALL).findall(scrapedinfo)
logger.info("[altadefinizionehd.py] MATCHES" + str(matches))
for rating, year, plot in matches:
infoLabels = {}
infoLabels['Year'] = year
infoLabels['Rating'] = rating
infoLabels['Plot'] = plot
itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType="movie",
title=title,
fulltitle=scrapedtitle,
infoLabels=infoLabels,
url=scrapedurl,
thumbnail=scrapedthumb))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
patron = '<a class='+ "'arrow_pag'" + ' href="([^"]+)"'
next_page = scrapertools.find_single_match(data, patron)
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="video",
title="[COLOR blue]" + config.get_localized_string(30992) + "[/COLOR]",
url=next_page,
thumbnail=thumb()))
return itemlist
def search(item, texto):
logger.info("[altadefinizionehd.py] init texto=[" + texto + "]")
item.url = host + "/?s=" + texto
return search_page(item)
def search_page(item):
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
patron = r'<img src="([^"]+)".*?.*?<a href="([^"]+)">(.*?)<\/a>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
itemlist.append(
Item(channel=item.channel,
action="findvideos",
title=scrapedtitle,
fulltitle=scrapedtitle,
url=scrapedurl,
thumbnail=scrapedthumbnail))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
patron = '<a class='+ "'arrow_pag'" + ' href="([^"]+)"'
next_page = scrapertools.find_single_match(data, patron)
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="search_page",
title="[COLOR blue]" + config.get_localized_string(30992) + "[/COLOR]",
url=next_page,
thumbnail=thumb()))
return itemlist
def findvideos(item):
data = httptools.downloadpage(item.url).data
patron = r"<li id='player-.*?'.*?class='dooplay_player_option'\sdata-type='(.*?)'\sdata-post='(.*?)'\sdata-nume='(.*?)'>.*?'title'>(.*?)</"
matches = re.compile(patron, re.IGNORECASE).findall(data)
itemlist = []
for scrapedtype, scrapedpost, scrapednume, scrapedtitle in matches:
itemlist.append(
Item(channel=item.channel,
action="play",
fulltitle=item.title + " [" + scrapedtitle + "]",
show=scrapedtitle,
title=item.title + " [COLOR blue][" + scrapedtitle + "][/COLOR]",
url=host + "/wp-admin/admin-ajax.php",
post=scrapedpost,
server=scrapedtitle,
nume=scrapednume,
type=scrapedtype,
extra=item.extra,
folder=True))
autoplay.start(itemlist, item)
return itemlist
def play(item):
import urllib
payload = urllib.urlencode({'action': 'doo_player_ajax', 'post': item.post, 'nume': item.nume, 'type': item.type})
data = httptools.downloadpage(item.url, post=payload).data
patron = r"<iframe.*src='(([^']+))'\s"
matches = re.compile(patron, re.IGNORECASE).findall(data)
url = matches[0][0]
url = url.strip()
data = httptools.downloadpage(url, headers=headers).data
itemlist = servertools.find_video_items(data=data)
return itemlist
NovitaThumbnail = "https://superrepo.org/static/images/icons/original/xplugin.video.moviereleases.png.pagespeed.ic.j4bhi0Vp3d.png"
GenereThumbnail = "https://farm8.staticflickr.com/7562/15516589868_13689936d0_o.png"
FilmFanart = "https://superrepo.org/static/images/fanart/original/script.artwork.downloader.jpg"
CercaThumbnail = "http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search"
CercaFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
ListTxt = "[COLOR orange]Torna a video principale [/COLOR]"
AvantiTxt = config.get_localized_string(30992)
AvantiImg = "http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"
thumbnail = "http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"
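For reference, the channel deleted above resolved its players through the Dooplay theme's ajax endpoint: findvideos() scraped the data-type/data-post/data-nume attributes and play() POSTed them to wp-admin/admin-ajax.php. A self-contained sketch of just that payload-building step (the sample HTML is illustrative):

```python
import re
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2, as the removed code used

html = ("<li id='player-option-1' class='dooplay_player_option' "
        "data-type='movie' data-post='123' data-nume='1'>"
        "<span class='title'>Openload</span></li>")

# Same attribute scrape findvideos() performed
vtype, post, nume = re.search(
    r"data-type='(.*?)'\sdata-post='(.*?)'\sdata-nume='(.*?)'", html).groups()

# Same fields play() sent to the Dooplay endpoint
payload = urlencode({'action': 'doo_player_ajax',
                     'post': post, 'nume': nume, 'type': vtype})
print(payload)  # action=doo_player_ajax&post=123&nume=1&type=movie
```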


@@ -1,36 +1,11 @@
{
"id": "animetubeita",
-"name": "Animetubeita",
+"name": "AnimeTubeITA",
"active": true,
"adult": false,
-"language": ["ita"],
-"thumbnail": "http:\/\/i.imgur.com\/rQPx1iQ.png",
-"bannermenu": "http:\/\/i.imgur.com\/rQPx1iQ.png",
-"categories": ["anime"],
-"settings": [
-{
-"id": "include_in_global_search",
-"type": "bool",
-"label": "Includi ricerca globale",
-"default": false,
-"enabled": false,
-"visible": false
-},
-{
-"id": "include_in_newest_anime",
-"type": "bool",
-"label": "Includi in Novità - Anime",
-"default": false,
-"enabled": false,
-"visible": false
-},
-{
-"id": "include_in_newest_italiano",
-"type": "bool",
-"label": "Includi in Novità - Italiano",
-"default": true,
-"enabled": true,
-"visible": true
-}
-]
+"language": ["sub-ita"],
+"thumbnail": "animetubeita.png",
+"bannermenu": "animetubeita.png",
+"categories": ["anime","vos"],
+"settings": []
}


@@ -1,364 +1,138 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Thanks to the Icarus crew
# ----------------------------------------------------------
# Channel for animetubeita
# ----------------------------------------------------------
import re
import urllib
from core import httptools, scrapertools, tmdb
from core.item import Item
from platformcode import logger, config
from core import support
__channel__ = "animetubeita"
host = config.get_channel_url(__channel__)
hostlista = host + "/lista-anime/"
hostgeneri = host + "/generi/"
hostcorso = host + "/category/serie-in-corso/"
host = support.config.get_channel_url(__channel__)
def mainlist(item):
log("animetubeita", "mainlist", item.channel)
itemlist = [Item(channel=item.channel,
action="lista_home",
title="[COLOR azure]Home[/COLOR]",
url=host,
thumbnail=AnimeThumbnail,
fanart=AnimeFanart),
# Item(channel=item.channel,
# action="lista_anime",
# title="[COLOR azure]A-Z[/COLOR]",
# url=hostlista,
# thumbnail=AnimeThumbnail,
# fanart=AnimeFanart),
Item(channel=item.channel,
action="lista_genere",
title="[COLOR azure]Genere[/COLOR]",
url=hostgeneri,
thumbnail=CategoriaThumbnail,
fanart=CategoriaFanart),
Item(channel=item.channel,
action="lista_in_corso",
title="[COLOR azure]Serie in Corso[/COLOR]",
url=hostcorso,
thumbnail=CategoriaThumbnail,
fanart=CategoriaFanart),
Item(channel=item.channel,
action="search",
title="[COLOR lime]Cerca...[/COLOR]",
url=host + "/?s=",
thumbnail=CercaThumbnail,
fanart=CercaFanart)]
return itemlist
def lista_home(item):
log("animetubeita", "lista_home", item.channel)
itemlist = []
patron = '<h2 class="title"><a href="(.*?)" rel="bookmark" title=".*?">.*?<img.*?src="(.*?)".*?<strong>Titolo</strong></td>.*?<td>(.*?)</td>.*?<td><strong>Trama</strong></td>.*?<td>(.*?)</'
for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedplot in scrapedAll(item.url, patron):
title = scrapertools.decodeHtmlentities(scrapedtitle)
title = title.split("Sub")[0]
fulltitle = re.sub(r'[Ee]pisodio? \d+', '', title)
scrapedplot = scrapertools.decodeHtmlentities(scrapedplot)
itemlist.append(
Item(channel=item.channel,
action="dl_s",
contentType="tvshow",
title="[COLOR azure]" + title + "[/COLOR]",
fulltitle=fulltitle,
url=scrapedurl,
thumbnail=scrapedthumbnail,
fanart=scrapedthumbnail,
show=fulltitle,
plot=scrapedplot))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
# Pagination
# ===========================================================
data = httptools.downloadpage(item.url).data
patron = '<link rel="next" href="(.*?)"'
next_page = scrapertools.find_single_match(data, patron)
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="lista_home",
title=AvantiTxt,
url=next_page,
thumbnail=AvantiImg,
folder=True))
# ===========================================================
return itemlist
# def lista_anime(item):
# log("animetubeita", "lista_anime", item.channel)
# itemlist = []
# patron = '<li.*?class="page_.*?href="(.*?)">(.*?)</a></li>'
# for scrapedurl, scrapedtitle in scrapedAll(item.url, patron):
# title = scrapertools.decodeHtmlentities(scrapedtitle)
# title = title.split("Sub")[0]
# log("url:[" + scrapedurl + "] scrapedtitle:[" + title + "]")
# itemlist.append(
# Item(channel=item.channel,
# action="dettaglio",
# contentType="tvshow",
# title="[COLOR azure]" + title + "[/COLOR]",
# url=scrapedurl,
# show=title,
# thumbnail="",
# fanart=""))
# tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
# return itemlist
def lista_genere(item):
log("lista_anime_genere", "lista_genere", item.channel)
itemlist = []
data = httptools.downloadpage(item.url).data
bloque = scrapertools.find_single_match(data,
'<div class="hentry page post-1 odd author-admin clear-block">(.*?)<div id="disqus_thread">')
patron = '<li class="cat-item cat-item.*?"><a href="(.*?)" >(.*?)</a>'
matches = re.compile(patron, re.DOTALL).findall(bloque)
scrapertools.printMatches(matches)
for scrapedurl, scrapedtitle in matches:
itemlist.append(
Item(channel=item.channel,
action="lista_generi",
title='[COLOR lightsalmon][B]' + scrapedtitle + '[/B][/COLOR]',
url=scrapedurl,
fulltitle=scrapedtitle,
show=scrapedtitle,
thumbnail=item.thumbnail))
return itemlist
def lista_generi(item):
log("animetubeita", "lista_generi", item.channel)
itemlist = []
patron = '<h2 class="title"><a href="(.*?)" rel="bookmark" title=".*?">.*?<img.*?src="(.*?)".*?<strong>Titolo</strong></td>.*?<td>(.*?)</td>.*?<td><strong>Trama</strong></td>.*?<td>(.*?)</'
for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedplot in scrapedAll(item.url, patron):
title = scrapertools.decodeHtmlentities(scrapedtitle)
title = title.split("Sub")[0]
fulltitle = re.sub(r'[Ee]pisodio? \d+', '', title)
scrapedplot = scrapertools.decodeHtmlentities(scrapedplot)
itemlist.append(
Item(channel=item.channel,
action="dettaglio",
title="[COLOR azure]" + title + "[/COLOR]",
contentType="tvshow",
fulltitle=fulltitle,
url=scrapedurl,
thumbnail=scrapedthumbnail,
show=fulltitle,
fanart=scrapedthumbnail,
plot=scrapedplot))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
# Pagination
# ===========================================================
data = httptools.downloadpage(item.url).data
patron = '<link rel="next" href="(.*?)"'
next_page = scrapertools.find_single_match(data, patron)
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="lista_generi",
title=AvantiTxt,
url=next_page,
thumbnail=AvantiImg,
folder=True))
# ===========================================================
return itemlist
def lista_in_corso(item):
log("animetubeita", "lista_home", item.channel)
itemlist = []
patron = '<h2 class="title"><a href="(.*?)" rel="bookmark" title="Link.*?>(.*?)</a></h2>.*?<img.*?src="(.*?)".*?<td><strong>Trama</strong></td>.*?<td>(.*?)</td>'
for scrapedurl, scrapedtitle, scrapedthumbnail, scrapedplot in scrapedAll(item.url, patron):
title = scrapertools.decodeHtmlentities(scrapedtitle)
title = title.split("Sub")[0]
fulltitle = re.sub(r'[Ee]pisodio? \d+', '', title)
scrapedplot = scrapertools.decodeHtmlentities(scrapedplot)
itemlist.append(
Item(channel=item.channel,
action="dettaglio",
title="[COLOR azure]" + title + "[/COLOR]",
contentType="tvshow",
fulltitle=fulltitle,
url=scrapedurl,
thumbnail=scrapedthumbnail,
show=fulltitle,
fanart=scrapedthumbnail,
plot=scrapedplot))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
# Pagination
# ===========================================================
data = httptools.downloadpage(item.url).data
patron = '<link rel="next" href="(.*?)"'
next_page = scrapertools.find_single_match(data, patron)
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="lista_in_corso",
title=AvantiTxt,
url=next_page,
thumbnail=AvantiImg,
folder=True))
# ===========================================================
return itemlist
def dl_s(item):
log("animetubeita", "dl_s", item.channel)
itemlist = []
encontrados = set()
# 1
patron = '<p><center><a.*?href="(.*?)"'
for scrapedurl in scrapedAll(item.url, patron):
if scrapedurl in encontrados: continue
encontrados.add(scrapedurl)
title = "DOWNLOAD & STREAMING"
itemlist.append(Item(channel=item.channel,
action="dettaglio",
title="[COLOR azure]" + title + "[/COLOR]",
url=scrapedurl,
thumbnail=item.thumbnail,
fanart=item.thumbnail,
plot=item.plot,
folder=True))
# 2
patron = '<p><center>.*?<a.*?href="(.*?)"'
for scrapedurl in scrapedAll(item.url, patron):
if scrapedurl in encontrados: continue
encontrados.add(scrapedurl)
title = "DOWNLOAD & STREAMING"
itemlist.append(Item(channel=item.channel,
action="dettaglio",
title="[COLOR azure]" + title + "[/COLOR]",
url=scrapedurl,
thumbnail=item.thumbnail,
fanart=item.thumbnail,
plot=item.plot,
folder=True))
return itemlist
def dettaglio(item):
log("animetubeita", "dettaglio", item.channel)
itemlist = []
headers = {'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0'}
episodio = 1
patron = r'<a href="http:\/\/link[^a]+animetubeita[^c]+com\/[^\/]+\/[^s]+((?:stream|strm))[^p]+php(\?.*?)"'
for phpfile, scrapedurl in scrapedAll(item.url, patron):
title = "Episodio " + str(episodio)
episodio += 1
url = "%s/%s.php%s" % (host, phpfile, scrapedurl)
headers['Referer'] = url
data = httptools.downloadpage(url, headers=headers).data
# ------------------------------------------------
cookies = ""
matches = re.compile(r'(.animetubeita.com.*?)\n', re.DOTALL).findall(config.get_cookie_data())
for cookie in matches:
name = cookie.split('\t')[5]
value = cookie.split('\t')[6]
cookies += name + "=" + value + ";"
headers['Cookie'] = cookies[:-1]
# ------------------------------------------------
url = scrapertools.find_single_match(data, """<source src="([^"]+)" type='video/mp4'>""")
url += '|' + urllib.urlencode(headers)
itemlist.append(Item(channel=item.channel,
action="play",
title="[COLOR azure]" + title + "[/COLOR]",
url=url,
thumbnail=item.thumbnail,
fanart=item.thumbnail,
plot=item.plot))
list_servers = ['directo']
list_quality = ['default']
return itemlist
@support.menu
def mainlist(item):
anime = [('Generi',['/generi', 'genres', 'genres']),
('Ordine Alfabetico',['/lista-anime', 'peliculas', 'list']),
('In Corso',['/category/serie-in-corso/', 'peliculas', 'in_progress'])
]
return locals()
@support.scrape
def genres(item):
blacklist = ['Ultimi Episodi', 'Serie in Corso']
patronMenu = r'<li[^>]+><a href="(?P<url>[^"]+)" >(?P<title>[^<]+)</a>'
action = 'peliculas'
return locals()
def search(item, text):
support.log(text)
item.url = host + '/lista-anime'
item.args = 'list'
item.search = text
try:
return peliculas(item)
except:
import sys
for line in sys.exc_info():
support.logger.error("%s" % line)
return []
def scrapedAll(url="", patron=""):
data = httptools.downloadpage(url).data
matches = re.compile(patron, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
return matches
def newest(categoria):
support.log(categoria)
item = support.Item()
try:
if categoria == "anime":
item.contentType='tvshow'
item.url = host
item.args = "last"
return peliculas(item)
# Continue the search in case of error
except:
import sys
for line in sys.exc_info():
support.logger.error("{0}".format(line))
return []
@support.scrape
def peliculas(item):
anime = True
if not item.search: pagination = ''
action = 'episodios'
def scrapedSingle(url="", single="", patron=""):
matches = []
data = httptools.downloadpage(url).data
elemento = scrapertools.find_single_match(data, single)
matches = re.compile(patron, re.DOTALL).findall(elemento)
scrapertools.printMatches(matches)
return matches
if item.args == 'list':
search = item.search
patronBlock = r'<ul class="page-list ">(?P<block>.*?)<div class="wprc-container'
patron = r'<li.*?class="page_.*?href="(?P<url>[^"]+)">(?P<title>.*?) Sub Ita'
elif item.args == 'last':
action = 'findvideos'
item.contentType='episode'
patronBlock = r'<div class="blocks">(?P<block>.*?)<div id="sidebar'
patron = r'<h2 class="title"><a href="(?P<url>[^"]+)" [^>]+>.*?<img.*?src="(?P<thumb>[^"]+)".*?<strong>Titolo</strong></td>.*?<td>(?P<title>.*?)\s*Episodio\s*(?P<episode>\d+)[^<]+</td>.*?<td><strong>Trama</strong></td>\s*<td>(?P<plot>[^<]+)<'
elif item.args in ['in_progress','genres']:
patronBlock = r'<div class="blocks">(?P<block>.*?)<div id="sidebar'
patron = r'<h2 class="title"><a href="(?P<url>[^"]+)"[^>]+>(?P<title>.*?)\s* Sub Ita[^<]+</a></h2>.*?<img.*?src="(?P<thumb>.*?)".*?<td><strong>Trama</strong></td>.*?<td>(?P<plot>[^<]+)<'
patronNext = r'href="([^"]+)" >&raquo;'
else:
patronBlock = r'<div class="blocks">(?P<block>.*?)<div id="sidebar'
patron = r'<img.*?src="(?P<thumb>[^"]+)".*?<strong>Titolo</strong></td>.*?<td>\s*(?P<title>.*?)\s*Episodio[^<]+</td>.*?<td><strong>Trama</strong></td>\s*<td>(?P<plot>[^<]+)<.*?<a.*?href="(?P<url>[^"]+)"'
patronNext = r'href="([^"]+)" >&raquo;'
return locals()
def log(funzione="", stringa="", canale=""):
logger.debug("[" + canale + "].[" + funzione + "] " + stringa)
@support.scrape
def episodios(item):
patronBlock = r'<h6>Episodio</h6>(?P<block>.*?)(?:<!--|</table>)'
patron = r'<strong>(?P<title>[^<]+)</strong>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+><a href="http://link\.animetubeita\.com/2361078/(?P<url>[^"]+)"'
action = 'findvideos'
return locals()
def findvideos(item):
itemlist=[]
if item.args == 'last':
match = support.match(item, r'href="(?P<url>[^"]+)"[^>]+><strong>DOWNLOAD &amp; STREAMING</strong>', url=item.url)[0]
if match:
patronBlock = r'<h6>Episodio</h6>(?P<block>.*?)(?:<!--|</table>)'
patron = r'<a href="http://link\.animetubeita\.com/2361078/(?P<url>[^"]+)"'
match = support.match(item, patron, patronBlock, headers, match[0])[0]
else: return itemlist
AnimeThumbnail = "http://img15.deviantart.net/f81c/i/2011/173/7/6/cursed_candies_anime_poster_by_careko-d3jnzg9.jpg"
AnimeFanart = "http://www.animetubeita.com/wp-content/uploads/21407_anime_scenery.jpg"
CategoriaThumbnail = "http://static.europosters.cz/image/750/poster/street-fighter-anime-i4817.jpg"
CategoriaFanart = "http://www.animetubeita.com/wp-content/uploads/21407_anime_scenery.jpg"
CercaThumbnail = "http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search"
CercaFanart = "https://i.ytimg.com/vi/IAlbvyBdYdY/maxresdefault.jpg"
AvantiTxt = config.get_localized_string(30992)
AvantiImg = "http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"
if item.args == 'last':
if match: item.url = match[-1]
else: return itemlist
data = support.httptools.downloadpage(item.url, headers=headers).data
cookies = ""
matches = re.compile(r'(.animetubeita.com.*?)\n', re.DOTALL).findall(support.config.get_cookie_data())
for cookie in matches:
name = cookie.split('\t')[5]
value = cookie.split('\t')[6]
cookies += name + "=" + value + ";"
headers['Referer'] = item.url
headers['Cookie'] = cookies[:-1]
url = support.scrapertoolsV2.find_single_match(data, """<source src="([^"]+)" type='video/mp4'>""")
if not url: url = support.scrapertoolsV2.find_single_match(data, 'file: "([^"]+)"')
if url:
url += '|' + urllib.urlencode(headers)
itemlist.append(
support.Item(channel=item.channel,
action="play",
title='diretto',
server='directo',
quality='',
url=url,
thumbnail=item.thumbnail,
fulltitle=item.fulltitle,
show=item.show,
contentType=item.contentType,
folder=False))
return support.server(item, itemlist=itemlist)
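The Cookie-header construction used twice above (in `dettaglio` and `findvideos`) walks Netscape cookie-jar lines, where fields are tab-separated and the cookie name and value sit in fields 6 and 7. A minimal standalone sketch of that parsing, with a made-up sample cookie line (the `PHPSESSID` value is an assumption, not taken from the site):

```python
# Standalone sketch of the Cookie-header construction used in dettaglio() and
# findvideos() above. Netscape cookie-jar lines are tab-separated:
# domain, flag, path, secure, expiry, name, value. The sample line is made up.
import re

cookie_data = ".animetubeita.com\tTRUE\t/\tFALSE\t0\tPHPSESSID\tabc123\n"

cookies = ""
for cookie in re.compile('(.animetubeita.com.*?)\n', re.DOTALL).findall(cookie_data):
    name = cookie.split('\t')[5]   # field 6: cookie name
    value = cookie.split('\t')[6]  # field 7: cookie value
    cookies += name + "=" + value + ";"

header = cookies[:-1]  # drop the trailing ";"
print(header)
```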

View File

@@ -1,22 +1,15 @@
 {
 "id": "bleachportal",
 "name": "BleachPortal",
-"language": ["ita"],
+"language": ["sub-ita"],
 "active": true,
-"adult": false,
-"fanart": "http://i39.tinypic.com/35ibvcx.jpg",
-"thumbnail": "http://www.bleachportal.it/images/index_r1_c1.jpg",
-"banner": "http://cgi.di.uoa.gr/~std05181/images/bleach.jpg",
+"deprecated": true,
+"adult": false,
+"fanart": "https://www.thetvdb.com/banners/fanart/original/74796-29.jpg",
+"thumbnail": "bleachportal.png",
+"banner": "bleachportal.png",
 "categories": ["anime"],
-"settings": [
-{
-"id": "include_in_global_search",
-"type": "bool",
-"label": "Incluir en busqueda global",
-"default": false,
-"enabled": false,
-"visible": false
-}
-]
+"not_active":["include_in_newests", "include_in_global_search"],
+"settings": []
 }

View File

@@ -11,6 +11,7 @@ from core import scrapertools, httptools
from core.item import Item
from platformcode import logger
from platformcode import config
from core import support
host = "http://www.bleachportal.it"
@@ -19,17 +20,19 @@ def mainlist(item):
logger.info("[BleachPortal.py]==> mainlist")
itemlist = [Item(channel=item.channel,
action="episodi",
-title="[COLOR azure] Bleach [/COLOR] - [COLOR deepskyblue]Lista Episodi[/COLOR]",
+title= support.typo('Bleach','bold'),
url=host + "/streaming/bleach/stream_bleach.htm",
-thumbnail="http://i45.tinypic.com/286xp3m.jpg",
-fanart="http://i40.tinypic.com/5jsinb.jpg",
+thumbnail="https://www.thetvdb.com/banners/posters/74796-14.jpg",
+banner="https://www.thetvdb.com/banners/graphical/74796-g6.jpg",
+fanart="https://www.thetvdb.com/banners/fanart/original/74796-30.jpg",
extra="bleach"),
Item(channel=item.channel,
action="episodi",
-title="[COLOR azure] D.Gray Man [/COLOR] - [COLOR deepskyblue]Lista Episodi[/COLOR]",
+title=support.typo('D.Gray Man','bold'),
url=host + "/streaming/d.gray-man/stream_dgray-man.htm",
-thumbnail="http://i59.tinypic.com/9is3tf.jpg",
-fanart="http://wallpapercraft.net/wp-content/uploads/2016/11/Cool-D-Gray-Man-Background.jpg",
+thumbnail="https://www.thetvdb.com/banners/posters/79635-1.jpg",
+banner="https://www.thetvdb.com/banners/graphical/79635-g4.jpg",
+fanart="https://www.thetvdb.com/banners/fanart/original/79635-6.jpg",
extra="dgrayman")]
return itemlist
@@ -40,7 +43,7 @@ def episodi(item):
itemlist = []
data = httptools.downloadpage(item.url).data
-patron = '<td>?[<span\s|<width="\d+%"\s]+?class="[^"]+">\D+([\d\-]+)\s?<[^<]+<[^<]+<[^<]+<[^<]+<.*?\s+?.*?<span style="[^"]+">([^<]+).*?\s?.*?<a href="\.*(/?[^"]+)">'
+patron = r'<td>?[<span\s|<width="\d+%"\s]+?class="[^"]+">\D+([\d\-]+)\s?<[^<]+<[^<]+<[^<]+<[^<]+<.*?\s+?.*?<span style="[^"]+">([^<]+).*?\s?.*?<a href="\.*(/?[^"]+)">'
matches = re.compile(patron, re.DOTALL).findall(data)
animetitle = "Bleach" if item.extra == "bleach" else "D.Gray Man"
@@ -49,19 +52,19 @@ def episodi(item):
itemlist.append(
Item(channel=item.channel,
action="findvideos",
-title="[COLOR azure]%s Ep: [COLOR deepskyblue]%s[/COLOR][/COLOR]" % (animetitle, scrapednumber),
+title=support.typo("%s Episodio %s" % (animetitle, scrapednumber),'bold'),
url=item.url.replace("stream_bleach.htm",scrapedurl) if "stream_bleach.htm" in item.url else item.url.replace("stream_dgray-man.htm", scrapedurl),
plot=scrapedtitle,
extra=item.extra,
thumbnail=item.thumbnail,
fanart=item.fanart,
-fulltitle="[COLOR red]%s Ep: %s[/COLOR] | [COLOR deepskyblue]%s[/COLOR]" % (animetitle, scrapednumber, scrapedtitle)))
+fulltitle="%s Ep: %s | %s" % (animetitle, scrapednumber, scrapedtitle)))
if item.extra == "bleach":
itemlist.append(
Item(channel=item.channel,
action="oav",
-title="[B][COLOR azure] OAV e Movies [/COLOR][/B]",
+title=support.typo("OAV e Movies",'bold color kod'),
url=item.url.replace("stream_bleach.htm", "stream_bleach_movie_oav.htm"),
extra=item.extra,
thumbnail=item.thumbnail,
@@ -75,19 +78,19 @@ def oav(item):
itemlist = []
data = httptools.downloadpage(item.url).data
-patron = '<td>?[<span\s|<width="\d+%"\s]+?class="[^"]+">-\s+(.*?)<[^<]+<[^<]+<[^<]+<[^<]+<.*?\s+?.*?<span style="[^"]+">([^<]+).*?\s?.*?<a href="\.*(/?[^"]+)">'
+patron = r'<td>?[<span\s|<width="\d+%"\s]+?class="[^"]+">-\s+(.*?)<[^<]+<[^<]+<[^<]+<[^<]+<.*?\s+?.*?<span style="[^"]+">([^<]+).*?\s?.*?<a href="\.*(/?[^"]+)">'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapednumber, scrapedtitle, scrapedurl in matches:
itemlist.append(
Item(channel=item.channel,
action="findvideos",
-title="[COLOR deepskyblue] " + scrapednumber + " [/COLOR]",
+title=support.typo(scrapednumber, 'bold'),
url=item.url.replace("stream_bleach_movie_oav.htm", scrapedurl),
plot=scrapedtitle,
extra=item.extra,
thumbnail=item.thumbnail,
-fulltitle="[COLOR red]" + scrapednumber + "[/COLOR] | [COLOR deepskyblue]" + scrapedtitle + "[/COLOR]"))
+fulltitle=scrapednumber + " | " + scrapedtitle))
return list(reversed(itemlist))
@@ -109,7 +112,7 @@ def findvideos(item):
itemlist.append(
Item(channel=item.channel,
action="play",
-title="[[COLOR orange]Diretto[/COLOR]] [B]%s[/B]" % item.title,
+title="Diretto %s" % item.title,
url=item.url.replace(item.url.split("/")[-1], "/" + video),
thumbnail=item.thumbnail,
fulltitle=item.fulltitle))

View File

@@ -4,8 +4,8 @@
 "language": ["ita", "sub-ita"],
 "active": true,
 "adult": false,
-"thumbnail": "https://raw.githubusercontent.com/Zanzibar82/images/master/posters/casacinema.png",
-"banner": "https://raw.githubusercontent.com/Zanzibar82/images/master/posters/casacinema.png",
+"thumbnail": "casacinema.png",
+"banner": "casacinema.png",
 "categories": ["tvshow", "movie","vos"],
 "settings": [
 ]

View File

@@ -32,7 +32,7 @@ def findhost():
headers = [['Referer', host]]
if host.endswith('/'):
host = host[:-1]
findhost()
list_servers = ['supervideo', 'streamcherry','rapidvideo', 'streamango', 'openload']
list_quality = ['default', 'HD', '3D', '4K', 'DVD', 'SD']
@@ -40,7 +40,7 @@ list_quality = ['default', 'HD', '3D', '4K', 'DVD', 'SD']
@support.menu
def mainlist(item):
support.log(item)
findhost()
film = ['',
('Al Cinema', ['/category/in-sala/', 'peliculas', '']),
('Novità', ['/category/nuove-uscite/', 'peliculas', '']),
@@ -55,7 +55,8 @@ def mainlist(item):
def peliculas(item):
support.log(item)
#support.dbg() # decommentare per attivare web_pdb
#findhost()
blacklist = ['']
if item.args != 'search':
patron = r'<div class="col-mt-5 postsh">[^<>]+<div class="poster-media-card">[^<>]+<a href="(?P<url>[^"]+)" title="(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA)\])?".*?<img(?:.+?)?src="(?P<thumb>[^"]+)"'
@@ -66,6 +67,7 @@ def peliculas(item):
patronNext = '<a href="([^"]+)"\s+?><i class="glyphicon glyphicon-chevron-right"'
#support.regexDbg(item, patronBlock, headers)
#debug = True
return locals()
@@ -86,6 +88,7 @@ def genres(item):
def search(item, text):
support.log('search', item)
findhost()
itemlist = []
text = text.replace(' ', '+')
item.args = 'search'
@@ -101,8 +104,10 @@ def search(item, text):
def newest(categoria):
support.log('newest ->', categoria)
findhost()
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = host

View File

@@ -200,7 +200,7 @@ def findvideos(item):
def findvid_serie(item):
def load_vid_series(html, item, itemlist, blktxt):
logger.info('HTML' + html)
-patron = '<a href="([^"]+)"[^=]+="_blank"[^>]+>(.*?)</a>'
+patron = r'<a href="([^"]+)"[^=]+="_blank"[^>]+>(?!<!--)(.*?)</a>'
# Estrae i contenuti
matches = re.compile(patron, re.DOTALL).finditer(html)
for match in matches:

View File

@@ -4,8 +4,8 @@
 "language": ["ita"],
 "active": true,
 "adult": false,
-"thumbnail": "https://www.cinemalibero.center/wp-content/themes/Cinemalibero%202.0/images/logo02.png",
-"banner": "https://www.cinemalibero.center/wp-content/themes/Cinemalibero%202.0/images/logo02.png",
+"thumbnail": "cinemalibero.png",
+"banner": "cinemalibero.png",
 "categories": ["movie","tvshow","anime"],
 "not_active": ["include_in_newest"],
 "settings": []

View File

@@ -133,7 +133,7 @@ def episodios(item): # Questa def. deve sempre essere nominata episodios
url=item.url,
show=item.fulltitle,
contentType='movie'))
-debug = True
+#debug = True
return locals()
@support.scrape
@@ -157,7 +157,7 @@ def select(item):
return episodios(Item(channel=item.channel,
title=item.title,
fulltitle=item.fulltitle,
-contentSerieName = fulltitle,
+contentSerieName = item.fulltitle,
url=item.url,
extra='serie',
contentType='episode'))

View File

@@ -26,7 +26,7 @@ def findhost():
host = 'https://www.'+permUrl['location'].replace('https://www.google.it/search?q=site:', '')
headers = [['Referer', host]]
findhost()
list_servers = ['verystream', 'wstream', 'speedvideo', 'flashx', 'nowvideo', 'streamango', 'deltabit', 'openload']
list_quality = ['default']
@@ -34,13 +34,12 @@ list_quality = ['default']
@support.menu
def mainlist(item):
support.log()
findhost()
tvshow = [''
]
anime = ['/category/anime-cartoni-animati/'
]
mix = [
(support.typo('Aggiornamenti Serie-Anime', 'bullet bold'), ['/aggiornamento-episodi/', 'peliculas', 'newest']),
(support.typo('Archivio Serie-Anime', 'bullet bold'), ['/category/serie-tv-archive/', 'peliculas'])
@@ -53,7 +52,7 @@ def mainlist(item):
@support.scrape
def peliculas(item):
support.log()
#findhost()
action = 'episodios'
if item.args == 'newest':
#patron = r'<span class="serieTitle" style="font-size:20px">(?P<title>.*?).[^][\s]?<a href="(?P<url>[^"]+)"\s+target="_blank">(?P<episode>\d+x\d+-\d+|\d+x\d+) (?P<title2>.*?)[ ]?(?:|\((?P<lang>SUB ITA)\))?</a>'
@@ -62,13 +61,14 @@ def peliculas(item):
else:
patron = r'<div class="post-thumb">.*?\s<img src="(?P<thumb>[^"]+)".*?><a href="(?P<url>[^"]+)"[^>]+>(?P<title>.+?)\s?(?: Serie Tv)?\s?\(?(?P<year>\d{4})?\)?<\/a><\/h2>'
patronNext='a class="next page-numbers" href="?([^>"]+)">Avanti &raquo;</a>'
#debug = True
return locals()
@support.scrape
def episodios(item):
support.log("episodios: %s" % item)
#findhost()
action = 'findvideos'
item.contentType = 'tvshow'
@@ -76,7 +76,8 @@ def episodios(item):
data1 = pagina(item.url)
data1 = re.sub('\n|\t', ' ', data1)
data = re.sub(r'>\s+<', '> <', data1)
-patronBlock = r'(?P<block>STAGIONE\s\d+ (.+?)?(?:\()?(?P<lang>ITA|SUB ITA)(?:\))?.*?)</div></div>'
+#patronBlock = r'(?P<block>STAGIONE\s\d+ (.+?)?(?:\()?(?P<lang>ITA|SUB ITA)(?:\))?.*?)</div></div>'
+patronBlock = r'</span>(?P<block>[a-zA-Z\s]+\d+(.+?)?(?:\()?(?P<lang>ITA|SUB ITA)(?:\))?.*?)</div></div>'
#patron = r'(?:\s|\Wn)?(?:<strong>|)?(?P<episode>\d+&#\d+;\d+-\d+|\d+&#\d+;\d+)(?:</strong>|)?(?P<title>.+?)(?:|-.+?-|–.+?–|–|.)?<a (?P<url>.*?)<br />'
patron = r'(?:\s|\Wn)?(?:<strong>|)?(?P<episode>\d+&#\d+;\d+-\d+|\d+&#\d+;\d+)(?:</strong>|)?(?P<title>.+?)(?:–|-.+?-|–.+?–|–|.)?(?:<a (?P<url>.*?))?<br />'
@@ -91,6 +92,7 @@ def episodios(item):
def pagina(url):
support.log(url)
#findhost()
data = httptools.downloadpage(url, headers=headers).data.replace("'", '"')
#support.log("DATA ----###----> ", data)
@@ -110,6 +112,7 @@ def pagina(url):
# =========== def ricerca =============
def search(item, texto):
support.log()
findhost()
item.url = "%s/?s=%s" % (host, texto)
item.contentType = 'tvshow'
@@ -127,6 +130,7 @@ def search(item, texto):
def newest(categoria):
support.log()
findhost()
itemlist = []
item = Item()
item.contentType = 'tvshow'

View File

@@ -47,7 +47,7 @@ def mainlist(item):
##
## action = 'episodios'
## block = r'(?P<block>.*?)<div\s+class="btn btn-lg btn-default btn-load-other-series">'
##
##
## if item.args == 'ined':
## deflang = 'SUB-ITA'
## patronBlock = r'<span\s+class="label label-default label-title-typology">'+block
@@ -68,7 +68,7 @@ def mainlist(item):
## elif item.args == 'classic':
## patronBlock = r'<h2 class="title-typology styck-top" meta-class="title-serie-classiche">'+block
## patron = r'<a href="(?P<url>[^"]+)".*?>\s<img\s.*?src="(?P<thumb>[^"]+)"\s/>[^>]+>[^>]+>\s[^>]+>\s(?P<year>\d{4})?\s.+?class="strongText">(?P<title>.+?)<'
## pagination = 25
## pagination = 25
## else:
## patronBlock = r'<div\s+class="container container-title-serie-new container-scheda" meta-slug="new">'+block
## patron = r'<a href="(?P<url>[^"]+)".*?>\s<img\s.*?src="(?P<thumb>[^"]+)"\s/>[^>]+>[^>]+>\s[^>]+>\s(?P<year>\d{4})?\s.+?class="strongText">(?P<title>.+?)<'
@@ -84,13 +84,13 @@ def peliculas(item):
action = 'episodios'
blacklist = ['DMCA']
if item.args == 'genres' or item.args == 'search':
-patronBlock = r'<h2 style="color: white !important" class="title-typology">(?P<block>.+?)<div class="container-fluid whitebg" style="">'
+patronBlock = r'<h2 style="color:\s?white !important;?" class="title-typology">(?P<block>.+?)<div class="container-fluid whitebg" style="">'
patron = r'<a href="(?P<url>[^"]+)".*?>\s<img\s.*?src="(?P<thumb>[^"]+)"\s/>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)</p>'
patronNext = r'rel="next" href="([^"]+)">'
item.contentType = 'tvshow'
## elif item.args == 'search':
## elif item.args == 'search':
## patronBlock = r'<h2 style="color:\s?white !important.?" class="title-typology">(?P<block>.*?)<div class="container-fluid whitebg" style="">'
## patron = r'<a href="(?P<url>[^"]+)".*?>\s<img\s.*?src="(?P<thumb>[^"]+)"\s/>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)</p>'
else:
@@ -106,21 +106,23 @@ def peliculas(item):
patronBlock = r'<div\s+class="container-fluid greybg title-serie-lastep title-last-ep fixed-title-wrapper containerBottomBarTitle">'+end_block
patron = r'<a(?: rel="[^"]+")? href="(?P<url>[^"]+)"(?: class="[^"]+")?>[ ]<img class="[^"]+"[ ]title="[^"]+"[ ]alt="[^"]+"[ ](?:|meta-)?src="(?P<thumb>[^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?:\d+.\d+)[ ]\((?P<lang>[a-zA-Z\-]+)[^<]+<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)<'
elif item.args == 'nolost':
patronBlock = r'<h2 class="title-typology styck-top" meta-class="title-serie-danonperd">'+end_block
patronBlock = r'<h2 class="title-typology styck-top" meta-class="title-serie-danonperd">'+end_block
## pagination = 25
elif item.args == 'classic':
patronBlock = r'<h2 class="title-typology styck-top" meta-class="title-serie-classiche">'+end_block
## patron = r'<a href="(?P<url>[^"]+)".*?>\s<img\s.*?src="(?P<thumb>[^"]+)"\s/>[^>]+>[^>]+>\s[^>]+>\s(?P<year>\d{4})?\s.+?class="strongText">(?P<title>.+?)<'
## pagination = 25
## elif item.args == 'anime':
##
##
else:
patronBlock = r'<div\s+class="container container-title-serie-new container-scheda" meta-slug="new">'+end_block
## patron = r'<a href="(?P<url>[^"]+)".*?>\s<img\s.*?src="(?P<thumb>[^"]+)"\s/>[^>]+>[^>]+>\s[^>]+>\s(?P<year>\d{4})?\s.+?class="strongText">(?P<title>.+?)<'
## pagination = 25
#support.regexDbg(item, patron, headers)
#support.regexDbg(item, patronBlock, headers)
#debug = True
return locals()
@support.scrape
def episodios(item):
log()
@@ -132,7 +134,7 @@ def episodios(item):
def itemHook(item):
item.title = item.title.replace(item.fulltitle, '').replace('-','',1)
return item
#debug = True
return locals()
@@ -174,9 +176,6 @@ def newest(categoria):
item.action = "peliculas"
itemlist = peliculas(item)
## if itemlist[-1].action == "peliculas":
## itemlist.pop()
# Continua la ricerca in caso di errore
except:
import sys

View File

@@ -3,18 +3,9 @@
 "name": "IlGenioDelloStreaming",
 "active": true,
 "adult": false,
-"language": ["ita", "vos"],
-"thumbnail": "https://i.imgur.com/Nsa81r0.png",
-"banner": "https://i.imgur.com/Nsa81r0.png",
+"language": ["ita", "sub-ita"],
+"thumbnail": "ilgeniodellostreaming.png",
+"banner": "ilgeniodellostreaming.png",
 "categories": ["movie", "tvshow", "anime", "vos"],
-"settings": [
-{
-"id": "include_in_newest_anime",
-"type": "bool",
-"label": "Includi in Novità - Anime",
-"default": false,
-"enabled": false,
-"visible": false
-}
-]
+"settings": []
 }

View File

@@ -6,9 +6,8 @@
"""
Problemi noti che non superano il test del canale:
NESSUNO (update 13-9-2019)
Alcuni video non si aprono sul sito...
Avvisi per il test:
i link per le categorie non sono TUTTI visibili nella pagina del sito:
vanno costruiti con i nomi dei generi che vedete nel CANALE.
@@ -22,17 +21,20 @@
genere-> televisione film
https://ilgeniodellostreaming.se/genere/televisione-film
Non va abilitato per:
Novità -> Anime
La pagina "Aggiornamenti Anime" del sito è vuota (update 13-9-2019)
Novità -> Serietv e Aggiornamenti nel canale:
- le pagine sono di 25 titoli
##### note per i dev #########
- La pagina "Aggiornamenti Anime" del sito è vuota (update 13-9-2019)
- in url: film o serietv
"""
import re
from platformcode import logger
from core import scrapertoolsV2, httptools, tmdb, support
from core.support import log, menu, aplay
from core import scrapertoolsV2, httptools, support
from core.support import log
from core.item import Item
from platformcode import config
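The channel's test notes above say that not all genre links are visible on the site and must be built from the genre names shown in the channel (e.g. "televisione film" becomes `/genere/televisione-film`). A minimal sketch of that URL construction, assuming a simple lowercase-and-hyphenate slug rule; the `genre_url` helper is hypothetical and the host comes from the note's own example:

```python
# Sketch of building genre page URLs from genre names, per the note above.
# Assumption: the slug is just the lowercased name with spaces -> hyphens.
host = "https://ilgeniodellostreaming.se"  # host from the note's example

def genre_url(name):
    # Lowercase the genre name and replace spaces with hyphens to form the slug
    slug = name.lower().strip().replace(' ', '-')
    return "%s/genere/%s" % (host, slug)

print(genre_url("televisione film"))
```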
@@ -49,136 +51,88 @@ def mainlist(item):
support.log(item)
film = ['/film/',
('Film Per Categoria',['', 'category', 'genres']),
('Film Per Anno',['', 'category', 'year']),
('Film Per Lettera',['/film-a-z/', 'category', 'letter']),
('Generi',['', 'genres', 'genres']),
('Per Lettera',['/film-a-z/', 'genres', 'letter']),
('Anni',['', 'genres', 'year']),
('Popolari',['/trending/?get=movies', 'peliculas', 'populared']),
('Più Votati', ['/ratings/?get=movies', 'peliculas', 'populared'])
]
tvshow = ['/serie/',
('Nuovi Episodi', ['/aggiornamenti-serie/', 'newep', 'tvshow']),
('TV Show', ['/tv-show/', 'peliculas', 'showtv', 'tvshow'])
]
anime = ['/anime/']
tvshow = ['/serie/',
('Aggiornamenti', ['/aggiornamenti-serie/', 'peliculas', 'update']),
('Popolari',['/trending/?get=tv', 'peliculas', 'populared']),
('Più Votati', ['/ratings/?get=tv', 'peliculas', 'populared'])
]
anime = ['/anime/'
]
Tvshow = [
('Show TV', ['/tv-show/', 'peliculas', '', 'tvshow'])
]
search = ''
return locals()
@support.scrape
def category(item):
log(item)
action='peliculas'
if item.args == 'genres':
patronBlock = r'<div class="sidemenu"><h2>Genere</h2>(?P<block>.*?)/li></ul></div>'
elif item.args == 'year':
patronBlock = r'<div class="sidemenu"><h2>Anno di uscita</h2>(?P<block>.*?)/li></ul></div>'
elif item.args == 'letter':
patronBlock = r'<div class="movies-letter">(?P<block>.*?)<div class="clearfix">'
patron = r'<a(?:.+?)?href="(?P<url>.*?)"[ ]?>(?P<title>.*?)<\/a>'
## debug = True
return locals()
def search(item, texto):
log(texto)
item.url = host + "/?s=" + texto
try:
return peliculas(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
@support.scrape
def peliculas(item):
log(item)
## import web_pdb; web_pdb.set_trace()
log()
if item.args == 'search':
if item.action == 'search':
patronBlock = r'<div class="search-page">(?P<block>.*?)<footer class="main">'
patron = r'<div class="thumbnail animation-2"><a href="(?P<url>[^"]+)">'\
'<img src="(?P<thumb>[^"]+)" alt="[^"]+" \/>[^>]+>(?P<type>[^<]+)'\
'<\/span>.*?<a href.*?>(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA)\])?'\
'<\/a>[^>]+>(?:<span class="rating">IMDb\s*(?P<rating>[0-9.]+)<\/span>)?'\
'.+?(?:<span class="year">(?P<year>[0-9]+)<\/span>)?[^>]+>[^>]+><p>(?P<plot>.*?)<\/p>'
type_content_dict={'movie': ['film'], 'tvshow': ['tv']}
type_action_dict={'findvideos': ['film'], 'episodios': ['tv']}
## elif item.args == 'newest':
## patronBlock = r'<div class="content"><header><h1>Aggiornamenti Serie</h1>'\
## '</header>(?P<block>.*?)</li></ul></div></div></div>'
## patron = r'src="(?P<thumb>[^"]+)".*?href="(?P<url>[^"]+)">[^>]+>(?P<episode>[^<]+)'\
## '<.*?"c">(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA)\])?<.+?<span class='\
## '"quality">(\5SUB-ITA|.+?)</span>'
## type_content_dict={'movie': ['film'], 'tvshow': ['tv']}
## type_action_dict={'findvideos': ['film'], 'episodios': ['tv']}
def itemHook(item):
if 'film' not in item.url:
item.contentType = 'tvshow'
item.action = 'episodios'
return item
else:
elif item.args == 'letter':
patron = r'<td class="mlnh-2"><a href="(?P<url>[^"]+)">(?P<title>.+?)'\
'[ ]?(?:\[(?P<lang>Sub-ITA)\])?<[^>]+>[^>]+>[^>]+>(?P<year>\d{4})\s+<'
elif item.args == 'populared':
patron = r'<div class="poster"><a href="(?P<url>[^"]+)"><img src='\
'"(?P<thumb>[^"]+)" alt="[^"]+"><\/a>[^>]+>[^>]+>[^>]+> '\
'(?P<rating>[0-9.]+)<[^>]+>[^>]+>'\
'(?P<quality>[3]?[D]?[H]?[V]?[D]?[/]?[R]?[I]?[P]?)(?:SUB-ITA)?<'\
'[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>.+?)(?:[ ]\[(?P<lang>Sub-ITA)\])?<'\
'[^>]+>[^>]+>[^>]+>(?P<year>\d{4})?<'
if item.contentType == 'movie':
endBlock = '</article></div>'
else:
endBlock = '<footer class="main">'
elif item.args == 'showtv':
action = 'episodios'
patron = r'<div class="poster"><a href="(?P<url>[^"]+)"><img src="(?P<thumb>[^"]+)" '\
'alt="[^"]+"><\/a>[^>]+>[^>]+>[^>]+> (?P<rating>[0-9.]+)<[^>]+>[^>]+>[^>]+>'\
'[^>]+>[^>]+>(?P<title>.+?)<[^>]+>[^>]+>[^>]+>(?P<year>\d{4})?<[^>]+>[^>]+>'\
'[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<plot>.+?)<'
patronBlock = r'<header><h1>.+?</h1>(?P<block>.*?)'+endBlock
elif item.contentType == 'movie' and item.args != 'genres':
patronBlock = r'<header><h1>Film</h1>(?P<block>.*?)<div class="pagination">'
patron = r'<div class="poster">\s*<a href="(?P<url>[^"]+)"><img src="(?P<thumb>[^"]+)" '\
'alt="[^"]+"><\/a>[^>]+>[^>]+>[^>]+>\s*(?P<rating>[0-9.]+)<\/div>'\
'<span class="quality">(?:SUB-ITA|)?(?P<quality>|[^<]+)?'\
'<\/span>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA)\])?'\
'<\/a>[^>]+>'\
'[^>]+>(?P<year>[^<]+)<\/span>[^>]+>[^>]+>'\
'[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<plot>[^<]+)<div'
if item.contentType == 'movie':
if item.args == 'letter':
patronBlock = r'<table class="table table-striped">(?P<block>.+?)</table>'
patron = r'<img src="(?P<thumb>[^"]+)"[^>]+>[^>]+>[^>]+><td class="mlnh-2"><a href="(?P<url>[^"]+)">(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA)\])?<[^>]+>[^>]+>[^>]+>(?P<year>\d{4})\s+<'
elif item.args == 'populared':
patron = r'<img src="(?P<thumb>[^"]+)" alt="[^"]+">[^>]+>[^>]+>[^>]+>[^>]+>\s+?(?P<rating>\d+.?\d+|\d+)<[^>]+>[^>]+>(?P<quality>[a-zA-Z\-]+)[^>]+>[^>]+>[^>]+>[^>]+><a href="(?P<url>[^"]+)">(?P<title>[^<]+)<[^>]+>[^>]+>[^>]+>(?P<year>\d+)<'
elif item.contentType == 'tvshow' or item.args == 'genres':
action = 'episodios'
patron = r'<div class="poster">\s*<a href="(?P<url>[^"]+)"><img '\
'src="(?P<thumb>[^"]+)" alt="[^"]+"><\/a>[^>]+>[^>]+>[^>]+> '\
'(?P<rating>[0-9.]+)<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>'\
'(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA|Sub-ita)\])?<[^>]+>[^>]+>'\
'[^>]+>(?P<year>[^<]+)<.*?<div class="texto">(?P<plot>[^<]+)'
## else:
## patron = r'<div class="thumbnail animation-2"><a href="(?P<url>[^"]+)">'\
## '<img src="(?P<thumb>[^"]+)" alt="[^"]+" \/>'\
## '[^>]+>(?P<type>[^<]+)<\/span>.*?<a href.*?>(?P<title>[^<]+)'\
## '<\/a>(?P<lang>[^>])+>[^>]+>(?:<span class="rating">IMDb\s*'\
## '(?P<rating>[0-9.]+)<\/span>)?.*?(?:<span class="year">(?P<year>[0-9]+)'\
## '<\/span>)?[^>]+>[^>]+><p>(?P<plot>.*?)<\/p>'
#### type_content_dict={'movie': ['film'], 'tvshow': ['tv']}
#### type_action_dict={'findvideos': ['film'], 'episodios': ['tv']}
#patron = r'<div class="poster">\s*<a href="(?P<url>[^"]+)"><img src="(?P<thumb>[^"]+)" alt="[^"]+"><\/a>[^>]+>[^>]+>[^>]+>\s*(?P<rating>[0-9.]+)<\/div><span class="quality">(?:SUB-ITA|)?(?P<quality>|[^<]+)?<\/span>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA)\])?<\/a>[^>]+>[^>]+>(?P<year>[^<]+)<\/span>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<plot>[^<]+)<div'
patron = r'<div class="poster">\s?<a href="(?P<url>[^"]+)"><img src="(?P<thumb>[^"]+)" alt="[^"]+"><\/a>[^>]+>[^>]+>[^>]+>\s*(?P<rating>[0-9.]+)<\/div>(?:<span class="quality">(?:SUB-ITA|)?(?P<quality>|[^<]+)?<\/span>)?[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA)\])?<\/a>[^>]+>[^>]+>(?P<year>[^<]+)<\/span>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<plot>[^<]+)<div'
else:
# TVSHOW
action = 'episodios'
if item.args == 'update':
action = 'findvideos'
patron = r'<div class="poster"><img src="(?P<thumb>[^"]+)"[^>]+>[^>]+><a href="(?P<url>[^"]+)">[^>]+>(?P<episode>[\d\-x]+)[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>.+?)(?:\[(?P<lang>Sub-ITA|Sub-ita)\])?<[^>]+>[^>]+>[^>]+>[^>]+>(?P<quality>[HD]+)?(?:.+?)?/span><p class="serie"'
pagination = 25
def itemHook(item):
item.contentType = 'episode'
return item
else:
patron = r'<div class="poster">\s?<a href="(?P<url>[^"]+)"><img src="(?P<thumb>[^"]+)" alt="[^"]+"><\/a>[^>]+>[^>]+>[^>]+> (?P<rating>[0-9.]+)<[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>.+?)[ ]?(?:\[(?P<lang>Sub-ITA|Sub-ita)\])?<[^>]+>[^>]+>[^>]+>(?P<year>[^<]+)(?:<.*?<div class="texto">(?P<plot>[^<]+))?'
patronNext = '<span class="current">[^<]+<[^>]+><a href="([^"]+)"'
## debug = True
return locals()
@support.scrape
def newep(item):
patron = r'<div class="poster"><img src="(?P<thumb>[^"]+)" alt="(?:.+?)[ ]?'\
'(?:\[(?P<lang>Sub-ITA|Sub-ita)\])?">[^>]+><a href="(?P<url>[^"]+)">'\
'[^>]+>(?P<episode>[^<]+)<[^>]+>[^>]+>[^>]+><span class="c">'\
'(?P<title>.+?)[ ]?(?:\[Sub-ITA\]|)<'
pagination = 10
## debug = True
#support.regexDbg(item, patron, headers)
#debug = True
return locals()
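Throughout this diff, scraper functions decorated with `@support.scrape` simply end in `return locals()`: the decorator reads names such as `patron`, `patronBlock`, `action`, and `pagination` out of the returned dict and performs the actual matching. A minimal, hypothetical sketch of how such a decorator can work (simplified names and behavior; not KOD's real implementation):

```python
import re

def scrape(func):
    # Decorator in the style of support.scrape: the wrapped function
    # returns locals(), and the decorator pulls the scraping parameters
    # out of that dict. Hypothetical, heavily simplified re-implementation.
    def wrapper(item):
        params = func(item)
        data = params.get('data', '')
        block_re = params.get('patronBlock')
        if block_re:
            # Narrow the HTML down to the named 'block' group first.
            m = re.search(block_re, data, re.DOTALL)
            data = m.group('block') if m else ''
        results = []
        for match in re.finditer(params['patron'], data, re.DOTALL):
            # Each named group (url, title, year, ...) becomes a field.
            entry = {k: v for k, v in match.groupdict().items() if v}
            entry['action'] = params.get('action', 'findvideos')
            results.append(entry)
        return results
    return wrapper

@scrape
def peliculas_demo(item):
    data = item['data']
    patronBlock = r'<ul>(?P<block>.*?)</ul>'
    patron = r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>'
    action = 'findvideos'
    return locals()

html = '<ul><a href="/film/1">Film One</a></ul>'
print(peliculas_demo({'data': html}))
# → [{'url': '/film/1', 'title': 'Film One', 'action': 'findvideos'}]
```

This is why overriding `patron` or `patronBlock` inside the branches above is enough to change what the channel lists.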
@@ -195,35 +149,62 @@ def episodios(item):
# debug = True
return locals()
@support.scrape
def genres(item):
log(item)
action='peliculas'
if item.args == 'genres':
patronBlock = r'<div class="sidemenu"><h2>Genere</h2>(?P<block>.*?)/li></ul></div>'
elif item.args == 'year':
item.args = 'genres'
patronBlock = r'<div class="sidemenu"><h2>Anno di uscita</h2>(?P<block>.*?)/li></ul></div>'
elif item.args == 'letter':
patronBlock = r'<div class="movies-letter">(?P<block>.*?)<div class="clearfix">'
patron = r'<a(?:.+?)?href="(?P<url>.*?)"[ ]?>(?P<title>.*?)<\/a>'
## debug = True
return locals()
def search(item, text):
log(text)
itemlist = []
text = text.replace(' ', '+')
item.url = host + "/?s=" + text
try:
item.args = 'search'
return peliculas(item)
except:
import sys
for line in sys.exc_info():
log("%s" % line)
return []
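`text.replace(' ', '+')` above only encodes spaces; other reserved characters in a query would still break the URL. A more defensive sketch using the standard library (the add-on itself targets Python 2, hence the fallback import; `build_search_url` is an illustrative helper, not part of the channel):

```python
try:
    from urllib.parse import quote_plus  # Python 3
except ImportError:
    from urllib import quote_plus        # Python 2, as this codebase uses

def build_search_url(host, text):
    # Encode every reserved character, not just spaces.
    return host + "/?s=" + quote_plus(text)

print(build_search_url("https://example.com", "la casa di carta & co"))
# → https://example.com/?s=la+casa+di+carta+%26+co
```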
def newest(categoria):
log(categoria)
itemlist = []
item = Item()
action = peliculas
if categoria == 'peliculas':
item.contentType = 'movie'
item.url = host + '/film/'
elif categoria == 'series':
action = newep
#item.args = 'newest'
item.args = 'update'
item.contentType = 'tvshow'
item.url = host + '/aggiornamenti-serie/'
## elif categoria == 'anime':
## item.contentType = 'tvshow'
## item.url = host + '/anime/'
try:
item.action = action
itemlist = action(item)
if itemlist[-1].action == action:
itemlist.pop()
except:
import sys
for line in sys.exc_info():
log("{0}".format(line))
return []
return itemlist
@@ -232,9 +213,31 @@ def newest(categoria):
def findvideos(item):
log()
itemlist =[]
matches, data = support.match(item, '<iframe class="metaframe rptss" src="([^"]+)"[^>]+>',headers=headers)
for url in matches:
html = httptools.downloadpage(url, headers=headers).data
data += str(scrapertoolsV2.find_multiple_matches(html, '<meta name="og:url" content="([^"]+)">'))
itemlist = support.server(item, data)
if item.args == 'update':
data = httptools.downloadpage(item.url).data
patron = r'<div class="item"><a href="'+host+'/serietv/([^"\/]+)\/"><i class="icon-bars">'
series = scrapertoolsV2.find_single_match(data, patron)
titles = support.typo(series.upper().replace('-', ' '), 'bold color kod')
goseries = support.typo("Vai alla Serie:", ' bold')
itemlist.append(
Item(channel=item.channel,
title=goseries + titles,
fulltitle=titles,
show=series,
contentType='tvshow',
contentSerieName=series,
url=host+"/serietv/"+series,
action='episodios',
contentTitle=titles,
plot = "Vai alla Serie: " + titles + " con tutte le puntate",
))
return itemlist


@@ -1,63 +0,0 @@
{
"id": "italiafilmhd",
"name": "ItaliaFilm HD",
"active": true,
"adult": false,
"language": ["ita"],
"thumbnail": "https://italiafilm.network/wp-content/uploads/2018/06/ITALIAFILM-HD.png",
"bannermenu": "https://italiafilm.network/wp-content/uploads/2018/06/ITALIAFILM-HD.png",
"categories": ["movie"],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Includi in ricerca globale",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Includi in Novità - Film",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_italiano",
"type": "bool",
"label": "Includi in Novità - Italiano",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "checklinks",
"type": "bool",
"label": "Verifica se i link esistono",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "checklinks_number",
"type": "list",
"label": "Numero di link da verificare",
"default": 1,
"enabled": true,
"visible": "eq(-1,true)",
"lvalues": [ "2", "5", "10", "15" ]
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostra link in lingua...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": ["Non filtrare","IT"]
}
]
}


@@ -1,364 +0,0 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Ringraziamo Icarus crew
# Canale per italiafilmhd
# ----------------------------------------------------------
import re
import urlparse
from core import scrapertools, servertools, httptools, tmdb, support
from core.item import Item
from platformcode import logger, config
from specials import autoplay
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['verystream', 'openload', 'youtube']
list_quality = ['default']
checklinks = config.get_setting('checklinks', 'italiafilmhd')
checklinks_number = config.get_setting('checklinks_number', 'italiafilmhd')
__channel__ = 'italiafilmhd'
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
def mainlist(item):
logger.info("kod.italiafilmhd mainlist")
autoplay.init(item.channel, list_servers, list_quality)
itemlist = [
Item(channel=item.channel,
title="[COLOR azure]Novita'[/COLOR]",
action="fichas",
url=host + "/cinema/",
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
title="[COLOR azure]Ultimi Film Inseriti[/COLOR]",
action="fichas",
url=host + "/film/",
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
title="[COLOR azure]Film per Genere[/COLOR]",
action="genere",
url=host,
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
title="Serie TV",
text_color="azure",
action="tv_series",
url="%s/serie-tv-hd/" % host,
thumbnail="http://orig03.deviantart.net/6889/f/2014/079/7/b/movies_and_popcorn_folder_icon_by_matheusgrilo-d7ay4tw.png"),
Item(channel=item.channel,
title="[COLOR orange]Cerca...[/COLOR]",
action="search",
extra="movie",
thumbnail="http://dc467.4shared.com/img/fEbJqOum/s7/13feaf0c8c0/Search")]
autoplay.show_option(item.channel, itemlist)
return itemlist
def newest(categoria):
logger.info("[italiafilmvideohd.py] newest" + categoria)
itemlist = []
item = Item()
try:
if categoria == "film":
item.url = host + "/cinema/"
item.action = "fichas"
itemlist = fichas(item)
if itemlist[-1].action == "fichas":
itemlist.pop()
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def search(item, texto):
logger.info("[italiafilmvideohd.py] " + item.url + " search " + texto)
item.url = host + "/?s=" + texto
try:
return fichas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def genere(item):
logger.info("[italiafilmvideohd.py] genere")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
patron = '<div class="sub_title">Genere</div>(.+?)</div>'
data = scrapertools.find_single_match(data, patron)
patron = '<li>.*?'
patron += 'href="([^"]+)".*?'
patron += '<i>([^"]+)</i>'
matches = re.compile(patron, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for scrapedurl, scrapedtitle in matches:
scrapedtitle = scrapedtitle.replace('&amp;', '-')
itemlist.append(
Item(channel=item.channel,
action="fichas",
title=scrapedtitle,
url=scrapedurl,
folder=True))
return itemlist
def fichas(item):
logger.info("[italiafilmvideohd.py] fichas")
itemlist = []
# Carica la pagina
data = httptools.downloadpage(item.url, headers=headers).data
# fix - calidad
patron = '<li class="item">.*?'
patron += 'href="([^"]+)".*?'
patron += 'title="([^"]+)".*?'
patron += '<img src="([^"]+)".*?'
matches = re.compile(patron, re.DOTALL).findall(data)
for scraped_2, scrapedtitle, scrapedthumbnail in matches:
scrapedurl = scraped_2
title = scrapertools.decodeHtmlentities(scrapedtitle)
# title += " (" + scrapedcalidad + ")
# ------------------------------------------------
scrapedthumbnail = httptools.get_url_headers(scrapedthumbnail)
# ------------------------------------------------
itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType="movie",
title=title,
url=scrapedurl,
thumbnail=scrapedthumbnail,
fulltitle=title,
show=scrapedtitle))
# Paginación
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)"\s*><span aria-hidden="true">&raquo;')
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="fichas",
title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
url=next_page,
text_color="orange",
thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
def tv_series(item):
logger.info("[italiafilmvideohd.py] tv_series")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
blocco = scrapertools.find_single_match(data, r'<ul class="list_mt">(.*?)</ul>')
patron = r'<a class="poster" href="([^"]+)" title="([^"]+)"[^>]*>\s*<img src="([^"]+)"[^>]+>'
matches = re.findall(patron, blocco, re.DOTALL)
for scrapedurl, scrapedtitle, scrapedthumbnail in matches:
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle).strip()
itemlist.append(
Item(channel=item.channel,
action="seasons",
contentType="tv",
title=scrapedtitle,
text_color="azure",
url=scrapedurl,
thumbnail=scrapedthumbnail,
fulltitle=scrapedtitle,
show=scrapedtitle))
# Pagine
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)"\s*><span aria-hidden="true">&raquo;')
if next_page != "":
itemlist.append(
Item(channel=item.channel,
action="fichas",
title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
text_color="orange",
url=next_page,
thumbnail="http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png"))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
def seasons(item):
logger.info("[italiafilmvideohd.py] seasons")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
url = scrapertools.find_single_match(data,
r'<div class="playerhdpass" id="playerhdpass">\s*[^>]+>\s*<iframe[^s]+src="([^"]+)"[^>]*></iframe>')
data = httptools.downloadpage(url, headers=headers).data
blocco = scrapertools.find_single_match(data, r'<h3>STAGIONE</h3>\s*<ul>(.*?)</ul>')
seasons = re.findall(r'<li[^>]*><a href="([^"]+)">([^<]+)</a></li>', blocco, re.DOTALL)
for scrapedurl, season in seasons:
itemlist.append(
Item(channel=item.channel,
action="episodes",
contentType=item.contentType,
title="Stagione: %s" % season,
text_color="azure",
url="https://hdpass.net/%s" % scrapedurl,
thumbnail=item.thumbnail,
fulltitle=item.fulltitle,
show=item.show))
return itemlist
def episodes(item):
logger.info("[italiafilmvideohd.py] episodes")
itemlist = []
data = httptools.downloadpage(item.url, headers=headers).data
blocco = scrapertools.find_single_match(data, r'<section id="seasons">(.*?)</section>')
episodes = re.findall(r'<li[^>]*><a href="([^"]+)">([^<]+)</a></li>', blocco, re.DOTALL)
for scrapedurl, episode in episodes:
itemlist.append(
Item(channel=item.channel,
action="findvid_series",
contentType=item.contentType,
title="Episodio: %s" % episode,
text_color="azure",
url="https://hdpass.net/%s" % scrapedurl,
thumbnail=item.thumbnail,
fulltitle=item.fulltitle,
show=item.show))
return itemlist
def findvideos(item):
logger.info("[italiafilmvideohd.py] findvideos")
itemlist = []
# Carica la pagina
data = httptools.downloadpage(item.url, headers=headers).data
patron = r'<div class="playerhdpass" id="playerhdpass"><iframe width=".+?" height=".+?" src="([^"]+)"'
url = scrapertools.find_single_match(data, patron)
if url:
data += httptools.downloadpage(url, headers=headers).data
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.title = item.title + videoitem.title
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.show = item.show
videoitem.plot = item.plot
videoitem.channel = item.channel
videoitem.contentType = item.contentType
videoitem.language = IDIOMAS['Italiano']
# Requerido para Filtrar enlaces
if checklinks:
itemlist = servertools.check_list_links(itemlist, checklinks_number)
# Requerido para FilterTools
# itemlist = filtertools.get_links(itemlist, item, list_language)
# Requerido para AutoPlay
autoplay.start(itemlist, item)
if item.contentType != 'episode':
if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos':
itemlist.append(
Item(channel=item.channel, title='[COLOR yellow][B]Aggiungi alla videoteca[/B][/COLOR]', url=item.url,
action="add_pelicula_to_library", extra="findvideos", contentTitle=item.contentTitle))
return itemlist
def findvid_series(item):
logger.info("[italiafilmvideohd.py] findvideos")
itemlist = []
# Carica la pagina
data = httptools.downloadpage(item.url, headers=headers).data.replace('\n', '')
patron = r'<iframe id="[^"]+" width="[^"]+" height="[^"]+" src="([^"]+)"[^>]+><\/iframe>'
url = scrapertools.find_single_match(data, patron).replace("?alta", "")
url = "https:" + url.replace("&download=1", "")
data = httptools.downloadpage(url, headers=headers).data
start = data.find('<div class="row mobileRes">')
end = data.find('<div id="playerFront">', start)
data = data[start:end]
patron_res = '<div class="row mobileRes">(.*?)</div>'
patron_mir = '<div class="row mobileMirrs">(.*?)</div>'
patron_media = r'<input type="hidden" name="urlEmbed" data-mirror="([^"]+)" id="urlEmbed" value="([^"]+)"\s*/>'
res = scrapertools.find_single_match(data, patron_res)
urls = []
for res_url, res_video in scrapertools.find_multiple_matches(res, '<option.*?value="([^"]+?)">([^<]+?)</option>'):
data = httptools.downloadpage(urlparse.urljoin(url, res_url), headers=headers).data.replace('\n', '')
mir = scrapertools.find_single_match(data, patron_mir)
for mir_url in scrapertools.find_multiple_matches(mir, '<option.*?value="([^"]+?)">[^<]+?</value>'):
data = httptools.downloadpage(urlparse.urljoin(url, mir_url), headers=headers).data.replace('\n', '')
for media_label, media_url in re.compile(patron_media).findall(data):
urls.append(support.url_decode(media_url))
itemlist = servertools.find_video_items(data='\n'.join(urls))
for videoitem in itemlist:
server = re.sub(r'[-\[\]\s]+', '', videoitem.title)
videoitem.text_color = "azure"
videoitem.title = "".join(["[%s] " % ("[COLOR orange]%s[/COLOR]" % server.capitalize()), item.title])
videoitem.fulltitle = item.fulltitle
videoitem.show = item.show
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
return itemlist


@@ -6,6 +6,7 @@
"language": ["ita","sub-ita"],
"thumbnail": "https:\/\/raw.githubusercontent.com\/Zanzibar82\/images\/master\/posters\/italiaserie.png",
"bannermenu": "https:\/\/raw.githubusercontent.com\/Zanzibar82\/images\/master\/posters\/italiaserie.png",
"categories": ["tvshow", "vos"],
"not_active": ["include_in_newest_peliculas", "include_in_newest_anime"],
"settings": []
}


@@ -4,15 +4,24 @@
# ------------------------------------------------------------
"""
Known issues that fail the channel test:

Notices:

Additional info:
"""
import re
from core import support, httptools, scrapertoolsV2
from core.item import Item
from platformcode import config
__channel__ = 'italiaserie'
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
list_servers = ['speedvideo']
list_quality = []
@@ -23,7 +32,7 @@ def mainlist(item):
support.log()
tvshow = ['/category/serie-tv/',
('Aggiornamenti', ['/ultimi-episodi/', 'peliculas', 'update']),
('Generi', ['', 'category', 'Serie-Tv per Genere'])
]
@@ -38,14 +47,31 @@ def peliculas(item):
patron = r'<div class="post-thumb">\s*<a href="(?P<url>[^"]+)" '\
'title="(?P<title>[^"]+)">\s*<img src="(?P<thumb>[^"]+)"[^>]+>'
if item.args == 'update':
patron += r'.*?aj-eps">(?P<episode>.+?)[ ]?(?P<lang>Sub-Ita|Ita)</span>'
action = 'findvideos'
patronNext = r'<a class="next page-numbers" href="(.*?)">'
## debug = True
return locals()
@support.scrape
def episodios(item):
support.log()
patronBlock = r'</i> Stagione (?P<block>(?P<season>\d+)</div> '\
'<div class="su-spoiler-content".*?)<div class="clearfix">'
patron = r'(?:(?P<season>\d+)?</div> <div class="su-spoiler-content"(:?.+?)?> )?'\
'<div class="su-link-ep">\s+<a.*?href="(?P<url>[^"]+)".*?strong>[ ]'\
'(?P<title>.+?)[ ](?P<episode>\d+-\d+|\d+)[ ](?:-\s+(?P<title2>.+?))?'\
'[ ]?(?:(?P<lang>Sub-ITA))?[ ]?</strong>'
#debug = True
return locals()
@support.scrape
def category(item):
support.log()
@@ -56,24 +82,10 @@ def category(item):
return locals()
def search(item, texto):
support.log("s=", texto)
item.url = host + "/?s=" + texto
item.contentType = 'tvshow'
try:
return peliculas(item)
# Continua la ricerca in caso di errore
@@ -92,7 +104,7 @@ def newest(categoria):
if categoria == "series":
item.url = host + "/ultimi-episodi/"
item.action = "peliculas"
item.args = "update"
item.contentType = "episode"
itemlist = peliculas(item)
@@ -111,4 +123,33 @@ def newest(categoria):
def findvideos(item):
support.log()
if item.args == 'update':
itemlist = []
item.infoLabels['mediatype'] = 'episode'
data = httptools.downloadpage(item.url, headers=headers).data
data = re.sub('\n|\t', ' ', data)
data = re.sub(r'>\s+<', '> <', data)
url_video = scrapertoolsV2.find_single_match(data, r'<a rel="[^"]+" target="[^"]+" act="[^"]+"\s+href="([^"]+)" class="[^"]+-link".+?\d+.+?</strong> </a>', -1)
url_serie = scrapertoolsV2.find_single_match(data, r'<link rel="canonical" href="([^"]+)" />')
goseries = support.typo("Vai alla Serie:", ' bold')
series = support.typo(item.contentSerieName, ' bold color kod')
itemlist = support.server(item, data=url_video)
itemlist.append(
Item(channel=item.channel,
title=goseries + series,
fulltitle=item.fulltitle,
show=item.show,
contentType='tvshow',
contentSerieName=item.contentSerieName,
url=url_serie,
action='episodios',
contentTitle=item.contentSerieName,
plot = goseries + series + " con tutte le puntate",
))
return itemlist
else:
return support.server(item, data=item.url)


@@ -4,8 +4,8 @@
"active": true,
"adult": false,
"language": ["ita"],
"thumbnail": "mondoserietv.png",
"bannermenu": "mondoserietv.png",
"categories": ["movie","anime","tvshow","documentary"],
"not_active":["include_in_newest_anime","include_in_newest_documentary"],
"settings": []


@@ -7,23 +7,19 @@
from core import scrapertoolsV2, httptools, support
from core.item import Item
# impostati dinamicamente da findhost()
host = ''
headers = ''
def findhost():
global host, headers
data = httptools.downloadpage('https://seriehd.nuovo.link/').data
host = scrapertoolsV2.find_single_match(data, r'<div class="elementor-button-wrapper"> <a href="([^"]+)"')
headers = [['Referer', host]]
return host
findhost()
list_servers = ['verystream', 'openload', 'streamango', 'thevideome']
list_quality = ['1080p', '720p', '480p', '360']
@support.menu
def mainlist(item):
findhost()
@@ -33,33 +29,9 @@ def mainlist(item):
return locals()
@support.scrape
def peliculas(item):
#findhost()
patron = r'<h2>(?P<title>.*?)</h2>\s*<img src="(?P<thumb>[^"]+)" alt="[^"]*" />\s*<A HREF="(?P<url>[^"]+)">.*?<span class="year">(?:(?P<year>[0-9]{4}))?.*?<span class="calidad">(?:(?P<quality>[A-Z]+))?.*?</span>'
patronNext=r'<span class="current">\d+</span><a rel="nofollow" class="page larger" href="([^"]+)">\d+</a>'
action='episodios'
@@ -68,6 +40,7 @@ def peliculas(item):
@support.scrape
def episodios(item):
#findhost()
data = ''
url = support.match(item, patronBlock=r'<iframe width=".+?" height=".+?" src="([^"]+)" allowfullscreen frameborder="0">')[1]
seasons = support.match(item, r'<a href="([^"]+)">(\d+)<', r'<h3>STAGIONE</h3><ul>(.*?)</ul>', headers, url)[0]
@@ -82,8 +55,35 @@ def episodios(item):
action = 'findvideos'
return locals()
@support.scrape
def genre(item):
#findhost()
patronMenu = '<a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>'
blacklist = ['Serie TV','Serie TV Americane','Serie TV Italiane','altadefinizione']
patronBlock = '<ul class="sub-menu">(?P<block>.*)</ul>'
action = 'peliculas'
return locals()
def search(item, texto):
support.log(texto)
findhost()
item.contentType = 'tvshow'
item.url = host + "/?s=" + texto
try:
return peliculas(item)
# Continua la ricerca in caso di errore .
except:
import sys
for line in sys.exc_info():
support.logger.error("%s" % line)
return []
def newest(categoria):
support.log(categoria)
findhost()
itemlist = []
item = support.Item()
try:


@@ -4,33 +4,9 @@
"active": true,
"adult": false,
"language": ["ita"],
"thumbnail": "serietvonline.png",
"bannermenu": "serietvonline.png",
"categories": ["anime","tvshow","movie","documentary"],
"not_active": ["include_in_newest_anime"],
"settings": []
}


@@ -1,29 +1,35 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Canale per serietvonline.py
# ----------------------------------------------------------
"""
Known issues that fail the channel test:
- the film ".45" alone appears in the title list with the channel title "."
- title search may not match exactly (regexes still need fixing);
  report the titles you had problems with, and whether they are films or series.
Novità (News). Sections in which the channel is present:
- film, serie
Notices:
- List pages show 24 titles per page instead of the site's single page
- The channel is included in global search only.
  List pages are slow to load because they also download the extra info...
- At most 25 titles for the sections: Film
- At most 35 titles for the sections: all the others
To add Anime to the videolibrary:
If episodes are in the 1x01 form:
- they can be added directly from the series page, via the bottom entry "aggiungi in videoteca".
Otherwise:
- first run 'Rinumerazione' from the series title's context menu
"""
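The renumbering note above hinges on detecting the `1x01` (season x episode) form. A minimal sketch of such a check (hypothetical helper, not part of the channel):

```python
import re

def parse_episode_marker(title):
    # Return (season, episode) if the title carries a 1x01-style marker,
    # else None (the entry would need renumbering first).
    m = re.search(r'(\d+)x(\d+)', title)
    if m:
        return int(m.group(1)), int(m.group(2))
    return None

print(parse_episode_marker("One Piece 1x05"))   # → (1, 5)
print(parse_episode_marker("One Piece Ep. 5"))  # → None
```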
import re
from core import support, httptools, scrapertoolsV2
from platformcode import config
from core.item import Item
__channel__ = "serietvonline"
host = ""
headers = ""
def findhost():
global host, headers
data = httptools.downloadpage('https://serietvonline.me/').data
host = scrapertoolsV2.find_single_match(data, r'<a class="pure-button pure-button-primary" title=\'serie tv online\' href="([^"]+)">')
headers = [['Referer', host]]
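`findhost()` above resolves the site's current domain at runtime by scraping a stable landing page, since these domains move often. The core of that pattern, reduced to an offline sketch (the HTML snippet and regex below are illustrative, not the live page):

```python
import re

def extract_host(html, link_pattern):
    # Given the landing-page HTML, pull the current domain of the
    # frequently-moving site out of an anchor tag.
    match = re.search(link_pattern, html)
    return match.group(1) if match else None

landing = '<a class="pure-button pure-button-primary" href="https://serietvonline.example">'
host = extract_host(landing, r'<a class="pure-button pure-button-primary" href="([^"]+)">')
print(host)  # → https://serietvonline.example
```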
list_servers = ['akvideo', 'wstream', 'backin', 'vidtome', 'nowvideo']
list_quality = ['default']
@@ -32,13 +38,18 @@ list_quality = ['default']
@support.menu
def mainlist(item):
support.log()
findhost()
film = ['/ultimi-film-aggiunti/',
('Lista', ['/lista-film/', 'peliculas', 'lista'])
]
tvshow = ['',
('Aggiornamenti', ['/ultimi-episodi-aggiunti/', 'peliculas', 'update']),
('Tutte', ['/lista-serie-tv/', 'peliculas', 'qualcosa']),
('Italiane', ['/lista-serie-tv-italiane/', 'peliculas', 'qualcosa']),
('Anni 50-60-70-80', ['/lista-serie-tv-anni-60-70-80/', 'peliculas', 'qualcosa']),
('HD', ['/lista-serie-tv-in-altadefinizione/', 'peliculas', 'qualcosa'])
]
anime = ['/lista-cartoni-animati-e-anime/']
@@ -52,108 +63,142 @@ def mainlist(item):
@support.scrape
def peliculas(item):
support.log()
#findhost()
blacklist = ['DMCA', 'Contatti', 'Attenzione NON FARTI OSCURARE', 'Lista Cartoni Animati e Anime']
patronBlock = r'<h1>.+?</h1>(?P<block>.*?)<div class="footer_c">'
patronNext = r'<div class="siguiente"><a href="([^"]+)" >'
if item.args == 'search':
patronBlock = r'>Lista Serie Tv</a></li></ul></div><div id="box_movies">(?P<block>.*?)<div id="paginador">'
patron = r'<div class="movie">[^>]+[^>]+>\s?<img src="(?P<thumb>[^"]+)" alt="(?P<title>.+?)\s?(?P<year>[\d\-]+)?"[^>]+>\s?<a href="(?P<url>[^"]+)">'
elif item.contentType == 'episode':
pagination = 35
action = 'findvideos'
patron = r'<td><a href="(?P<url>[^"]+)"(?:[^>]+)?>\s?(?P<title>[^<]+)(?P<episode>[\d\-x]+)?(?P<title2>[^<]+)?<'
elif item.contentType == 'tvshow':
# SEZIONE Serie TV- Anime - Documentari
pagination = 35
if not item.args and 'anime' not in item.url:
patron = r'<div class="movie">[^>]+>.+?src="(?P<thumb>[^"]+)" alt="[^"]+".+?href="(?P<url>[^"]+)">[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>[ ](?P<rating>\d+.\d+|\d+)<[^>]+>[^>]+><h2>(?P<title>[^"]+)</h2>[ ]?(?:<span class="year">(?P<year>\d+|\-\d+))?<'
else:
anime = True
patron = r'(?:<td>)?<a href="(?P<url>[^"]+)"(?:[^>]+)?>\s?(?P<title>[^<]+)(?P<episode>[\d\-x]+)?(?P<title2>[^<]+)?<'
else:
# SEZIONE FILM
pagination = 25
if item.args == 'lista':
patron = r'href="(?P<url>[^"]+)"[^>]+>(?P<title>.*?)[ ]?(?P<year>\d+)?(?: Streaming | MD iSTANCE )?<'
patronBlock = r'Lista dei film disponibili in streaming e anche in download\.</p>(?P<block>.*?)<div class="footer_c">'
else:
#patronBlock = r'<h1>Ultimi film aggiunti</h1>(?P<block>.*?)<div class="footer_c">'
patron = r'<tr><td><a href="(?P<url>[^"]+)"(?:|.+?)?>(?:&nbsp;&nbsp;)?[ ]?(?P<title>.*?)[ ]?(?P<quality>HD)?[ ]?(?P<year>\d+)?(?: | HD | Streaming | MD(?: iSTANCE)? )?</a>'
def itemHook(item):
if 'film' in item.url:
item.action = 'findvideos'
item.contentType = 'movie'
elif item.args == 'update':
pass
else:
item.contentType = 'tvshow'
item.action = 'episodios'
return item
#support.regexDbg(item, patronBlock, headers)
#debug = True
return locals()
@support.scrape
def episodios(item):
support.log()
#findhost()
action = 'findvideos'
patronBlock = r'<table>(?P<block>.*?)<\/table>'
patron = r'<tr><td>(?:[^<]+)[ ](?:Parte)?(?P<episode>\d+x\d+|\d+)(?:|[ ]?(?P<title2>.+?)?(?:avi)?)<(?P<url>.*?)</td><tr>'
patron = r'<tr><td>(?:[^<]+)[ ](?:Parte)?(?P<episode>\d+x\d+|\d+)(?:|[ ]?'\
'(?P<title2>.+?)?)<(?P<url>.*?)</td><tr>'
## debug = True
#debug = True
return locals()
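The channel functions above follow KOD's convention: each one declares its scraping parameters (`patron`, `patronBlock`, `action`, ...) as plain locals and ends with `return locals()`, which the `@support.scrape` decorator consumes. A toy sketch of that convention (greatly simplified; the real `support.scrape` also handles pagination, `typo` formatting, TMDB lookups, etc.):

```python
import re

def scrape(func):
    # The decorated function just declares its variables and returns locals();
    # the decorator pulls out the scraping parameters and does the matching.
    def wrapper(data):
        params = func(data)
        block_re = params.get('patronBlock')
        item_re = params['patron']
        block = re.search(block_re, data, re.S).group('block') if block_re else data
        return [m.groupdict() for m in re.finditer(item_re, block)]
    return wrapper

@scrape
def peliculas(data):
    patronBlock = r'<ul>(?P<block>.*?)</ul>'
    patron = r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>'
    return locals()

html = '<ul><a href="/film1">Film 1</a><a href="/film2">Film 2</a></ul>'
print(peliculas(html))
# → [{'url': '/film1', 'title': 'Film 1'}, {'url': '/film2', 'title': 'Film 2'}]
```

The named groups in `patron` (`url`, `title`, `year`, ...) become the fields of each scraped item, which is why the channel regexes above all use `(?P<name>...)` captures.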
def search(item, text):
support.log("CERCA :" ,text, item)
findhost()
item.url = "%s/?s=%s" % (host, text)
try:
item.args = 'search'
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
support.log("%s" % line)
return []
def newest(categoria):
support.log(categoria)
findhost()
itemlist = []
item = Item()
if categoria == 'peliculas':
item.contentType = 'movie'
item.url = host + '/ultimi-film-aggiunti/'
elif categoria == 'series':
item.args = 'update'
item.contentType = 'episode'
item.url = host +'/ultimi-episodi-aggiunti/'
try:
item.action = 'peliculas'
itemlist = peliculas(item)
except:
import sys
for line in sys.exc_info():
support.log("{0}".format(line))
return []
return itemlist
def findvideos(item):
support.log()
if item.contentType == 'movie':
return support.server(item, headers=headers)
else:
return support.server(item, item.url)
if item.args != 'update':
return support.server(item, item.url)
else:
itemlist = []
item.infoLabels['mediatype'] = 'episode'
def search(item, texto):
support.log("CERCA :" ,texto, item)
item.url = "%s/?s=%s" % (host, texto)
data = httptools.downloadpage(item.url, headers=headers).data
data = re.sub('\n|\t', ' ', data)
data = re.sub(r'>\s+<', '> <', data)
#support.log("DATA - HTML:\n", data)
url_video = scrapertoolsV2.find_single_match(data, r'<tr><td>(.+?)</td><tr>', -1)
url_serie = scrapertoolsV2.find_single_match(data, r'<link rel="canonical" href="([^"]+)"\s?/>')
goseries = support.typo("Vai alla Serie:", ' bold')
series = support.typo(item.contentSerieName, ' bold color kod')
itemlist = support.server(item, data=url_video)
try:
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
itemlist.append(
Item(channel=item.channel,
title=goseries + series,
fulltitle=item.fulltitle,
show=item.show,
contentType='tvshow',
contentSerieName=item.contentSerieName,
url=url_serie,
action='episodios',
contentTitle=item.contentSerieName,
plot = goseries + series + " con tutte le puntate",
))
return itemlist

View File

@@ -3,68 +3,10 @@
"name": "SerieTVU",
"active": true,
"adult": false,
"language": ["ita"],
"language": ["ita", "sub-ita"],
"thumbnail": "serietvu.png",
"banner": "serietvu.png",
"categories": ["tvshow"],
"settings": [
{
"id": "channel_host",
"type": "text",
"label": "Host del canale",
"default": "https://www.serietvu.club",
"enabled": true,
"visible": true
},
{
"id": "include_in_global_search",
"type": "bool",
"label": "Includi ricerca globale",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_series",
"type": "bool",
"label": "Includi in Novità - Serie TV",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_italiano",
"type": "bool",
"label": "Includi in Novità - Italiano",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "checklinks",
"type": "bool",
"label": "Verifica se i link esistono",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "checklinks_number",
"type": "list",
"label": "Numero de link da verificare",
"default": 1,
"enabled": true,
"visible": "eq(-1,true)",
"lvalues": [ "1", "3", "5", "10" ]
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostra link in lingua...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": ["Non filtrare","IT"]
}
]
"categories": ["tvshow", "vos"],
"not_active": ["include_in_newest_peliculas", "include_in_newest_anime"],
"settings": []
}

View File

@@ -1,28 +1,22 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Canale per SerieTVU
# Thanks to Icarus crew & Alfa addon & 4l3x87
# Canale per serietvu.py
# ----------------------------------------------------------
"""
Trasformate le sole def per support.menu e support.scrape
da non inviare nel test.
Test solo a trasformazione completa
La pagina novità contiene al max 25 titoli
"""
import re
from core import tmdb, scrapertools, support
from core import support, httptools, scrapertoolsV2
from core.item import Item
from core.support import log
from platformcode import config, logger
from platformcode import config
__channel__ = 'serietvu'
host = config.get_channel_url(__channel__)
headers = [['Referer', host]]
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
list_servers = ['speedvideo']
list_quality = ['default']
@@ -30,235 +24,74 @@ list_quality = ['default']
@support.menu
def mainlist(item):
log()
tvshow = ['/category/serie-tv',
('Novità', ['/ultimi-episodi', 'latestep']),
('Categorie', ['', 'categorie'])
('Novità', ['/ultimi-episodi/', 'peliculas', 'update']),
('Generi', ['', 'genres', 'genres'])
]
return locals()
# ----------------------------------------------------------------------------------------------------------------
def cleantitle(scrapedtitle):
log()
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.strip())
scrapedtitle = scrapedtitle.replace('[HD]', '').replace('’', '\'').replace(' Il Trono di Spade', '').replace(
'Flash 2014', 'Flash').replace('"', "'")
year = scrapertools.find_single_match(scrapedtitle, '\((\d{4})\)')
if year:
scrapedtitle = scrapedtitle.replace('(' + year + ')', '')
return scrapedtitle.strip()
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
@support.scrape
def peliculas(item):
log()
itemlist = []
patron = r'<div class="item">\s*<a href="([^"]+)" data-original="([^"]+)" class="lazy inner">'
patron += r'[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<'
matches, data = support.match(item, patron, headers=headers)
patronBlock = r'<div class="wrap">\s+<h.>.*?</h.>(?P<block>.*?)<footer>'
for scrapedurl, scrapedimg, scrapedtitle in matches:
infoLabels = {}
year = scrapertools.find_single_match(scrapedtitle, '\((\d{4})\)')
if year:
infoLabels['year'] = year
scrapedtitle = cleantitle(scrapedtitle)
if item.args != 'update':
action = 'episodios'
patron = r'<div class="item">\s*<a href="(?P<url>[^"]+)" data-original="(?P<thumb>[^"]+)" class="lazy inner">[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>[^<]+)<'
else:
action = 'findvideos'
patron = r'<div class="item">\s+?<a href="(?P<url>[^"]+)"\s+?data-original="(?P<thumb>[^"]+)"[^>]+>[^>]+>[^>]+>[^>]+>[^>]+>(?P<title>.+?)<[^>]+>\((?P<episode>[\dx\-]+)\s+?(?P<lang>Sub-Ita|[iITtAa]+)\)<'
pagination = 25
itemlist.append(
Item(channel=item.channel,
action="episodios",
title=scrapedtitle,
fulltitle=scrapedtitle,
url=scrapedurl,
thumbnail=scrapedimg,
show=scrapedtitle,
infoLabels=infoLabels,
contentType='episode',
folder=True))
patronNext = r'<li><a href="([^"]+)"\s+?>Pagina successiva'
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
#support.regexDbg(item, patron, headers)
#debug = True
return locals()
# Pagine
support.nextPage(itemlist, item, data, '<li><a href="([^"]+)">Pagina successiva')
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
@support.scrape
def episodios(item):
log()
itemlist = []
patron = r'<option value="(\d+)"[\sselected]*>.*?</option>'
matches, data = support.match(item, patron, headers=headers)
for value in matches:
patron = r'<div class="list [active]*" data-id="%s">(.*?)</div>\s*</div>' % value
blocco = scrapertools.find_single_match(data, patron)
log(blocco)
patron = r'(<a data-id="\d+[^"]*" data-href="([^"]+)"(?:\sdata-original="([^"]+)")?\sclass="[^"]+">)[^>]+>[^>]+>([^<]+)<'
matches = scrapertools.find_multiple_matches(blocco, patron)
for scrapedextra, scrapedurl, scrapedimg, scrapedtitle in matches:
contentlanguage = ''
if 'sub-ita' in scrapedtitle.lower():
contentlanguage = 'Sub-ITA'
scrapedtitle = scrapedtitle.replace(contentlanguage, '')
number = cleantitle(scrapedtitle.replace("Episodio", "")).strip()
title = value + "x" + number.zfill(2)
title += " "+support.typo(contentlanguage, '_ [] color kod') if contentlanguage else ''
infoLabels = {}
infoLabels['episode'] = number.zfill(2)
infoLabels['season'] = value
itemlist.append(
Item(channel=item.channel,
action="findvideos",
title=title,
fulltitle=scrapedtitle,
contentType="episode",
url=scrapedurl,
thumbnail=scrapedimg,
extra=scrapedextra,
infoLabels=infoLabels,
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
support.videolibrary(itemlist, item, 'bold color kod')
return itemlist
patronBlock = r'</select><div style="clear:both"></div></h2>(?P<block>.*?)<div id="trailer" class="tab">'
patron = r'(?:<div class="list (?:active)?" data-id="(?P<season>\d+)">[^>]+>)?\s+<a data-id="(?P<episode>\d+)(?:[ ](?P<lang>[SuUbBiItTaA\-]+))?"(?P<url>[^>]+)>[^>]+>[^>]+>(?P<title>.+?)(?:\sSub-ITA)?<'
#support.regexDbg(item, patronBlock, headers)
#debug = True
return locals()
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findvideos(item):
log()
return support.server(item, data=item.url)
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def findepisodevideo(item):
@support.scrape
def genres(item):
log()
patron_block = r'<div class="list [active]*" data-id="%s">(.*?)</div>\s*</div>' % item.extra[0][0]
patron = r'<a data-id="%s[^"]*" data-href="([^"]+)"(?:\sdata-original="[^"]+")?\sclass="[^"]+">' % item.extra[0][1].lstrip("0")
matches = support.match(item, patron, patron_block, headers)[0]
data = ''
if len(matches) > 0:
data = matches[0]
item.contentType = 'movie'
return support.server(item, data=data)
blacklist = ["Home Page", "Calendario Aggiornamenti"]
action = 'peliculas'
patronBlock = r'<h2>Sfoglia</h2>\s*<ul>(?P<block>.*?)</ul>\s*</section>'
patron = r'<li><a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a></li>'
#debug = True
return locals()
# ================================================================================================================
def search(item, text):
log(text)
item.url = host + "/?s=" + text
try:
item.contentType = 'tvshow'
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
log("%s" % line)
return []
# ----------------------------------------------------------------------------------------------------------------
def latestep(item):
log()
itemlist = []
titles = []
patron_block = r"Ultimi episodi aggiunti.*?<h2>"
patron = r'<a href="([^"]*)"\sdata-src="([^"]*)"\sclass="owl-lazy.*?".*?class="title">(.*?)<small>\((\d*?)x(\d*?)\s(Sub-Ita|Ita)'
matches = support.match(item, patron, patron_block, headers, host)[0]
for scrapedurl, scrapedimg, scrapedtitle, scrapedseason, scrapedepisode, scrapedlanguage in matches:
infoLabels = {}
year = scrapertools.find_single_match(scrapedtitle, '\((\d{4})\)')
if year:
infoLabels['year'] = year
infoLabels['episode'] = scrapedepisode
infoLabels['season'] = scrapedseason
episode = scrapedseason + "x" + scrapedepisode
scrapedtitle = cleantitle(scrapedtitle)
title = scrapedtitle + " - " + episode
contentlanguage = ""
if scrapedlanguage.strip().lower() != 'ita':
title += " "+support.typo("Sub-ITA", '_ [] color kod')
contentlanguage = 'Sub-ITA'
titles.append(title)
itemlist.append(
Item(channel=item.channel,
action="findepisodevideo",
title=title,
fulltitle=title,
url=scrapedurl,
extra=[[scrapedseason, scrapedepisode]],
thumbnail=scrapedimg,
contentSerieName=scrapedtitle,
contentLanguage=contentlanguage,
contentType='episode',
infoLabels=infoLabels,
folder=True))
patron = r'<div class="item">\s*<a href="([^"]+)" data-original="([^"]+)" class="lazy inner">'
patron += r'[^>]+>[^>]+>[^>]+>[^>]+>([^<]+)<small>([^<]+)<'
matches = support.match(item, patron, headers=headers)[0]
for scrapedurl, scrapedimg, scrapedtitle, scrapedinfo in matches:
infoLabels = {}
year = scrapertools.find_single_match(scrapedtitle, '\((\d{4})\)')
if year:
infoLabels['year'] = year
scrapedtitle = cleantitle(scrapedtitle)
infoLabels['tvshowtitle'] = scrapedtitle
episodio = re.compile(r'(\d+)x(\d+)', re.DOTALL).findall(scrapedinfo)
infoLabels['episode'] = episodio[0][1]
infoLabels['season'] = episodio[0][0]
episode = infoLabels['season'] + "x" + infoLabels['episode']
title = "%s - %s" % (scrapedtitle, episode)
title = title.strip()
contentlanguage = ""
if 'sub-ita' in scrapedinfo.lower():
title += " "+support.typo("Sub-ITA", '_ [] color kod')
contentlanguage = 'Sub-ITA'
if title in titles: continue
itemlist.append(
Item(channel=item.channel,
action="findepisodevideo",
title=title,
fulltitle=title,
url=scrapedurl,
extra=episodio,
thumbnail=scrapedimg,
contentSerieName=scrapedtitle,
contentLanguage=contentlanguage,
infoLabels=infoLabels,
contentType='episode',
folder=True))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def newest(categoria):
log(categoria)
itemlist = []
@@ -266,51 +99,52 @@ def newest(categoria):
try:
if categoria == "series":
item.url = host + "/ultimi-episodi"
item.action = "latestep"
itemlist = latestep(item)
if itemlist[-1].action == "latestep":
itemlist.pop()
item.action = "peliculas"
item.contentType = 'tvshow'
item.args = 'update'
itemlist = peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
log("{0}".format(line))
return []
return itemlist
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
def search(item, texto):
log(texto)
item.url = host + "/?s=" + texto
try:
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
# ================================================================================================================
# ----------------------------------------------------------------------------------------------------------------
@support.scrape
def categorie(item):
def findvideos(item):
log()
if item.args != 'update':
return support.server(item, data=item.url)
else:
itemlist = []
item.infoLabels['mediatype'] = 'episode'
blacklist = ["Home Page", "Calendario Aggiornamenti"]
action = 'peliculas'
patronBlock = r'<h2>Sfoglia</h2>\s*<ul>(?P<block>.*?)</ul>\s*</section>'
patron = r'<li><a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a></li>'
debug = True
data = httptools.downloadpage(item.url, headers=headers).data
data = re.sub('\n|\t', ' ', data)
data = re.sub(r'>\s+<', '> <', data)
## support.log("DATA - HTML:\n", data)
url_video = scrapertoolsV2.find_single_match(data, r'<div class="item"> <a data-id="[^"]+" data-href="([^"]+)" data-original="[^"]+"[^>]+> <div> <div class="title">Episodio \d+', -1)
url_serie = scrapertoolsV2.find_single_match(data, r'<link rel="canonical" href="([^"]+)"\s?/>')
goseries = support.typo("Vai alla Serie:", ' bold')
series = support.typo(item.contentSerieName, ' bold color kod')
return locals()
# ================================================================================================================
itemlist = support.server(item, data=url_video)
itemlist.append(
Item(channel=item.channel,
title=goseries + series,
fulltitle=item.fulltitle,
show=item.show,
contentType='tvshow',
contentSerieName=item.contentSerieName,
url=url_serie,
action='episodios',
contentTitle=item.contentSerieName,
plot = goseries + series + " con tutte le puntate",
))
#support.regexDbg(item, patronBlock, headers)
return itemlist

View File

@@ -4,8 +4,8 @@
"language": ["ita"],
"active": true,
"adult": false,
"thumbnail": "https://www.popcornstream.best/wp-content/uploads/2019/09/PopLogo40.png",
"banner": "https://www.popcornstream.info/media/PopcornStream820x428.png",
"thumbnail": "popcornstream.png",
"banner": "popcornstream.png",
"categories": ["movie","tvshow","anime"],
"settings": []
}

View File

@@ -2,7 +2,7 @@
"id": "streamtime",
"name": "StreamTime",
"language": ["ita"],
"active": true,
"active": false,
"adult": false,
"thumbnail": "",
"banner": "streamtime.png",

View File

@@ -4,76 +4,9 @@
"language": ["ita"],
"active": true,
"adult": false,
"thumbnail": "https://raw.githubusercontent.com/Zanzibar82/images/master/posters/tantifilm.png",
"banner": "https://raw.githubusercontent.com/Zanzibar82/images/master/posters/tantifilm.png",
"categories": ["tvshow", "movie", "anime"],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Includi in ricerca globale",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Includi in Novità - Film",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_italiano",
"type": "bool",
"label": "Includi in Novità - Italiano",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "checklinks",
"type": "bool",
"label": "Verifica se i link esistono",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "checklinks_number",
"type": "list",
"label": "Numero di link da verificare",
"default": 1,
"enabled": true,
"visible": "eq(-1,true)",
"lvalues": [ "5", "10", "15", "20" ]
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostra link in lingua...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": ["Non filtrare","IT"]
},
{
"id": "autorenumber",
"type": "bool",
"label": "@70712",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "autorenumber_mode",
"type": "bool",
"label": "@70688",
"default": false,
"enabled": true,
"visible": "eq(-1,true)"
}
]
"thumbnail": "tantifilm.png",
"banner": "tantifilm.png",
"categories": ["tvshow", "movie", "anime"],
"not_active":["include_in_newest_anime"],
"settings": []
}

View File

@@ -7,7 +7,7 @@
"thumbnail": "toonitalia.png",
"banner": "toonitalia.png",
"categories": ["tvshow", "movie", "vos", "anime"],
"not_active":["include_in_newest_peliculas"],
"default_off":["include_in_newest"],
"not_active":["include_in_newest_peliculas", "include_in_newest_series"],
"default_off":["include_in_newest"],
"settings": []
}

View File

@@ -137,12 +137,15 @@ def episodios(item):
episodes.append(episode['episodes'])
for episode in episodes:
for key in episode:
if 'stagione' in key['title'].lower():
match = support.match(key['title'].encode('ascii', 'replace'), r'[Ss]tagione\s*(\d+) - [Ee]pisodio\s*(\d+)')[0][0]
if 'stagione' in key['title'].encode('utf8').lower():
match = support.match(key['title'].encode('utf8'), r'[Ss]tagione\s*(\d+) - [Ee]pisodio\s*(\d+)')[0][0]
title = match[0]+'x'+match[1] + ' - ' + item.fulltitle
make_item = True
elif int(key['season_id']) == int(season_id):
title = 'Episodio ' + key['number'] + ' - ' + key['title'].encode('ascii', 'replace'),
try:
title = 'Episodio ' + key['number'] + ' - ' + key['title'].encode('utf8')
except:
title = 'Episodio ' + key['number'] + ' - ' + key['title']
make_item = True
else:
make_item = False
@@ -193,21 +196,21 @@ def make_itemlist(itemlist, item, data):
search = item.search if item.search else ''
infoLabels = {}
for key in data['data']:
if search.lower() in key['title'].lower():
if search.lower() in key['title'].encode('utf8').lower():
infoLabels['year'] = key['date_published']
infoLabels['title'] = infoLabels['tvshowtitle'] = key['title']
support.log(infoLabels)
title = key['title'].encode('utf8')
itemlist.append(
Item(
channel = item.channel,
title = support.typo(key['title'], 'bold'),
fulltitle= key['title'],
show= key['title'],
title = support.typo(title, 'bold'),
fulltitle= title,
show= title,
url= host + str(key['show_id']) + '/seasons/',
action= 'findvideos' if item.contentType == 'movie' else 'episodios',
contentType = item.contentType,
contentSerieName= key['title'] if item.contentType != 'movie' else '',
contentTitle= key['title'] if item.contentType == 'movie' else '',
contentTitle= title if item.contentType == 'movie' else '',
infoLabels=infoLabels
))
return itemlist

View File

@@ -361,11 +361,11 @@ def thumb(itemlist=[], genre=False):
'channels_adventure': ['avventura', 'adventure'],
'channels_biographical':['biografico'],
'channels_comedy':['comico','commedia', 'demenziale', 'comedy'],
'channels_adult':['erotico', 'hentai'],
'channels_adult':['erotico', 'hentai', 'harem', 'ecchi'],
'channels_drama':['drammatico', 'drama'],
'channels_syfy':['fantascienza', 'science fiction'],
'channels_fantasy':['fantasy'],
'channels_crime':['gangster','poliziesco', 'crime'],
'channels_fantasy':['fantasy', 'magia'],
'channels_crime':['gangster','poliziesco', 'crime', 'crimine'],
'channels_grotesque':['grottesco'],
'channels_war':['guerra', 'war'],
'channels_children':['bambini', 'kids'],

View File

@@ -36,7 +36,7 @@ def get_channel_parameters(channel_name):
# logger.debug(channel_parameters)
if channel_parameters:
# cambios de nombres y valores por defecto
channel_parameters["title"] = channel_parameters.pop("name")
channel_parameters["title"] = channel_parameters.pop("name") + (' [DEPRECATED]' if channel_parameters.has_key('deprecated') and channel_parameters['deprecated'] else '')
channel_parameters["channel"] = channel_parameters.pop("id")
# si no existe el key se declaran valor por defecto para que no de fallos en las funciones que lo llaman
@@ -375,8 +375,6 @@ def set_channel_setting(name, value, channel):
except EnvironmentError:
logger.error("ERROR al leer el archivo: %s" % file_settings)
dict_settings[name] = value
# delete unused Settings
def_keys = []
del_keys = []
@@ -388,11 +386,12 @@ def set_channel_setting(name, value, channel):
for key in del_keys:
del dict_settings[key]
dict_settings[name] = value
# comprobamos si existe dict_file y es un diccionario, sino lo creamos
if dict_file is None or not dict_file:
dict_file = {}
dict_file['settings'] = dict_settings
# Creamos el archivo ../settings/channel_data.json

View File

@@ -218,6 +218,7 @@ def scrapeBlock(item, args, block, patron, headers, action, pagination, debug, t
scraped[kk] = val
if scraped['season']:
stagione = scraped['season']
episode = scraped['season'] +'x'+ scraped['episode']
elif stagione:
episode = stagione +'x'+ scraped['episode']
@@ -236,7 +237,7 @@ def scrapeBlock(item, args, block, patron, headers, action, pagination, debug, t
# make formatted Title [longtitle]
s = ' - '
title = episode + (s if episode and title else '') + title
title = episode + (s if episode and title else '') + title
longtitle = title + (s if title and title2 else '') + title2
longtitle = typo(longtitle, 'bold')
longtitle += typo(quality, '_ [] color kod') if quality else ''
@@ -400,7 +401,7 @@ def scrape(func):
if 'itemlistHook' in args:
itemlist = args['itemlistHook'](itemlist)
if (pagination and len(matches) <= pag * pagination) or not pagination: # next page with pagination
if patronNext and inspect.stack()[1][3] != 'newest':
nextPage(itemlist, item, data, patronNext, function)
@@ -682,7 +683,7 @@ def typo(string, typography=''):
kod_color = '0xFF65B3DA' #'0xFF0081C2'
string = str(string)
string = str(string.encode('utf8'))
# Check if the typographic attributes are in the string or outside
if typography:
string = string + ' ' + typography
@@ -910,7 +911,9 @@ def server(item, data='', itemlist=[], headers='', AutoPlay=True, CheckLinks=Tru
continue
videoitem.server = findS[2]
videoitem.title = findS[0]
item.title = item.contentTitle if config.get_localized_string(30161) in item.title else item.title
videoitem.url = findS[1]
item.title = item.contentTitle.strip() if item.contentType == 'movie' or (
config.get_localized_string(30161) in item.title) else item.title
videoitem.title = item.title + (typo(videoitem.title, '_ color kod []') if videoitem.title else "") + (typo(videoitem.quality, '_ color kod []') if videoitem.quality else "")
videoitem.fulltitle = item.fulltitle
videoitem.show = item.show

View File

@@ -370,13 +370,13 @@ def set_infoLabels_item(item, seekTmdb=True, idioma_busqueda=def_lang, lock=None
if temporada:
# Actualizar datos
__leer_datos(otmdb_global)
item.infoLabels['title'] = temporada['name']
if temporada['overview']:
item.infoLabels['title'] = temporada['name'] if temporada.has_key('name') else ''
if temporada.has_key('overview') and temporada['overview']:
item.infoLabels['plot'] = temporada['overview']
if temporada['air_date']:
if temporada.has_key('air_date') and temporada['air_date']:
date = temporada['air_date'].split('-')
item.infoLabels['aired'] = date[2] + "/" + date[1] + "/" + date[0]
if temporada['poster_path']:
if temporada.has_key('poster_path') and temporada['poster_path']:
item.infoLabels['poster_path'] = 'http://image.tmdb.org/t/p/original' + temporada['poster_path']
item.thumbnail = item.infoLabels['poster_path']

View File

@@ -469,47 +469,51 @@ class UnshortenIt(object):
def _unshorten_vcrypt(self, uri):
r = None
import base64, pyaes
try:
r = None
import base64, pyaes
def decrypt(str):
str = str.replace("_ppl_", "+").replace("_eqq_", "=").replace("_sll_", "/")
iv = "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
key = "naphajU2usWUswec"
decoded = base64.b64decode(str)
decoded = decoded + '\0' * (len(decoded) % 16)
crypt_object = pyaes.AESModeOfOperationCBC(key, iv)
decrypted = ''
for p in range(0, len(decoded), 16):
decrypted += crypt_object.decrypt(decoded[p:p + 16]).replace('\0', '')
return decrypted
if 'shield' in uri.split('/')[-2]:
uri = decrypt(uri.split('/')[-1])
else:
import datetime, hashlib
ip = urllib.urlopen('http://ip.42.pl/raw').read()
day = datetime.date.today().strftime('%Y%m%d')
headers = {
"Cookie": hashlib.md5(ip+day).hexdigest() + "=1"
}
uri = uri.replace('sb/', 'sb1/')
uri = uri.replace('akv/', 'akv1/')
uri = uri.replace('wss/', 'wss1/')
uri = uri.replace('wsd/', 'wsd1/')
r = httptools.downloadpage(uri, timeout=self._timeout, headers=headers, follow_redirects=False)
if 'Wait 1 hour' in r.data:
uri = ''
logger.info('IP bannato da vcrypt, aspetta un\'ora')
else:
uri = r.headers['location']
if "4snip" in uri:
if 'out_generator' in uri:
uri = re.findall('url=(.*)$', uri)[0]
elif '/decode/' in uri:
def decrypt(str):
str = str.replace("_ppl_", "+").replace("_eqq_", "=").replace("_sll_", "/")
iv = "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
key = "naphajU2usWUswec"
decoded = base64.b64decode(str)
decoded = decoded + '\0' * (len(decoded) % 16)
crypt_object = pyaes.AESModeOfOperationCBC(key, iv)
decrypted = ''
for p in range(0, len(decoded), 16):
decrypted += crypt_object.decrypt(decoded[p:p + 16]).replace('\0', '')
return decrypted
if 'shield' in uri.split('/')[-2]:
uri = decrypt(uri.split('/')[-1])
else:
import datetime, hashlib
ip = urllib.urlopen('http://ip.42.pl/raw').read()
day = datetime.date.today().strftime('%Y%m%d')
headers = {
"Cookie": hashlib.md5(ip+day).hexdigest() + "=1"
}
uri = uri.replace('sb/', 'sb1/')
uri = uri.replace('akv/', 'akv1/')
uri = uri.replace('wss/', 'wss1/')
uri = uri.replace('wsd/', 'wsd1/')
r = httptools.downloadpage(uri, timeout=self._timeout, headers=headers, follow_redirects=False)
if 'Wait 1 hour' in r.data:
uri = ''
logger.info('IP bannato da vcrypt, aspetta un\'ora')
else:
uri = r.headers['location']
return uri, r.code if r else 200
if "4snip" in uri:
if 'out_generator' in uri:
uri = re.findall('url=(.*)$', uri)[0]
elif '/decode/' in uri:
uri = decrypt(uri.split('/')[-1])
return uri, r.code if r else 200
except Exception as e:
return uri, str(e)
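The `decrypt` helper above first undoes vcrypt's URL-safe token encoding (`_ppl_`/`_eqq_`/`_sll_` stand in for `+`/`=`/`/`) before the AES-CBC step. A minimal sketch of just that substitution-and-base64 stage (AES key handling omitted; the sample token is hypothetical):

```python
import base64

def vcrypt_b64decode(token):
    # vcrypt replaces '+', '=' and '/' with URL-safe markers; undo that first
    token = token.replace("_ppl_", "+").replace("_eqq_", "=").replace("_sll_", "/")
    # pad to a multiple of 4 so base64 accepts it (defensive; tokens are usually padded)
    token += "=" * (-len(token) % 4)
    return base64.b64decode(token)

# example: a plain base64 token with the markers applied
encoded = "aGVsbG8gd29ybGQ_eqq_"  # "hello world" with '=' replaced by '_eqq_'
print(vcrypt_b64decode(encoded))  # → b'hello world'
```

In the real unshortener the decoded bytes are then fed through `pyaes.AESModeOfOperationCBC` with the hard-coded key and a zero IV, as shown in the hunk above.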
def unwrap_30x_only(uri, timeout=10):
unshortener = UnshortenIt()

View File

@@ -20,15 +20,12 @@ def get_addon_core():
def get_addon_version(with_fix=True):
'''
Trova la versione dell'addon, senza usare le funzioni di kodi perchè non si aggiornano fino al riavvio
Devuelve el número de versión del addon, y opcionalmente número de fix si lo hay
'''
info = open(os.path.join(get_runtime_path(), 'addon.xml')).read()
ver = re.search('plugin.video.kod.*?version="([^"]+)"', info).group(1)
if with_fix:
return ver + " " + get_addon_version_fix()
return __settings__.getAddonInfo('version') + " " + get_addon_version_fix()
else:
return ver
return __settings__.getAddonInfo('version')
def get_addon_version_fix():

View File

@@ -549,7 +549,7 @@ def set_context_commands(item, parent_item):
from_action=item.action).tourl())))
# Añadir a Alfavoritos (Mis enlaces)
if item.channel not in ["favorites", "videolibrary", "help", ""] and parent_item.channel != "favorites":
context_commands.append(('[COLOR blue]%s[/COLOR]' % config.get_localized_string(70557), "XBMC.RunPlugin(%s?%s)" %
context_commands.append(("[B]%s[/B]" % config.get_localized_string(70557), "XBMC.RunPlugin(%s?%s)" %
(sys.argv[0], item.clone(channel="kodfavorites", action="addFavourite",
from_channel=item.channel,
from_action=item.action).tourl())))
@@ -571,7 +571,7 @@ def set_context_commands(item, parent_item):
mediatype = 'tv'
else:
mediatype = item.contentType
context_commands.append(("[COLOR yellow]%s[/COLOR]" % config.get_localized_string(70561), "XBMC.Container.Update (%s?%s)" % (
context_commands.append(("[B]%s[/B]" % config.get_localized_string(70561), "XBMC.Container.Update (%s?%s)" % (
sys.argv[0], item.clone(channel='search', action='discover_list', search_type='list', page='1',
list_type='%s/%s/similar' % (mediatype,item.infoLabels['tmdb_id'])).tourl())))

View File

@@ -80,6 +80,15 @@ def check_addon_init():
logger.info('aggiornando a ' + commitJson['sha'])
alreadyApplied = True
# major update
if len(commitJson['files']) > 50:
localCommitFile.close()
c['sha'] = updateFromZip('Aggiornamento in corso...')
localCommitFile = open(addonDir + trackingFile, 'w') # il file di tracking viene eliminato, lo ricreo
changelog += commitJson['commit']['message'] + " | "
nCommitApplied += 3 # il messaggio sarà lungo, probabilmente, il tempo di vis. è maggiorato
break
for file in commitJson['files']:
if file["filename"] == trackingFile: # il file di tracking non si modifica
continue
@@ -215,12 +224,13 @@ def getSha(path):
f.seek(0)
return githash.blob_hash(f, size).hexdigest()
def getShaStr(str):
return githash.blob_hash(StringIO(str), len(str)).hexdigest()
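`getSha`/`getShaStr` compute Git's blob hash so local files can be compared against the commit listing without shelling out to git. For reference, the same value can be derived with `hashlib` alone; a sketch independent of the `githash` module:

```python
import hashlib

def git_blob_sha(content):
    # Git hashes blobs as sha1(b"blob <size>\0" + content)
    if isinstance(content, str):
        content = content.encode("utf8")
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_sha(b"hello\n"))
# → ce013625030ba8dba906f756967f9e9ca394464a (same as `git hash-object` on a file containing "hello\n")
```

This is why the update check can diff against GitHub's API: both sides hash the blob the same way.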
def updateFromZip():
dp = platformtools.dialog_progress_bg('Kodi on Demand', 'Installazione in corso...')
def updateFromZip(message='Installazione in corso...'):
dp = platformtools.dialog_progress_bg('Kodi on Demand', message)
dp.update(0)
remotefilename = 'https://github.com/' + user + "/" + repo + "/archive/" + branch + ".zip"
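The getSha/getShaStr helpers above let the updater compare local files against GitHub without downloading them, by computing Git blob hashes. Assuming `githash.blob_hash` implements the standard Git blob scheme, it reduces to a SHA-1 over a `blob <size>\0` header plus the raw bytes — a minimal standalone sketch:

```python
import hashlib

def blob_sha(data):
    # Git hashes a blob as sha1(b"blob <size>\0" + content)
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Same digest `git hash-object` prints for these bytes
print(blob_sha(b"test content\n"))  # d670460b4b4aece5915caf5c68d12f560a9fe3e4
```

Because the digest covers the header too, an empty file does not hash to the plain SHA-1 of the empty string, which is why comparing against GitHub's reported SHAs needs this scheme rather than a raw file hash.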
@@ -278,12 +288,32 @@ def remove(file):
logger.info('File ' + file + ' NON eliminato')
def onerror(func, path, exc_info):
"""
Error handler for ``shutil.rmtree``.
If the error is due to an access error (read only file)
it attempts to add write permission and then retries.
If the error is for another reason it re-raises the error.
Usage : ``shutil.rmtree(path, onerror=onerror)``
"""
import stat
if not os.access(path, os.W_OK):
# Is the error an access error ?
os.chmod(path, stat.S_IWUSR)
func(path)
else:
raise
def removeTree(dir):
if os.path.isdir(dir):
try:
shutil.rmtree(dir)
except:
shutil.rmtree(dir, ignore_errors=False, onerror=onerror)
except Exception as e:
logger.info('Cartella ' + dir + ' NON eliminata')
logger.error(e)
def rename(dir1, dir2):
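The `onerror` handler above can be exercised with a throwaway directory (a standalone sketch: on POSIX a read-only file inside a writable directory is unlinked without the handler ever firing, while on Windows the chmod-and-retry is what makes the delete succeed — the tree is removed either way):

```python
import os
import shutil
import stat
import tempfile

def onerror(func, path, exc_info):
    # Same retry logic as in the patch: grant write permission, then retry
    if not os.access(path, os.W_OK):
        os.chmod(path, stat.S_IWUSR)
        func(path)
    else:
        raise

tmp = tempfile.mkdtemp()
target = os.path.join(tmp, 'readonly.txt')
open(target, 'w').close()
os.chmod(target, stat.S_IRUSR)  # read-only file: blocks a plain rmtree on Windows

shutil.rmtree(tmp, onerror=onerror)
print(os.path.isdir(tmp))  # False
```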

View File

@@ -12,8 +12,8 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
encontrados = {
'https://vcrypt.net/images/logo', 'https://vcrypt.net/css/out',
'https://vcrypt.net/images/favicon', 'https://vcrypt.net/css/open',
'http://linkup.pro/js/jquery', 'https://linkup.pro/js/jquery',
'http://www.rapidcrypt.net/open'
'http://linkup.pro/js/jquery', 'https://linkup.pro/js/jquery'#,
#'http://www.rapidcrypt.net/open'
}
devuelve = []
@@ -47,13 +47,18 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
data, status = unshortenit.unshorten(url)
logger.info("Data - Status zcrypt xshield.net: [%s] [%s] " %(data, status))
elif 'vcrypt.net' in url:
from lib import unshortenit
data, status = unshortenit.unshorten(url)
logger.info("Data - Status zcrypt vcrypt.net: [%s] [%s] " %(data, status))
if 'myfoldersakstream.php' in url or '/verys/' in url:
continue
else:
from lib import unshortenit
data, status = unshortenit.unshorten(url)
logger.info("Data - Status zcrypt vcrypt.net: [%s] [%s] " %(data, status))
elif 'linkup' in url or 'bit.ly' in url:
idata = httptools.downloadpage(url).data
data = scrapertoolsV2.find_single_match(idata, "<iframe[^<>]*src=\\'([^'>]*)\\'[^<>]*>")
# fix by greko (start)
if '/olink/' in url: continue
else:
idata = httptools.downloadpage(url).data
data = scrapertoolsV2.find_single_match(idata, "<iframe[^<>]*src=\\'([^'>]*)\\'[^<>]*>")
# fix by greko (start)
if not data:
data = scrapertoolsV2.find_single_match(idata, 'action="(?:[^/]+.*?/[^/]+/([a-zA-Z0-9_]+))">')
from lib import unshortenit
@@ -84,6 +89,8 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
for url in matches:
if url not in encontrados:
if 'https://rapidcrypt.net/open/' in url or 'https://rapidcrypt.net/verys/' in url:
continue
logger.info(" url=" + url)
encontrados.add(url)
@@ -96,5 +103,3 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
ret = page_url+" "+str(devuelve) if devuelve else page_url
logger.info(" RET=" + str(ret))
return ret

View File

@@ -19,8 +19,6 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
logger.info()
itemlist = []
logger.info(page_url)
page_url = 'https://hdload.space/getHost/' + scrapertoolsV2.find_single_match(page_url, 'https://hdload\.space/public/dist/index\.html\?id=([a-z0-9]+)')
logger.info(page_url)
data = httptools.downloadpage(page_url, post='').data
logger.info(data)

servers/supervideo.json Normal file
View File

@@ -0,0 +1,46 @@
{
"active": true,
"find_videos": {
"ignore_urls": [],
"patterns": [
{
"pattern": "supervideo.tv/embed-([a-z0-9]{12}).html",
"url": "https://supervideo.tv/embed-\\1.html"
},
{
"pattern": "supervideo.tv/([a-z0-9]{12})",
"url": "https://supervideo.tv/embed-\\1.html"
}
]
},
"free": true,
"id": "supervideo",
"name": "SuperVideo",
"settings": [
{
"default": false,
"enabled": true,
"id": "black_list",
"label": "@60654",
"type": "bool",
"visible": true
},
{
"default": 0,
"enabled": true,
"id": "favorites_servers_list",
"label": "@60655",
"lvalues": [
"No",
"1",
"2",
"3",
"4",
"5"
],
"type": "list",
"visible": false
}
],
"thumbnail": "https://supervideo.tv/images/logo-player.png"
}
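Assuming the add-on's find_videos machinery turns each `pattern`/`url` pair into a regex search plus backreference expansion (a sketch of the mechanism, not the actual servertools code), the two patterns above normalize both bare and embed links to the same URL:

```python
import re

patterns = [
    {"pattern": r"supervideo.tv/embed-([a-z0-9]{12}).html",
     "url": r"https://supervideo.tv/embed-\1.html"},
    {"pattern": r"supervideo.tv/([a-z0-9]{12})",
     "url": r"https://supervideo.tv/embed-\1.html"},
]

def find_urls(page_data, patterns):
    # Collect every normalized embed URL found in the page, deduplicated
    found = set()
    for entry in patterns:
        for m in re.finditer(entry["pattern"], page_data):
            found.add(m.expand(entry["url"]))
    return sorted(found)

page = 'href="//supervideo.tv/abc123def456" src="supervideo.tv/embed-abc123def456.html"'
print(find_urls(page, patterns))  # one deduplicated embed URL
```

Both patterns rewrite to the same canonical embed form, so a page that links the same video twice still yields a single server entry.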

servers/supervideo.py Normal file
View File

@@ -0,0 +1,34 @@
# -*- coding: utf-8 -*-
from core import httptools
from core import scrapertoolsV2
from lib import jsunpack
from platformcode import config, logger
import ast
def test_video_exists(page_url):
logger.info("(page_url='%s')" % page_url)
data = httptools.downloadpage(page_url, cookies=False).data
if 'File Not Found' in data:
return False, config.get_localized_string(70449) % "SuperVideo"
return True, ""
def get_video_url(page_url, premium=False, user="", password="", video_password=""):
logger.info("url=" + page_url)
video_urls = []
data = httptools.downloadpage(page_url).data
code = jsunpack.unpack(scrapertoolsV2.find_single_match(data, "<script type='text/javascript'>(eval.*)"))
match = scrapertoolsV2.find_single_match(code, 'sources:(\[[^]]+\])')
lSrc = ast.literal_eval(match)
lQuality = ['360p', '720p', '1080p', '4k'][:len(lSrc)-1]
lQuality.reverse()
for n, source in enumerate(lSrc):
quality = 'auto' if n==0 else lQuality[n-1]
video_urls.append(['.' + source.split('.')[-1] + '(' + quality + ') [SuperVideo]', source])
return video_urls
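The quality labels in get_video_url are derived purely from each source's position: the first entry is treated as the adaptive stream, the remainder are labeled from highest to lowest by trimming and reversing a fixed quality list. A standalone sketch of that mapping (the sample URLs are invented):

```python
def label_sources(sources):
    # All fixed qualities the site may serve, lowest first; trim to the
    # number of non-"auto" sources, then reverse so index 1 is the highest
    quality_names = ['360p', '720p', '1080p', '4k'][:len(sources) - 1]
    quality_names.reverse()
    labels = []
    for n, source in enumerate(sources):
        quality = 'auto' if n == 0 else quality_names[n - 1]
        labels.append('.' + source.split('.')[-1] + ' (' + quality + ') [SuperVideo]')
    return labels

print(label_sources(['https://cdn/x.m3u8', 'https://cdn/hi.mp4', 'https://cdn/lo.mp4']))
```

With three sources this yields an "auto" m3u8 entry followed by 720p and 360p mp4 entries; the scheme silently assumes the site always orders fixed qualities from highest to lowest after the adaptive stream.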

View File

@@ -7,12 +7,24 @@
"find_videos": {
"patterns": [
{
"pattern": "wstream\\.video/video\\.php\\?file_code=([a-z0-9A-Z]+)",
"url": "http:\/\/wstream.video\/\\1"
"pattern": "https://wstream.video/stream/switch_embed.php\\?file_code=([a-z0-9A-Z]+)",
"url": "https://wstream.video/video.php?file_code=\\1"
},
{
"pattern":"wstream.video\/api\/vcmod\/fastredirect\/streaming.php\\?id=([0-9a-zA-Z]+)",
"url": "https://wstream.video/api/vcmod/fastredirect/streaming.php?id=\\1"
},
{
"pattern": "wstream\\.video/(?:embed-|videos/|video/|videow/|videoj/)?([a-z0-9A-Z]+)",
"url": "http:\/\/wstream.video\/\\1"
"pattern": "wstream\\.video/video\\.php\\?file_code=([a-z0-9A-Z]+)",
"url": "https://wstream.video/video.php?file_code=\\1"
},
{
"pattern": "wstream\\.video\/(?:embed-|videos/|video/|videow/|videoj/)([a-z0-9A-Z]+)",
"url": "https://wstream.video/video.php?file_code=\\1"
},
{
"pattern": "wstream\\.video/(?!api/|stream/)([a-z0-9A-Z]+)",
"url": "https://wstream.video/video.php?file_code=\\1"
}
],
"ignore_urls": [ ]

View File

@@ -17,13 +17,22 @@ def test_video_exists(page_url):
return False, "[wstream.py] Il File Non esiste"
return True, ""
# Returns an array of possible video url's from the page_url
def get_video_url(page_url, premium=False, user="", password="", video_password=""):
# import web_pdb; web_pdb.set_trace()
logger.info("[wstream.py] url=" + page_url)
video_urls = []
data = httptools.downloadpage(page_url, headers=headers, follow_redirects=True).data.replace('https','http')
if '/streaming.php' in page_url:
code = httptools.downloadpage(page_url, headers=headers, follow_redirects=False).headers['location'].split('/')[-1]
page_url = 'https://wstream.video/video.php?file_code=' + code
code = page_url.split('=')[-1]
post = urllib.urlencode({
'videox': code
})
data = httptools.downloadpage(page_url, headers=headers, post=post, follow_redirects=True).data.replace('https','http')
logger.info("[wstream.py] data=" + data)
vid = scrapertools.find_multiple_matches(data, 'download_video.*?>.*?<.*?<td>([^\,,\s]+)')
headers.append(['Referer', page_url])
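The new streaming.php branch first follows the redirect to recover the file code, then posts it back as `videox` to unlock the player. The query-string handling reduces to the following (Python 3 sketch with an invented file code; the add-on itself is Python 2 and calls `urllib.urlencode` directly):

```python
from urllib.parse import urlencode

page_url = 'https://wstream.video/video.php?file_code=abc123xyz'
code = page_url.split('=')[-1]      # everything after the last '='
post = urlencode({'videox': code})  # body of the POST that unlocks the player

print(code, post)  # abc123xyz videox=abc123xyz
```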
@@ -33,48 +42,19 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
data = jsunpack.unpack(post_data)
logger.info("[wstream.py] data=" + data)
block = scrapertools.find_single_match(data, 'sources:\s*\[[^\]]+\]')
if block:
data = block
data = block
media_urls = scrapertools.find_multiple_matches(data, '(http.*?\.mp4)')
_headers = urllib.urlencode(dict(headers))
i = 0
media_urls = scrapertools.find_multiple_matches(data, '(http.*?\.mp4)')
_headers = urllib.urlencode(dict(headers))
i = 0
for media_url in media_urls:
video_urls.append([vid[i] if vid else 'video' + " mp4 [wstream] ", media_url + '|' + _headers])
i = i + 1
for media_url in media_urls:
video_urls.append([vid[i] if vid else 'video' + " mp4 [wstream] ", media_url + '|' + _headers])
i = i + 1
for video_url in video_urls:
logger.info("[wstream.py] %s - %s" % (video_url[0], video_url[1]))
for video_url in video_urls:
logger.info("[wstream.py] %s - %s" % (video_url[0], video_url[1]))
logger.info(video_urls)
logger.info(video_urls)
return video_urls
else:
page_urls = scrapertools.find_multiple_matches(data, '''<a href=(?:"|')([^"']+)(?:"|')''')
for page_url in page_urls:
if '404 Not Found' not in httptools.downloadpage(page_url, headers=headers).data.replace('https', 'http'):
return get_video_url(page_url)
def find_videos(data):
encontrados = set()
devuelve = []
patronvideos = r"wstream.video/(?:embed-|videos/|video/)?([a-z0-9A-Z]+)"
logger.info("[wstream.py] find_videos #" + patronvideos + "#")
matches = re.compile(patronvideos, re.DOTALL).findall(data)
for match in matches:
titulo = "[wstream]"
url = 'http://wstream.video/%s' % match
if url not in encontrados:
logger.info(" url=" + url)
devuelve.append([titulo, url, 'wstream'])
encontrados.add(url)
else:
logger.info(" url duplicada=" + url)
return devuelve
return video_urls

View File

@@ -444,7 +444,7 @@ def get_seasons(item):
if inspect.stack()[1][3] in ['add_tvshow', "get_seasons"] or show_seasons == False:
it = []
for item in itemlist:
it += episodios(item)
if os.path.isfile(item.url): it += episodios(item)
itemlist = it
if inspect.stack()[1][3] not in ['add_tvshow', 'get_episodes', 'update', 'find_episodes', 'get_newest']:
@@ -471,6 +471,7 @@ def get_seasons(item):
def episodios(item):
# support.dbg()
support.log()
itm = item
@@ -489,8 +490,13 @@ def episodios(item):
if pagination and (pag - 1) * pagination > i: continue # pagination
if pagination and i >= pag * pagination: break # pagination
match = []
if episode.has_key('number'): match = support.match(episode['number'], r'(?P<season>\d+)x(?P<episode>\d+)')[0][0]
if not match and episode.has_key('title'): match = support.match(episode['title'], r'(?P<season>\d+)x(?P<episode>\d+)')[0][0]
if episode.has_key('number'):
match = support.match(episode['number'], r'(?P<season>\d+)x(?P<episode>\d+)')[0]
if match:
match = match[0]
if not match and episode.has_key('title'):
match = support.match(episode['title'], r'(?P<season>\d+)x(?P<episode>\d+)')[0]
if match: match = match[0]
if match:
episode_number = match[1]
ep = int(match[1]) + 1
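The fix above avoids indexing an empty result: `support.match(...)[0]` is assumed here to be a possibly empty list of group tuples, so the old `[0][0]` chain raised IndexError whenever the episode title carried no `NxM` tag. A regex-only sketch of the guarded pattern:

```python
import re

def first_match(text, pattern):
    # Stand-in for support.match(...)[0]: a list of group tuples, possibly empty
    return re.findall(pattern, text)

match = first_match('Episodio 2x05 - Pilota', r'(\d+)x(\d+)')
if match:           # guard before taking the first tuple, as in the patch
    match = match[0]
print(match)        # ('2', '05')

# A title without a season/episode tag simply leaves the list empty, no crash
print(first_match('Speciale senza numero', r'(\d+)x(\d+)'))  # []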
@@ -505,27 +511,27 @@ def episodios(item):
episode_number = str(ep).zfill(2)
ep += 1
infoLabels['season'] = season_number
infoLabels['episode'] = episode_number
infoLabels['season'] = season_number
infoLabels['episode'] = episode_number
plot = episode['plot'] if episode.has_key('plot') else item.plot
thumbnail = episode['poster'] if episode.has_key('poster') else episode['thumbnail'] if episode.has_key('thumbnail') else item.thumbnail
plot = episode['plot'] if episode.has_key('plot') else item.plot
thumbnail = episode['poster'] if episode.has_key('poster') else episode['thumbnail'] if episode.has_key('thumbnail') else item.thumbnail
title = ' - ' + episode['title'] if episode.has_key('title') else ''
title = '%sx%s%s' % (season_number, episode_number, title)
if season_number == item.filter or not item.filterseason:
itemlist.append(Item(channel= item.channel,
title= format_title(title),
fulltitle = item.fulltitle,
show = item.show,
url= episode,
action= 'findvideos',
plot= plot,
thumbnail= thumbnail,
contentSeason= season_number,
contentEpisode= episode_number,
infoLabels= infoLabels,
contentType= 'episode'))
title = ' - ' + episode['title'] if episode.has_key('title') else ''
title = '%sx%s%s' % (season_number, episode_number, title)
if season_number == item.filter or not item.filterseason:
itemlist.append(Item(channel= item.channel,
title= format_title(title),
fulltitle = item.fulltitle,
show = item.show,
url= episode,
action= 'findvideos',
plot= plot,
thumbnail= thumbnail,
contentSeason= season_number,
contentEpisode= episode_number,
infoLabels= infoLabels,
contentType= 'episode'))
if show_seasons == True and inspect.stack()[1][3] not in ['add_tvshow', 'get_episodes', 'update', 'find_episodes'] and not item.filterseason:

View File

@@ -149,7 +149,7 @@ def get_channels_list():
continue
# Do not include the channel if its language is filtered out
if channel_language != "all" and channel_language not in channel_parameters["language"] \
if channel_language != "all" and channel_language not in str(channel_parameters["language"]) \
and "*" not in channel_parameters["language"]:
continue
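The repeated `str(...)` wrapping changes the semantics of the membership test: against a list, `in` checks whole-element equality, while against the stringified list it becomes a substring check, which also tolerates language fields that are not plain strings. An illustration with invented sample values:

```python
channel_language = "ita"
language_field = ["ita_sub"]  # hypothetical channel.json value

print(channel_language in language_field)       # False: no exact element match
print(channel_language in str(language_field))  # True: substring of "['ita_sub']"
```

The trade-off is that the substring check is loose — any language code that happens to appear inside another entry's text will match too.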
@@ -392,7 +392,7 @@ def get_newest(channel_id, categoria):
def get_title(item):
support.log("ITEM NEWEST ->", item)
#support.log("ITEM NEWEST ->", item)
# item.contentSerieName is present even if it is a movie
if item.contentSerieName and item.contentType != 'movie': # If it is a series
title = item.contentSerieName
@@ -624,7 +624,7 @@ def setting_channel(item):
continue
# Do not include the channel if its language is filtered out
if channel_language != "all" and channel_language not in channel_parameters["language"] \
if channel_language != "all" and channel_language not in str(channel_parameters["language"]) \
and "*" not in channel_parameters["language"]:
continue
@@ -671,7 +671,7 @@ def cb_custom_button(item, dict_values):
dict_values[v] = not value
if config.set_setting("custom_button_value_news", not value, item.channel) == True:
return {"label": "Ninguno"}
return {"label": config.get_localized_string(59992)}
else:
return {"label": "Todos"}
return {"label": config.get_localized_string(59991)}

View File

@@ -256,7 +256,7 @@ def setting_channel_old(item):
continue
# Do not include the channel if its language is filtered out
if channel_language != "all" and channel_language not in channel_parameters["language"] \
if channel_language != "all" and channel_language not in str(channel_parameters["language"]) \
and "*" not in channel_parameters["language"]:
continue
@@ -524,7 +524,7 @@ def do_search(item, categories=None):
continue
# Do not search if the channel's language is filtered out
if channel_language != "all" and channel_language not in channel_parameters["language"] \
if channel_language != "all" and channel_language not in str(channel_parameters["language"]) \
and "*" not in channel_parameters["language"]:
logger.info("%s -idioma no válido-" % basename_without_extension)
continue

View File

@@ -705,7 +705,7 @@ def do_channels_search(item):
continue
# Do not search if the channel's language is filtered out
if channel_language != "all" and channel_parameters["language"] != channel_language:
if channel_language != "all" and channel_language not in str(channel_parameters["language"]):
continue
# Do not search if the channel is excluded from global search