Eurostreaming and other files (#32)

* Altadefinizione01 L

Here's hoping... I'll manage!

* eurostreaming

These replace the current files, which have the following problems:
1. Not all series open, because on the site page you have to click an entry to open the episode list (sketch of the fix below).
2. When you add a series to the video library and it has both Italian and subtitled episodes, the Italian titles are added correctly but the videos are the subtitled ones.
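For problem 1, the rewritten channel follows the redirect hidden behind the site's "CLICCA QUI PER APRIRE" toggle instead of stopping at the landing page. A minimal sketch, condensed from the episodios() hunk in the eurostreaming.py diff further down:

    from core import httptools, scrapertoolsV2

    data = httptools.downloadpage(item.url).data
    if 'clicca qui per aprire' in data.lower():
        # the real episode-list URL is embedded as a JSON-escaped "go_to" field
        item.url = scrapertoolsV2.find_single_match(data, '"go_to":"(.*?)"').replace("\\", "")
        data = httptools.downloadpage(item.url).data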

* Update unify.py

A proposal to Italianize the thumbnails!

* Add files via upload

* Add files via upload

* Delete altadefinizione01_link.json

oops!

* Delete altadefinizione01_link.py

oops again!

* Add files via upload

added the servers to list_servers

* Update eurostreaming.py


added autoplay to the home menu
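The wiring is the usual KOD pair of calls, both visible in the eurostreaming.py hunk further down: register the channel's servers and qualities with autoplay, then append the autoplay settings entry to the finished menu.

    from channels import autoplay

    # inside mainlist(): list_servers / list_quality are the module-level
    # lists declared in the channel
    autoplay.init(item.channel, list_servers, list_quality)
    # ... build the menu itemlist ...
    autoplay.show_option(item.channel, itemlist)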

* Altadefinizione 2

There are problems with the server search: it picks up either only openload, or openload plus one other.

* Update altadefinizione_2.json

removed the TV-series part

* Channel update

Removed the TMDB entries that specified the Italian language and cleaned up some comments
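Concretely, the idioma_busqueda='it' argument was dropped from the TMDB lookups; both forms appear in the eurostreaming.py hunks below:

    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True, idioma_busqueda='it')  # before
    tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)                        # after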

* Delete altadefinizione_2.json

to be reworked

* Delete altadefinizione_2.py

to be reworked

* URL change

* Various fixes

including adding the right videos to the video library: either ITA or SUB ITA
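The ITA / SUB ITA fix keeps the user's chosen language on the item and bakes it into the season regex, so only episodes in that language reach the video library. Condensed from the episodios() hunk below:

    if item.lang == 'SUB ITA':
        item.lang = '\(SUB ITA\)'  # escaped for use inside the regex
    # read only the seasons tagged with the requested language
    patron = '<span class="su-spoiler-icon"></span>.*?' + item.lang + '</div>(.*?)</div>'
    matches = scrapertoolsV2.find_multiple_matches(bloque, patron)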

* Channels rewritten KOD-style

Changed some entries to the KOD way of doing things.
Still to be finished, because the 'letter' and 'year' menu entries
don't get the right icons...

* Complete fix

Channel rewritten KOD-style, or at least partly!

* Small addition to the menu entries

To display the icons for some menu entries

* Channel rewrite

The channel has been rewritten.
Some menu icons also require changes to the channelselector.py file,
in particular:
                     'lucky': ['fortunato'], # please add the icon for this entry too
                     'channels_musical':['musical'],
                     'channels_mistery':['mistero', 'giallo'],
                     'channels_noir':['noir'],
                     'popular' : ['popolari','popolare', 'più visti'],
                     'channels_thriller':['thriller'],
                     'top_rated' : ['fortunato'], # remove once the 'lucky' entry (or whatever name you prefer) is added
                     'channels_western':['western'],
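For context, get_thumb(title, auto=True) appears to resolve icons by matching these keyword lists against the menu title; a hypothetical sketch of that lookup (pick_icon and the fallback name are illustrative, not the actual channelselector.py code):

    def pick_icon(title, thumb_dict):
        # try each icon's keyword list against the lowercased menu title
        title = title.lower()
        for icon, keywords in thumb_dict.items():
            if any(k in title for k in keywords):
                return icon + '.png'
        return 'thumb_channels.png'  # assumed fallback icon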

* Update altadefinizione01_club.py

commented out: FilterTools

* Update altadefinizione01_link.py

commented out: FilterTools

* Update altadefinizione01_club.py

fixed an error

* Add files via upload

Fixed and re-fixed.
Should be OK now

* Set theme jekyll-theme-midnight

* Update channelselector.py

* Update channelselector.py

* Update channelselector.py

* Channels were added and/or modified so they can be found;
support was also modified to adapt it to the eurostreaming channel (see the sketch below),
hoping there are more similar ones
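The support adaptation lets a channel hand support.server() either a single URL or a list of URLs to scan (see the support.py hunk below); with it, the channel's findvideos() shrinks to a single call, as the eurostreaming.py diff shows:

    from channels import support

    def findvideos(item):
        support.log()
        itemlist = support.server(item, item.url)  # item.url: one URL or a list of URLs
        return itemlist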

* eurostreaming and other files
Author: greko
Date: 2019-05-18 13:39:26 +02:00
Committed by: mac12m99
Parent: 5eb7d4927f
Commit: 875205ca83
12 changed files with 376 additions and 375 deletions

_config.yml (new file, +1)

@@ -0,0 +1 @@
theme: jekyll-theme-midnight

channels/eurostreaming.json

@@ -4,8 +4,8 @@
"active": true,
"adult": false,
"language": ["ita"],
"thumbnail": "",
"bannermenu": "",
"thumbnail": "https://eurostreaming.cafe/wp-content/uploads/2017/08/logocafe.png",
"bannermenu": "https://eurostreaming.cafe/wp-content/uploads/2017/08/logocafe.png",
"categories": ["tvshow","anime"],
"settings": [
{
@@ -66,8 +66,8 @@
"visible": true,
"lvalues": [
"Non filtrare",
"ITA",
"SUB ITA"
"Italiano",
"vosi"
]
},
{

channels/eurostreaming.py

@@ -1,281 +1,133 @@
# -*- coding: utf-8 -*-
# -*- Created or modificated for Alfa-Addon -*-
# -*- adpted for KOD -*-
# -*- By Greko -*-
# ------------------------------------------------------------
# Canale per Eurostreaming
# adattamento di Cineblog01
# by Greko
# ------------------------------------------------------------
"""
Riscritto per poter usufruire del modulo support.
Problemi noti:
Alcun regex possono migliorare
server versystream : 'http://vcrypt.net/very/' # VeryS non decodifica il link :http://vcrypt.net/fastshield/
server nowvideo.club da implementare nella cartella servers, altri server nei meandri del sito?!
Alcune sezioni di anime-cartoni non vanno, alcune hanno solo la lista degli episodi, ma non hanno link
altre cambiano la struttura
La sezione novità non fa apparire il titolo degli episodi
"""
#import base64
import re
import urlparse
# gli import sopra sono da includere all'occorrenza
# per url con ad.fly
from lib import unshortenit
from channelselector import get_thumb
from channels import autoplay
from channels import filtertools
from core import httptools
from core import scrapertoolsV2
from core import servertools
from channels import autoplay, filtertools, support
from core import scrapertoolsV2, httptools, servertools, tmdb
from core.item import Item
from core import channeltools
from core import tmdb
from platformcode import config, logger
from platformcode import logger, config
__channel__ = "eurostreaming" #stesso di id nel file json
#host = "https://eurostreaming.zone/"
#host = "https://eurostreaming.black/"
host = "https://eurostreaming.cafe/" #aggiornato al 30-04-2019
host = "https://eurostreaming.cafe/"
headers = ['Referer', host]
# ======== def per utility INIZIO =============================
try:
__modo_grafico__ = config.get_setting('modo_grafico', __channel__)
__perfil__ = int(config.get_setting('perfil', __channel__))
except:
__modo_grafico__ = True
__perfil__ = 0
# Fijar perfil de color
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00', '0xFFFE2E2E', '0xFFFFD700'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E', '0xFFFE2E2E', '0xFFFFD700'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE', '0xFFFE2E2E', '0xFFFFD700']]
if __perfil__ < 3:
color1, color2, color3, color4, color5 = perfil[__perfil__]
else:
color1 = color2 = color3 = color4 = color5 = ""
__comprueba_enlaces__ = config.get_setting('comprueba_enlaces', __channel__)
__comprueba_enlaces_num__ = config.get_setting('comprueba_enlaces_num', __channel__)
headers = [['User-Agent', 'Mozilla/50.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]#,['Accept-Language','it-IT,it;q=0.8,en-US;q=0.5,en;q=0.3']]
parameters = channeltools.get_channel_parameters(__channel__)
fanart_host = parameters['fanart']
thumbnail_host = parameters['thumbnail']
IDIOMAS = {'Italiano': 'IT', 'VOSI':'SUB ITA'}
IDIOMAS = {'Italiano': 'IT'}
list_language = IDIOMAS.values()
# per l'autoplay
list_servers = ['openload', 'speedvideo', 'wstream', 'streamango' 'flashx', 'nowvideo']
list_quality = ['default']
list_servers = ['verystream', 'wstream', 'speedvideo', 'flashx', 'nowvideo', 'streamango', 'deltabit', 'openload']
list_quality = ['default']
# =========== home menu ===================
__comprueba_enlaces__ = config.get_setting('comprueba_enlaces', 'eurostreaming')
__comprueba_enlaces_num__ = config.get_setting('comprueba_enlaces_num', 'eurostreaming')
def mainlist(item):
logger.info("icarus.eurostreaming mainlist")
support.log()
itemlist = []
title = ''
support.menu(itemlist, 'Serie TV', 'serietv', host, 'episode') # mettere sempre episode per serietv, anime!!
support.menu(itemlist, 'Serie TV Archivio submenu', 'serietv', host + "category/serie-tv-archive/", 'episode')
support.menu(itemlist, 'Ultimi Aggiornamenti submenu', 'serietv', host + 'aggiornamento-episodi/', 'episode', args='True')
support.menu(itemlist, 'Anime / Cartoni', 'serietv', host + 'category/anime-cartoni-animati/', 'episode')
support.menu(itemlist, 'Cerca...', 'search', host, 'episode')
# richiesto per autoplay
autoplay.init(item.channel, list_servers, list_quality)
itemlist = [
Item(channel=__channel__, title="Serie TV",
contentTitle = __channel__, action="serietv",
#extra="tvshow",
text_color=color4,
url="%s/category/serie-tv-archive/" % host,
infoLabels={'plot': item.category},
thumbnail = get_thumb(title, auto = True)
),
Item(channel=__channel__, title="Ultimi Aggiornamenti",
contentTitle = __channel__, action="elenco_aggiornamenti_serietv",
text_color=color4, url="%saggiornamento-episodi/" % host,
#category = __channel__,
extra="tvshow",
infoLabels={'plot': item.category},
thumbnail = get_thumb(title, auto = True)
),
Item(channel=__channel__,
title="Anime / Cartoni",
action="serietv",
extra="tvshow",
text_color=color4,
url="%s/category/anime-cartoni-animati/" % host,
thumbnail= get_thumb(title, auto = True)
),
Item(channel=__channel__,
title="[COLOR yellow]Cerca...[/COLOR]",
action="search",
extra="tvshow",
text_color=color4,
thumbnail= get_thumb(title, auto = True)
),
]
autoplay.show_option(item.channel, itemlist)
return itemlist
# ======== def in ordine di menu ===========================
def serietv(item):
logger.info("%s serietv log: %s" % (__channel__, item))
def serietv(item):
support.log()
itemlist = []
# Carica la pagina
data = httptools.downloadpage(item.url).data
# Estrae i contenuti
patron = '<div class="post-thumb">\s*<a href="([^"]+)" title="([^"]+)">\s*<img src="([^"]+)"'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle, scrapedthumbnail in matches:
#scrapedplot = ""
scrapedtitle = scrapertoolsV2.decodeHtmlentities(scrapedtitle)#.replace("Streaming", ""))
if scrapedtitle.startswith("Link to "):
scrapedtitle = scrapedtitle[8:]
num = scrapertoolsV2.find_single_match(scrapedurl, '(-\d+/)')
if num:
scrapedurl = scrapedurl.replace(num, "-episodi/")
itemlist.append(
Item(channel=item.channel,
action="episodios",
#contentType="tvshow",
contentSerieName = scrapedtitle,
title=scrapedtitle,
#text_color="azure",
url=scrapedurl,
thumbnail=scrapedthumbnail,
#plot=scrapedplot,
show=item.show,
extra=item.extra,
folder=True
))
# locandine e trama e altro da tmdb se presente l'anno migliora la ricerca
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True, idioma_busqueda='it')
# Paginazione
patronvideos = '<a class="next page-numbers" href="?([^>"]+)">Avanti &raquo;</a>'
matches = re.compile(patronvideos, re.DOTALL).findall(data)
if len(matches) > 0:
scrapedurl = urlparse.urljoin(item.url, matches[0])
itemlist.append(
Item(
channel=item.channel,
action="serietv",
title="[COLOR lightgreen]" + config.get_localized_string(30992) + "[/COLOR]",
url=scrapedurl,
thumbnail=
"http://2.bp.blogspot.com/-fE9tzwmjaeQ/UcM2apxDtjI/AAAAAAAAeeg/WKSGM2TADLM/s1600/pager+old.png",
extra=item.extra,
folder=True))
if item.args:
# il titolo degli episodi viene inglobato in episode ma non sono visibili in newest!!!
patron = r'<span class="serieTitle" style="font-size:20px">(.*?).[^]<a href="([^"]+)"\s+target="_blank">(.*?)<\/a>'
listGroups = ['title', 'url', 'episode']
patronNext = ''
else:
patron = r'<div class="post-thumb">.*?\s<img src="([^"]+)".*?><a href="([^"]+)".*?>(.*?(?:\((\d{4})\)|(\d{4}))?)<\/a><\/h2>'
listGroups = ['thumb', 'url', 'title', 'year', 'year']
patronNext='a class="next page-numbers" href="?([^>"]+)">Avanti &raquo;</a>'
itemlist = support.scrape(item, patron_block='', patron=patron, listGroups=listGroups,
patronNext=patronNext,
action='episodios')
return itemlist
def episodios(item):
#logger.info("%s episodios log: %s" % (__channel__, item))
support.log()
itemlist = []
if not(item.lang):
lang_season = {'ITA':0, 'SUB ITA' :0}
# Download pagina
# Carica la pagina
data = httptools.downloadpage(item.url).data
#========
if 'clicca qui per aprire' in data.lower():
item.url = scrapertoolsV2.find_single_match(data, '"go_to":"(.*?)"')
item.url = item.url.replace("\\","")
# Carica la pagina
data = httptools.downloadpage(item.url).data
#========
if 'clicca qui per aprire' in data.lower():
logger.info("%s CLICCA QUI PER APRIRE GLI EPISODI log: %s" % (__channel__, item))
item.url = scrapertoolsV2.find_single_match(data, '"go_to":"(.*?)"')
item.url = item.url.replace("\\","")
# Carica la pagina
data = httptools.downloadpage(item.url).data
#logger.info("%s FINE CLICCA QUI PER APRIRE GLI EPISODI log: %s" % (__channel__, item))
elif 'clicca qui</span>' in data.lower():
logger.info("%s inizio CLICCA QUI</span> log: %s" % (__channel__, item))
item.url = scrapertoolsV2.find_single_match(data, '<h2 style="text-align: center;"><a href="(.*?)">')
data = httptools.downloadpage(item.url).data
#logger.info("%s fine CLICCA QUI</span> log: %s" % (__channel__, item))
#=========
data = scrapertoolsV2.decodeHtmlentities(data)
bloque = scrapertoolsV2.find_single_match(data, '<div class="su-accordion">(.*?)<div class="clear"></div>')
patron = '<span class="su-spoiler-icon"></span>(.*?)</div>'
matches = scrapertoolsV2.find_multiple_matches(bloque, patron)
for scrapedseason in matches:
#logger.info("%s scrapedseason log: %s" % (__channel__, scrapedseason))
if "(SUB ITA)" in scrapedseason.upper():
lang = "SUB ITA"
lang_season['SUB ITA'] +=1
else:
lang = "ITA"
lang_season['ITA'] +=1
#logger.info("%s lang_dict log: %s" % (__channel__, lang_season))
for lang in sorted(lang_season):
if lang_season[lang] > 0:
itemlist.append(
Item(channel = item.channel,
action = "episodios",
#contentType = "episode",
contentSerieName = item.title,
title = '%s (%s)' % (item.title, lang),
url = item.url,
fulltitle = item.title,
data = data,
lang = lang,
show = item.show,
folder = True,
))
elif 'clicca qui</span>' in data.lower():
item.url = scrapertoolsV2.find_single_match(data, '<h2 style="text-align: center;"><a href="(.*?)">')
# Carica la pagina
data = httptools.downloadpage(item.url).data
#=========
matches = scrapertoolsV2.find_multiple_matches(data,
r'<span class="su-spoiler-icon"><\/span>(.*?)</div></div>')
for match in matches:
blocks = scrapertoolsV2.find_multiple_matches(match, r'(?:(\d&#215;[a-zA-Z0-9].*?))<br \/>')
season_lang = scrapertoolsV2.find_single_match(match, r'<\/span>.*?STAGIONE\s+\d+\s\(([^<>]+)\)').strip()
logger.info("blocks log: %s" % ( blocks ))
for block in blocks:
season_n, episode_n = scrapertoolsV2.find_single_match(block, r'(\d+)(?:&#215;|×)(\d+)')
titolo = scrapertoolsV2.find_single_match(block, r'[&#;]\d+[ ](?:([a-zA-Z0-9;&#\s]+))[ ]?(?:[^<>])')
logger.info("block log: %s" % ( block ))
# locandine e trama e altro da tmdb se presente l'anno migliora la ricerca
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True, idioma_busqueda='it')
return itemlist
titolo = re.sub(r'&#215;|×', "x", titolo).replace("&#8217;","'")
item.infoLabels['season'] = season_n # permette di vedere il plot della stagione e...
item.infoLabels['episode'] = episode_n # permette di vedere il plot della puntata e...
itemlist.append(
Item(channel=item.channel,
action="findvideos",
contentType=item.contentType,
title="[B]" + season_n + "x" + episode_n + " " + titolo + "[/B] " + season_lang,
fulltitle=item.title, # Titolo nel video
show=titolo + ":" + season_n + "x" + episode_n, # sottotitoletto nel video
url=block,
extra=item.extra,
thumbnail=item.thumbnail,
infoLabels=item.infoLabels
))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
support.videolibrary(itemlist, item)
else:
# qui ci vanno le puntate delle stagioni
html = item.data
logger.info("%s else log: [%s]" % (__channel__, item))
return itemlist
if item.lang == 'SUB ITA':
item.lang = '\(SUB ITA\)'
logger.info("%s item.lang log: %s" % (__channel__, item.lang))
bloque = scrapertoolsV2.find_single_match(html, '<div class="su-accordion">(.*?)<div class="clear"></div>')
patron = '<span class="su-spoiler-icon"></span>.*?'+item.lang+'</div>(.*?)</div>' # leggo tutte le stagioni
#logger.info("%s patronpatron log: %s" % (__channel__, patron))
matches = scrapertoolsV2.find_multiple_matches(bloque, patron)
for scrapedseason in matches:
#logger.info("%s scrapedseasonscrapedseason log: %s" % (__channel__, scrapedseason))
scrapedseason = scrapedseason.replace('<strong>','').replace('</strong>','')
patron = '(\d+)×(\d+)(.*?)<(.*?)<br />' # stagione - puntanta - titolo - gruppo link
matches = scrapertoolsV2.find_multiple_matches(scrapedseason, patron)
for scrapedseason, scrapedpuntata, scrapedtitolo, scrapedgroupurl in matches:
#logger.info("%s finale log: %s" % (__channel__, patron))
scrapedtitolo = scrapedtitolo.replace('','')
itemlist.append(Item(channel = item.channel,
action = "findvideos",
contentType = "episode",
#contentSerieName = item.contentSerieName,
contentTitle = scrapedtitolo,
title = '%sx%s %s' % (scrapedseason, scrapedpuntata, scrapedtitolo),
url = scrapedgroupurl,
fulltitle = item.fulltitle,
#show = item.show,
#folder = True,
))
logger.info("%s itemlistitemlist log: %s" % (__channel__, itemlist))
# Opción "Añadir esta película a la biblioteca de KODI"
if item.extra != "library":
if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos':
itemlist.append(Item(channel=item.channel, title="%s" % config.get_localized_string(30161),
text_color="green", extra="episodios",
action="add_serie_to_library", url=item.url,
thumbnail= get_thumb('videolibrary', auto = True),
contentTitle=item.contentSerieName, lang = item.lang,
show=item.show, data = html
#, infoLabels = item.infoLabels
))
return itemlist
# =========== def ricerca =============
def search(item, texto):
#logger.info("[eurostreaming.py] " + item.url + " search " + texto)
logger.info("%s search log: %s" % (__channel__, item))
support.log()
item.url = "%s?s=%s" % (host, texto)
try:
return serietv(item)
# Continua la ricerca in caso di errore
@@ -287,16 +139,16 @@ def search(item, texto):
# =========== def novità in ricerca globale =============
def newest(categoria):
logger.info("%s newest log: %s" % (__channel__, categoria))
support.log()
itemlist = []
item = Item()
try:
item.args= 'True'
item.url = "%saggiornamento-episodi/" % host
item.action = "elenco_aggiornamenti_serietv"
itemlist = elenco_aggiornamenti_serietv(item)
item.action = "serietv"
itemlist = serietv(item)
if itemlist[-1].action == "elenco_aggiornamenti_serietv":
if itemlist[-1].action == "serietv":
itemlist.pop()
# Continua la ricerca in caso di errore
@@ -308,99 +160,38 @@ def newest(categoria):
return itemlist
# =========== def pagina aggiornamenti =============
# ======== Ultimi Aggiornamenti ===========================
def elenco_aggiornamenti_serietv(item):
"""
def per la lista degli aggiornamenti
"""
logger.info("%s elenco_aggiornamenti_serietv log: %s" % (__channel__, item))
itemlist = []
# Carica la pagina
data = httptools.downloadpage(item.url).data
# Estrae i contenuti
#bloque = scrapertoolsV2.get_match(data, '<div class="entry">(.*?)<div class="clear"></div>')
bloque = scrapertoolsV2.find_single_match(data, '<div class="entry">(.*?)<div class="clear"></div>')
patron = '<span class="serieTitle".*?>(.*?)<.*?href="(.*?)".*?>(.*?)<'
matches = scrapertoolsV2.find_multiple_matches(bloque, patron)
for scrapedtitle, scrapedurl, scrapedepisodies in matches:
if "(SUB ITA)" in scrapedepisodies.upper():
lang = "SUB ITA"
scrapedepisodies = scrapedepisodies.replace('(SUB ITA)','')
else:
lang = "ITA"
scrapedepisodies = scrapedepisodies.replace(lang,'')
#num = scrapertoolsV2.find_single_match(scrapedepisodies, '(-\d+/)')
#if num:
# scrapedurl = scrapedurl.replace(num, "-episodi/")
scrapedtitle = scrapedtitle.replace("", "").replace('\xe2\x80\x93 ','').strip()
scrapedepisodies = scrapedepisodies.replace('\xe2\x80\x93 ','').strip()
itemlist.append(
Item(
channel=item.channel,
action="episodios",
contentType="tvshow",
title = "%s" % scrapedtitle, # %s" % (scrapedtitle, scrapedepisodies),
fulltitle = "%s %s" % (scrapedtitle, scrapedepisodies),
text_color = color5,
url = scrapedurl,
#show = "%s %s" % (scrapedtitle, scrapedepisodies),
extra=item.extra,
#lang = lang,
#data = data,
folder=True))
# locandine e trama e altro da tmdb se presente l'anno migliora la ricerca
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True, idioma_busqueda='it')
return itemlist
# =========== def per trovare i video =============
def findvideos(item):
logger.info("%s findvideos log: %s" % (__channel__, item))
itemlist = []
# Carica la pagina
data = item.url
matches = re.findall(r'a href="([^"]+)"[^>]*>[^<]+</a>', data, re.DOTALL)
data = []
for url in matches:
url, c = unshortenit.unshorten(url)
data.append(url)
try:
itemlist = servertools.find_video_items(data=str(data))
for videoitem in itemlist:
logger.info("Videoitemlist2: %s" % videoitem)
videoitem.title = "%s [%s]" % (item.contentTitle, videoitem.title)#"[%s] %s" % (videoitem.server, item.title) #"[%s]" % (videoitem.title)
videoitem.show = item.show
videoitem.contentTitle = item.contentTitle
videoitem.contentType = item.contentType
videoitem.channel = item.channel
videoitem.text_color = color5
#videoitem.language = item.language
videoitem.year = item.infoLabels['year']
videoitem.infoLabels['plot'] = item.infoLabels['plot']
except AttributeError:
logger.error("data doesn't contain expected URL")
# Controlla se i link sono validi
if __comprueba_enlaces__:
itemlist = servertools.check_list_links(itemlist, __comprueba_enlaces_num__)
# Requerido para FilterTools
# itemlist = filtertools.get_links(itemlist, item, list_language)
# Requerido para AutoPlay
autoplay.start(itemlist, item)
support.log()
itemlist =[]
itemlist = support.server(item, item.url)
"""
Questa parte funziona se non vanno bene le modifiche a support
"""
## support.log()
## itemlist =[]
## data= ''
## logger.info("Url item.url: [%s] " % item.url)
##
## urls = scrapertoolsV2.find_multiple_matches(item.url, r'href="([^"]+)"')
## itemlist = servertools.find_video_items(data=str(urls))
##
## for videoitem in itemlist:
## videoitem.title = item.title + ' - [COLOR limegreen][[/COLOR]'+ videoitem.title+ ' [COLOR limegreen]][/COLOR]'
## videoitem.fulltitle = item.fulltitle
## videoitem.thumbnail = item.thumbnail
## videoitem.show = item.show
## videoitem.plot = item.plot
## videoitem.channel = item.channel
## videoitem.contentType = item.contentType
##
## # Controlla se i link sono validi
## if __comprueba_enlaces__:
## itemlist = servertools.check_list_links(itemlist, __comprueba_enlaces_num__)
##
## # richiesto per AutoPlay
## autoplay.start(itemlist, item)
return itemlist

channels/support.py

@@ -136,7 +136,7 @@ def scrape(item, patron = '', listGroups = [], headers="", blacklist="", data=""
matches = scrapertoolsV2.find_multiple_matches(block, patron)
log('MATCHES =', matches)
known_keys = ['url', 'title', 'thumb', 'quality', 'year', 'plot', 'duration', 'genere', 'rating']
known_keys = ['url', 'title', 'episode', 'thumb', 'quality', 'year', 'plot', 'duration', 'genere', 'rating'] #by greko aggiunto episode
for match in matches:
if len(listGroups) > len(match): # to fix a bug
match = list(match)
@@ -152,8 +152,10 @@ def scrape(item, patron = '', listGroups = [], headers="", blacklist="", data=""
title = scrapertoolsV2.decodeHtmlentities(scraped["title"]).strip()
plot = scrapertoolsV2.htmlclean(scrapertoolsV2.decodeHtmlentities(scraped["plot"]))
if scraped["quality"]:
longtitle = '[B]' + title + '[/B] [COLOR blue][' + scraped["quality"] + '][/COLOR]'
if (scraped["quality"] and scraped["episode"]): # by greko aggiunto episode
longtitle = '[B]' + title + '[/B] - [B]' + scraped["episode"] + '[/B][COLOR blue][' + scraped["quality"] + '][/COLOR]' # by greko aggiunto episode
elif scraped["episode"]: # by greko aggiunto episode
longtitle = '[B]' + title + '[/B] - [B]' + scraped["episode"] + '[/B]' # by greko aggiunto episode
else:
longtitle = '[B]' + title + '[/B]'
@@ -438,7 +440,7 @@ def match(item, patron='', patron_block='', headers='', url=''):
def videolibrary(itemlist, item, typography=''):
if item.contentType == 'movie':
if item.contentType != 'episode':
action = 'add_pelicula_to_library'
extra = 'findvideos'
contentType = 'movie'
@@ -448,28 +450,25 @@ def videolibrary(itemlist, item, typography=''):
contentType = 'tvshow'
title = typo(config.get_localized_string(30161) + ' ' + typography)
if inspect.stack()[1][3] == 'findvideos' and contentType == 'movie' or inspect.stack()[1][3] != 'findvideos' and contentType != 'movie':
if config.get_videolibrary_support() and len(itemlist) > 0:
itemlist.append(
Item(channel=item.channel,
title=title,
contentType=contentType,
contentSerieName=item.fulltitle if contentType == 'tvshow' else '',
url=item.url,
action=action,
extra=extra,
contentTitle=item.fulltitle))
return itemlist
if config.get_videolibrary_support() and len(itemlist) > 0:
itemlist.append(
Item(channel=item.channel,
title=title,
contentType=contentType,
contentSerieName=item.fulltitle if contentType == 'tvshow' else '',
url=item.url,
action=action,
extra=extra,
contentTitle=item.fulltitle))
def nextPage(itemlist, item, data, patron, function_level=1):
# Function_level is useful if the function is called by another function.
# If the call is direct, leave it blank
next_page = scrapertoolsV2.find_single_match(data, patron)
if next_page != "":
if 'http' not in next_page:
next_page = scrapertoolsV2.find_single_match(item.url, 'https?://[a-z0-9.-]+') + next_page
if 'http' not in next_page:
next_page = scrapertoolsV2.find_single_match(item.url, 'https?://[a-z0-9.-]+') + next_page
log('NEXT= ', next_page)
if next_page != "":
@@ -484,17 +483,24 @@ def nextPage(itemlist, item, data, patron, function_level=1):
return itemlist
def server(item, data='', headers='', AutoPlay=True, CheckLinks=True):
__comprueba_enlaces__ = config.get_setting('comprueba_enlaces', item.channel)
log(__comprueba_enlaces__ )
__comprueba_enlaces_num__ = config.get_setting('comprueba_enlaces_num', item.channel)
log(__comprueba_enlaces_num__ )
if not data:
data = httptools.downloadpage(item.url, headers=headers).data
## fix by greko
# se inviamo un blocco di url dove cercare i video
if type(data) == list:
data = str(item.url)
else:
# se inviamo un singolo url dove cercare il video
data = item.url
## FINE fix by greko
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
@@ -529,4 +535,4 @@ def log(stringa1="", stringa2="", stringa3="", stringa4="", stringa5=""):
frame = inspect.stack()[1]
filename = frame[0].f_code.co_filename
filename = os.path.basename(filename)
logger.info("[" + filename + "] - [" + inspect.stack()[1][3] + "] " + str(stringa1) + str(stringa2) + str(stringa3) + str(stringa4) + str(stringa5))
logger.info("[" + filename + "] - [" + inspect.stack()[1][3] + "] " + str(stringa1) + str(stringa2) + str(stringa3) + str(stringa4) + str(stringa5))

channelselector.py

@@ -96,6 +96,7 @@ def getchanneltypes(view="thumb_"):
# viewmode="thumbnails"))
itemlist.append(Item(title=config.get_localized_string(70685), channel="community", action="mainlist", view=view,
category=title, channel_type="all", thumbnail=get_thumb("channels_community.png", view),
viewmode="thumbnails"))

servers/zcrypt.py

@@ -44,10 +44,18 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
elif 'vcrypt.net' in url:
from lib import unshortenit
data, status = unshortenit.unshorten(url)
logger.info("Data - Status zcrypt vcrypt.net: [%s] [%s] " %(data, status))
elif 'linkup' in url:
idata = httptools.downloadpage(url).data
data = scrapertoolsV2.find_single_match(idata, "<iframe[^<>]*src=\\'([^'>]*)\\'[^<>]*>")
#fix by greko inizio
if not data:
data = scrapertoolsV2.find_single_match(idata, 'action="(?:[^/]+.*?/[^/]+/([a-zA-Z0-9_]+))">')
if '/olink/' in url or '/delta/' in url or '/mango/' in url or '/now/' in url:
from lib import unshortenit
data, status = unshortenit.unshorten(url)
logger.info("Data - Status zcrypt linkup : [%s] [%s] " %(data, status))
# fix by greko fine
else:
data = ""
while host in url:
@@ -63,7 +71,7 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
else:
logger.info(" url duplicada=" + url)
patron = r"""(https?://(?:www\.)?(?:threadsphere\.bid|adf\.ly|q\.gs|j\.gs|u\.bb|ay\.gy|linkbucks\.com|any\.gs|cash4links\.co|cash4files\.co|dyo\.gs|filesonthe\.net|goneviral\.com|megaline\.co|miniurls\.co|qqc\.co|seriousdeals\.net|theseblogs\.com|theseforums\.com|tinylinks\.co|tubeviral\.com|ultrafiles\.net|urlbeat\.net|whackyvidz\.com|yyv\.co|adfoc\.us|lnx\.lu|sh\.st|href\.li|anonymz\.com|shrink-service\.it|rapidcrypt\.net)/[^"']+)"""
patron = r"""(https?://(?:www\.)?(?:threadsphere\.bid|adf\.ly|q\.gs|j\.gs|u\.bb|ay\.gy|linkbucks\.com|any\.gs|cash4links\.co|cash4files\.co|dyo\.gs|filesonthe\.net|goneviral\.com|megaline\.co|miniurls\.co|qqc\.co|seriousdeals\.net|theseblogs\.com|theseforums\.com|tinylinks\.co|tubeviral\.com|ultrafiles\.net|urlbeat\.net|whackyvidz\.com|yyv\.co|adfoc\.us|lnx\.lu|sh\.st|href\.li|anonymz\.com|shrink-service\.it|rapidcrypt\.netz|ecleneue\.com)/[^"']+)"""
logger.info(" find_videos #" + patron + "#")
matches = re.compile(patron).findall(page_url)

servers/deltabit.json (new file, +42)

@@ -0,0 +1,42 @@
{
"active": true,
"find_videos": {
"ignore_urls": [],
"patterns": [
{
"pattern": "https://deltabit.co/([A-z0-9]+)",
"url": "https://deltabit.co/\\1"
}
]
},
"free": true,
"id": "deltabit",
"name": "deltabit",
"settings": [
{
"default": false,
"enabled": true,
"id": "black_list",
"label": "@60654",
"type": "bool",
"visible": true
},
{
"default": 0,
"enabled": true,
"id": "favorites_servers_list",
"label": "@60655",
"lvalues": [
"No",
"1",
"2",
"3",
"4",
"5"
],
"type": "list",
"visible": false
}
],
"thumbnail": "https://deltabit.co/img/logo.png"
}

servers/deltabit.py (new file, +68)

@@ -0,0 +1,68 @@
# -*- coding: utf-8 -*-
import urllib
from core import httptools
from core import scrapertools
from platformcode import logger
import time
def test_video_exists(page_url):
logger.info("(page_url='%s')" % page_url)
data = httptools.downloadpage(page_url).data
if "Not Found" in data or "File Does not Exist" in data:
return False, "[deltabit] El fichero no existe o ha sido borrado"
return True, ""
def get_video_url(page_url, premium=False, user="", password="", video_password=""):
logger.info("(deltabit page_url='%s')" % page_url)
video_urls = []
data = httptools.downloadpage(page_url).data
data = data.replace('"', "'")
page_url_post = scrapertools.find_single_match(data, "<Form method='POST' action='([^']+)'>")
imhuman = "&imhuman=" + scrapertools.find_single_match(data, "name='imhuman' value='([^']+)'").replace(" ", "+")
post = urllib.urlencode({k: v for k, v in scrapertools.find_multiple_matches(data, "name='([^']+)' value='([^']*)'")}) + imhuman
time.sleep(6)
data = httptools.downloadpage(page_url_post, post=post).data
## logger.info("(data page_url='%s')" % data)
sources = scrapertools.find_single_match(data, 'sources: \[([^\]]+)\]')
for media_url in scrapertools.find_multiple_matches(sources, '"([^"]+)"'):
ext = scrapertools.get_filename_from_url(media_url)[-4:]
video_urls.append(["%s [deltabit]" % (ext), media_url])
return video_urls
## logger.info("deltabit url=" + page_url)
## data = httptools.downloadpage(page_url).data
## code = scrapertools.find_multiple_matches(data, '<input type="hidden" name="[^"]+" value="([^"]+)"')
## time.sleep(6)
## data = httptools.downloadpage(page_url+'?op='+code[0]+\
## '&id='+code[1]+'&fname='+code[2]+'&hash='+code[3]).data
##
## logger.info("DATA deltabit : %s" % data)
## https://deltabit.co/6zragsekoole?op=download1&usr_login=%27%27&id=6zragsekoole&fname=New.Amsterdam.2018.Episodio.1.Come.Posso.Aiutare.iTALiAN.WEBRip.x264-GeD.mkv&referer=%27%27&hash=24361-79-32-1557854113-cc51baafbf1530f43b746133fbd293ee
## https://deltabit.co/6zragsekoole?op=download1&usr_login=''&id=6zragsekoole&fname=New.Amsterdam.2018.Episodio.1.Come.Posso.Aiutare.iTALiAN.WEBRip.x264-GeD.mkv&referer=''&hash=24361-79-32-1557854113-cc51baafbf1530f43b746133fbd293ee
## video_urls = page_url+'?op='+code[0]+'&usr_login='+code[1]+'&id='+code[2]+'&fname='+code[3]+'&referer='+code[4]+'&hash='+code[5]
## logger.info("Delta bit [%s]: " % page_url)
## code = scrapertools.find_single_match(data, 'name="code" value="([^"]+)')
## hash = scrapertools.find_single_match(data, 'name="hash" value="([^"]+)')
## post = "op=download1&code=%s&hash=%s&imhuman=Proceed+to+video" %(code, hash)
## data1 = httptools.downloadpage("http://m.vidtome.stream/playvideo/%s" %code, post=post).data
## video_urls = []
## media_urls = scrapertools.find_multiple_matches(data1, 'file: "([^"]+)')
## for media_url in media_urls:
## ext = scrapertools.get_filename_from_url(media_url)[-4:]
## video_urls.append(["%s [vidtomestream]" % (ext), media_url])
## video_urls.reverse()
## for video_url in video_urls:
## logger.info("%s" % (video_url[0]))
## return video_urls

servers/streamango.json

@@ -10,6 +10,10 @@
{
"pattern": "https://fruitadblock.net/embed/([A-z0-9]+)",
"url": "http://streamango.com/embed/\\1"
},
{
"pattern": "https://streamangos.com/e/([A-z0-9]+)",
"url": "http://streamango.com/embed/\\1"
}
]
},
@@ -43,4 +47,4 @@
}
],
"thumbnail": "http://i.imgur.com/o8XR8fL.png"
}
}

servers/vidtomestream.json (new file, +41)

@@ -0,0 +1,41 @@
{
"active": true,
"find_videos": {
"ignore_urls": [],
"patterns": [
{
"pattern": "vidtome.stream/(?:embed-|)([A-z0-9]+)",
"url": "http://vidtome.stream/\\1.html"
}
]
},
"free": true,
"id": "vidtomestream",
"name": "vidtomestream",
"settings": [
{
"default": false,
"enabled": true,
"id": "black_list",
"label": "@60654",
"type": "bool",
"visible": true
},
{
"default": 0,
"enabled": true,
"id": "favorites_servers_list",
"label": "@60655",
"lvalues": [
"No",
"1",
"2",
"3",
"4",
"5"
],
"type": "list",
"visible": false
}
]
}

servers/vidtomestream.py (new file, +33)

@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
# -*- copiato e adattato da vidtome -*-
# -*- by Greko -*-
from core import httptools
from core import scrapertools
from platformcode import logger
def test_video_exists(page_url):
logger.info("(page_url='%s')" % page_url)
data = httptools.downloadpage(page_url).data
if "Not Found" in data or "File Does not Exist" in data:
return False, "[vidtomestream] Il video non esiste o ha sido borrado"
return True, ""
def get_video_url(page_url, premium=False, user="", password="", video_password=""):
logger.info("url=" + page_url)
data = httptools.downloadpage(page_url).data
code = scrapertools.find_single_match(data, 'name="code" value="([^"]+)')
hash = scrapertools.find_single_match(data, 'name="hash" value="([^"]+)')
post = "op=download1&code=%s&hash=%s&imhuman=Proceed+to+video" %(code, hash)
data1 = httptools.downloadpage("http://m.vidtome.stream/playvideo/%s" %code, post=post).data
video_urls = []
media_urls = scrapertools.find_multiple_matches(data1, 'file: "([^"]+)')
for media_url in media_urls:
ext = scrapertools.get_filename_from_url(media_url)[-4:]
video_urls.append(["%s [vidtomestream]" % (ext), media_url])
video_urls.reverse()
for video_url in video_urls:
logger.info("%s" % (video_url[0]))
return video_urls

servers/wstream.py

@@ -10,6 +10,12 @@ from platformcode import logger, config
headers = [['User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0']]
def test_video_exists(page_url):
logger.info("(page_url='%s')" % page_url)
data = httptools.downloadpage(page_url).data
if "Not Found" in data or "File was deleted" in data:
return False, "[wstream.py] El fichero no existe o ha sido borrado"
return True, ""
# Returns an array of possible video url's from the page_url
def get_video_url(page_url, premium=False, user="", password="", video_password=""):