KoD 1.3.1

- Added new channels: film4k, animealtadefinizione, streamingcommunity, animeuniverse, guardaserieICU
- HDmario now supports accounts
- Improvements to the news section: items can now be grouped by channel or by content, and the sort order can be set
- Fixed the annoying issue where a search could restart after a Kodi refresh (typically when a video library update finished); see the sketch below
- A few channel fixes
marco
2020-08-06 19:56:57 +02:00
parent 5af023ad21
commit f04aa71d31
44 changed files with 1412 additions and 427 deletions
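The search fix above works by caching the last search's results in a temp file: when a title is added to the video library the cache is flagged, and if Kodi re-invokes the search action during the refresh that follows the library update, the flagged cache is replayed instead of running the search again. A minimal sketch of the idea, reusing helpers that appear in the launcher.py and search.py changes below; the two function names here are illustrative only:

from core import filetools
from core.item import Item
from platformcode import config

temp_search_file = config.get_temp_file('temp-search')

def save_results(channel, itemlist):
    # Persist the channel id plus every result, serialized with Item.tourl().
    filetools.write(temp_search_file, ','.join([channel] + [it.tourl() for it in itemlist]))

def replay_results(channel):
    # Replay the cached items only when the file was flagged '[V]' for this channel,
    # i.e. the refresh came from a video library update and not from a new search.
    if not filetools.isfile(temp_search_file):
        return None
    parts = filetools.read(temp_search_file).split(',')
    if len(parts) > 1 and parts[0] == '[V]' and parts[1] == channel:
        return [Item().fromurl(p) for p in parts[2:] if p]
    return None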

.github/workflows/tests.yml (new file, 46 lines)

@@ -0,0 +1,46 @@
# This is a basic workflow to help you get started with Actions
name: Test Suite
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
workflow_dispatch:
schedule:
- cron: '30 17 * * *'
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
tests:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: 2.7
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install sakee
pip install html-testRunner
pip install parameterized
- name: Run tests
run: |
export PYTHONPATH=$GITHUB_WORKSPACE
export KODI_INTERACTIVE=0
export KODI_HOME=$GITHUB_WORKSPACE/tests/home
python tests/test_generic.py
- uses: actions/upload-artifact@v2
with:
name: report
path: reports/report.html


@@ -1,8 +1,9 @@
<addon id="plugin.video.kod" name="Kodi on Demand" version="1.3" provider-name="KoD Team">
<addon id="plugin.video.kod" name="Kodi on Demand" version="1.3.1" provider-name="KoD Team">
<requires>
<!-- <import addon="script.module.libtorrent" optional="true"/> -->
<import addon="metadata.themoviedb.org"/>
<import addon="metadata.tvdb.com"/>
</requires>
<extension point="xbmc.python.pluginsource" library="default.py">
<provides>video</provides>
@@ -25,12 +26,11 @@
<screenshot>resources/media/themes/ss/2.png</screenshot>
<screenshot>resources/media/themes/ss/3.png</screenshot>
</assets>
<news>- Aggiunti i canali Mediaset Play e La 7.
- Riscritto Animeunity.
- Le stagioni concluse vengono ora escluse dall'aggiornamento della videoteca.
- Ora è possibile aggiornare gli episodi di Kod dal menu contestuale della Libreria di Kod (se non gestite da Kod verranno cercate)
- Fix Adesso in Onda su ATV
- Fix Vari</news>
<news>- aggiunti nuovi canali: film4k, animealtadefinizione, streamingcommunity, animeuniverse , guardaserieICU
- HDmario ora supporta l'utilizzo di account
- Miglioramenti sezione news, è ora possibile raggruppare per canale o per contenuto, e settare l'ordinamento
- risolto il fastidioso problema per cui poteva capitare che la ricerca ripartisse dopo un refresh di kodi (tipicamente quando l'aggiornamento della videoteca finiva)
- alcuni fix ai canali</news>
<description lang="it">Naviga velocemente sul web e guarda i contenuti presenti</description>
<disclaimer>[COLOR red]The owners and submitters to this addon do not host or distribute any of the content displayed by these addons nor do they have any affiliation with the content providers.[/COLOR]
[COLOR yellow]Kodi © is a registered trademark of the XBMC Foundation. We are not connected to or in any other way affiliated with Kodi, Team Kodi, or the XBMC Foundation. Furthermore, any software, addons, or products offered by us will receive no support in official Kodi channels, including the Kodi forums and various social networks.[/COLOR]</disclaimer>


@@ -1,46 +1,48 @@
{
"altadefinizione01": "https://altadefinizione01.photo",
"altadefinizione01_link": "https://altadefinizione01.baby",
"altadefinizioneclick": "https://altadefinizione.productions",
"animeforce": "https://ww1.animeforce.org",
"animeleggendari": "https://animeora.com",
"animesaturn": "https://www.animesaturn.com",
"animestream": "https://www.animeworld.it",
"animesubita": "http://www.animesubita.org",
"animetubeita": "http://www.animetubeita.com",
"animeunity": "https://www.animeunity.it",
"animeworld": "https://www.animeworld.tv",
"casacinema": "https://www.casacinema.rest",
"cb01anime": "https://www.cineblog01.red/",
"cinemalibero": "https://cinemalibero.plus",
"cinetecadibologna": "http://cinestore.cinetecadibologna.it",
"dreamsub": "https://dreamsub.stream",
"dsda": "https://www.dsda.press/",
"fastsubita": "https://fastsubita.online",
"filmgratis": "https://www.filmaltadefinizione.tv",
"filmigratis": "https://filmigratis.org",
"filmsenzalimiticc": "https://www.filmsenzalimiti.tel",
"filmstreaming01": "https://filmstreaming01.com",
"guardaserie_stream": "https://guardaserie.store",
"guardaserieclick": "https://www.guardaserie.style",
"hd4me": "https://hd4me.net",
"netfreex": "https://www.netfreex.stream/",
"ilgeniodellostreaming": "https://ilgeniodellostreaming.tw",
"italiaserie": "https://italiaserie.org",
"mondoserietv": "https://mondoserietv.com",
"guardaserieIcu": "https://guardaserie.icu/",
"guardaserieCam": "https://guardaserie.cam",
"piratestreaming": "https://www.piratestreaming.biz",
"polpotv": "https://polpotv.live",
"pufimovies": "https://pufimovies.com",
"raiplay": "https://www.raiplay.it",
"seriehd": "https://seriehd.link",
"serietvonline": "https://serietvonline.work",
"serietvsubita": "http://serietvsubita.xyz",
"serietvu": "https://www.serietvu.link",
"streamtime": "https://t.me/s/StreamTime",
"tantifilm": "https://www.tantifilm.red",
"toonitalia": "https://toonitalia.org",
"vedohd": "https://vedohd.uno",
"vvvvid": "https://www.vvvvid.it"
"altadefinizione01": "https://www.altadefinizione01.photo",
"altadefinizione01_link": "https://altadefinizione01.wine",
"altadefinizioneclick": "https://altadefinizione.group",
"animealtadefinizione": "https://www.animealtadefinizione.it",
"animeforce": "https://ww1.animeforce.org",
"animeleggendari": "https://animeora.com",
"animesaturn": "https://www.animesaturn.com",
"animestream": "https://www.animeworld.it",
"animesubita": "http://www.animesubita.org",
"animetubeita": "http://www.animetubeita.com",
"animeunity": "https://www.animeunity.it",
"animeuniverse" : "https://www.animeuniverse.it/",
"animeworld": "https://www.animeworld.tv",
"casacinema": "https://www.casacinema.rest",
"cb01anime": "https://www.cineblog01.red",
"cinemalibero": "https://cinemalibero.plus",
"cinetecadibologna": "http://cinestore.cinetecadibologna.it",
"dreamsub": "https://dreamsub.stream",
"dsda": "https://www.dsda.press",
"fastsubita": "https://fastsubita.online",
"filmgratis": "https://www.filmaltadefinizione.tv",
"filmigratis": "https://filmigratis.org",
"filmsenzalimiticc": "https://www.filmsenzalimiti01.casa",
"filmstreaming01": "https://filmstreaming01.com",
"guardaserieCam": "https://guardaserie.cam",
"guardaserieIcu": "https://guardaserie.icu",
"guardaserie_stream": "https://guardaserie.store",
"guardaserieclick": "https://www.guardaserie.fit",
"hd4me": "https://hd4me.net",
"ilgeniodellostreaming": "https://ilgeniodellostreaming.gy",
"italiaserie": "https://italiaserie.org",
"mondoserietv": "https://mondoserietv.com",
"netfreex": "https://www.netfreex.fun",
"piratestreaming": "https://www.piratestreaming.movie",
"polpotv": "https://polpotv.life",
"raiplay": "https://www.raiplay.it",
"seriehd": "https://seriehd.productions",
"serietvonline": "https://serietvonline.store",
"serietvsubita": "http://serietvsubita.xyz",
"serietvu": "https://www.serietvu.link",
"streamingcommunity":"https://streamingcommunity.to",
"streamtime": "https://t.me/s/StreamTime",
"tantifilm": "https://www.tantifilm.rest",
"toonitalia": "https://toonitalia.org",
"vedohd": "https://vedohd.uno",
"vvvvid": "https://www.vvvvid.it"
}


@@ -130,6 +130,7 @@ def newest(categoria):
if categoria == "peliculas":
item.url = host
item.action = "peliculas"
item.contentType = 'movie'
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()


@@ -88,6 +88,7 @@ def newest(categoria):
if categoria == "peliculas":
item.url = host
item.action = "peliculas"
item.contentType='movie'
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":


@@ -22,7 +22,7 @@ from core.item import Item
from platformcode import config
def findhost():
data = support.httptools.downloadpage('https://altadefinizione-nuovo.link/').data
data = support.httptools.downloadpage('https://altadefinizione-nuovo.me/').data
host = support.scrapertools.find_single_match(data, '<div class="elementor-button-wrapper"> <a href="([^"]+)"')
return host
@@ -30,7 +30,6 @@ host = config.get_channel_url(findhost)
headers = [['Referer', host]]
@support.menu
def mainlist(item):
film = ['',
@@ -117,6 +116,7 @@ def newest(categoria):
try:
if categoria == "peliculas":
item.args = 'news'
item.contentType = 'movie'
item.url = host + "/nuove-uscite/"
item.action = "peliculas"
itemlist = peliculas(item)


@@ -0,0 +1,21 @@
{
"id": "animealtadefinizione",
"name": "AnimealtAdefinizione",
"active": true,
"language": ["ita", "sub-ita"],
"thumbnail": "animealtadefinizione.png",
"banner": "animealtadefinizione.png",
"categories": ["anime", "sub-ita"],
"default_off": ["include_in_newest"],
"settings": [
{
"id": "perpage",
"type": "list",
"label": "Elementi per pagina",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": ["20","30","40","50","60","70","80","90","100"]
}
]
}


@@ -0,0 +1,125 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Canale per animealtadefinizione
# ----------------------------------------------------------
from core import support
host = support.config.get_channel_url()
headers = [['Referer', host]]
perpage_list = ['20','30','40','50','60','70','80','90','100']
perpage = perpage_list[support.config.get_setting('perpage' , 'animealtadefinizione')]
epPatron = r'<td>\s*(?P<title>[^<]+)[^>]+>[^>]+>\s*<a href="(?P<url>[^"]+)"'
@support.menu
def mainlist(item):
anime=['/anime/',
('Tipo',['', 'menu', 'Anime']),
('Anno',['', 'menu', 'Anno']),
('Genere', ['', 'menu','Genere']),
('Ultimi Episodi',['', 'peliculas', 'last'])]
return locals()
@support.scrape
def menu(item):
action = 'peliculas'
data = support.match(item, patron= r'<a href="' + host + r'/category/' + item.args.lower() + r'/">' + item.args + r'</a><ul class="sub-menu">(.*?)</ul>').match
patronMenu = r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)<'
return locals()
def search(item, texto):
support.log(texto)
item.search = texto
try:
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
support.logger.error("%s" % line)
return []
def newest(categoria):
support.log(categoria)
item = support.Item()
try:
if categoria == "anime":
item.url = host
item.args = "last"
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
support.logger.error("{0}".format(line))
return []
@support.scrape
def peliculas(item):
if '/movie/' in item.url:
item.contentType = 'movie'
action='findvideos'
elif item.args == 'last':
item.contentType = 'episode'
action='findvideos'
else:
item.contentType = 'tvshow'
action='episodios'
if item.search:
query = 's'
searchtext = item.search
else:
query='category_name'
searchtext = item.url.split('/')[-2]
if not item.pag: item.pag = 1
anime=True
data = support.match(host + '/wp-admin/admin-ajax.php', post='action=itajax-sort&loop=main+loop&location=&thumbnail=1&rating=1sorter=recent&columns=4&numarticles='+perpage+'&paginated='+str(item.pag)+'&currentquery%5B'+query+'%5D='+searchtext).data.replace('\\','')
patron=r'<a href="(?P<url>[^"]+)"><img width="[^"]+" height="[^"]+" src="(?P<thumb>[^"]+)" class="[^"]+" alt="" title="(?P<title>.*?)\s+(?P<type>Movie)?\s*(?P<lang>Sub Ita|Ita)'
typeContentDict = {'movie':['movie']}
typeActionDict = {'findvideos':['movie']}
def ItemItemlistHook(item, itemlist):
if item.search:
itemlist = [ it for it in itemlist if ' Episodio ' not in it.title ]
if len(itemlist) == int(perpage):
item.pag += 1
itemlist.append(item.clone(title=support.typo(support.config.get_localized_string(30992), 'color kod bold'), action='peliculas'))
return itemlist
return locals()
@support.scrape
def episodios(item):
pagination = int(perpage)
patron = epPatron
return locals()
def findvideos(item):
itemlist = []
if item.contentType == 'movie':
matches = support.match(item, patron=epPatron).matches
for title, url in matches:
get_video_list(url, title, itemlist)
else:
get_video_list(item.url, 'Diretto', itemlist)
return support.server(item, itemlist=itemlist)
def get_video_list(url, title, itemlist):
from requests import get
if not url.startswith('http'): url = host + url
url = support.match(get(url).url, string=True, patron=r'file=([^$]+)').match
if 'http' not in url: url = 'http://' + url
itemlist.append(support.Item(title=title, url=url, server='directo', action='play'))
return itemlist


@@ -16,20 +16,21 @@ def get_data(item, head=[]):
for h in head:
headers[h[0]] = h[1]
if not item.count: item.count = 0
matches = support.match(item, patron=r'<script>(.*?location.href=".*?(http[^"]+)";)</').match
if matches:
jstr, location = matches
item.url=support.re.sub(r':\d+', '', location).replace('http://','https://')
if not config.get_setting('key', item.channel) and jstr:
jshe = 'var document = {}, location = {}'
aesjs = str(support.match(host + '/aes.min.js').data)
js_fix = 'window.toHex = window.toHex || function(){for(var d=[],d=1==arguments.length&&arguments[0].constructor==Array?arguments[0]:arguments,e="",f=0;f<d.length;f++)e+=(16>d[f]?"0":"")+d[f].toString(16);return e.toLowerCase()}'
jsret = 'return document.cookie'
key_data = js2py.eval_js( 'function (){ ' + jshe + '\n' + aesjs + '\n' + js_fix + '\n' + jstr + '\n' + jsret + '}' )()
key = key_data.split(';')[0]
if not config.get_setting('key', item.channel):
matches = support.match(item, patron=r'<script>(.*?location.href=".*?(http[^"]+)";)</').match
if matches:
jstr, location = matches
item.url=support.re.sub(r':\d+', '', location).replace('http://','https://')
if jstr:
jshe = 'var document = {}, location = {}'
aesjs = str(support.match(host + '/aes.min.js').data)
js_fix = 'window.toHex = window.toHex || function(){for(var d=[],d=1==arguments.length&&arguments[0].constructor==Array?arguments[0]:arguments,e="",f=0;f<d.length;f++)e+=(16>d[f]?"0":"")+d[f].toString(16);return e.toLowerCase()}'
jsret = 'return document.cookie'
key_data = js2py.eval_js( 'function (){ ' + jshe + '\n' + aesjs + '\n' + js_fix + '\n' + jstr + '\n' + jsret + '}' )()
key = key_data.split(';')[0]
# save Key in settings
config.set_setting('key', key, item.channel)
# save Key in settings
config.set_setting('key', key, item.channel)
# set cookie
headers['cookie'] = config.get_setting('key', item.channel)


@@ -0,0 +1,21 @@
{
"id": "animeuniverse",
"name": "AnimeUniverse",
"active": true,
"language": ["ita", "sub-ita"],
"thumbnail": "animeuniverse.png",
"banner": "animeuniverse.png",
"categories": ["anime", "sub-ita"],
"default_off": ["include_in_newest"],
"settings": [
{
"id": "perpage",
"type": "list",
"label": "Elementi per pagina",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": ["20","30","40","50","60","70","80","90","100"]
}
]
}

channels/animeuniverse.py (new file, 126 lines)

@@ -0,0 +1,126 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Canale per animeuniverse
# ----------------------------------------------------------
from core import support
host = support.config.get_channel_url()
headers = {}
perpage_list = ['20','30','40','50','60','70','80','90','100']
perpage = perpage_list[support.config.get_setting('perpage' , 'animeuniverse')]
epPatron = r'<td>\s*(?P<title>[^<]+)[^>]+>[^>]+>\s*<a href="(?P<url>[^"]+)"'
@support.menu
def mainlist(item):
anime=['/anime/',
('Tipo',['', 'menu', 'Anime']),
('Anno',['', 'menu', 'Anno']),
('Genere', ['', 'menu','Genere']),
('Ultimi Episodi',['/2/', 'peliculas', 'last']),
('Hentai', ['/hentai/', 'peliculas'])]
return locals()
@support.scrape
def menu(item):
action = 'peliculas'
data = support.match(item, patron= item.args + r'</a><ul class="sub-menu">(.*?)</ul>').match
patronMenu = r'<a href="(?P<url>[^"]+)">(?P<title>[^<]+)<'
return locals()
def search(item, texto):
support.log(texto)
item.search = texto
try:
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
support.logger.error("%s" % line)
return []
def newest(categoria):
support.log(categoria)
item = support.Item()
try:
if categoria == "anime":
item.url = host
item.args = "last"
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
support.logger.error("{0}".format(line))
return []
@support.scrape
def peliculas(item):
if '/mos/' in item.url:
item.contentType = 'movie'
action='findvideos'
elif item.args == 'last':
query='cat%5D=1&currentquery%5Bcategory__not_in%5D%5B'
searchtext=''
item.contentType = 'episode'
action='findvideos'
else:
item.contentType = 'tvshow'
action='episodios'
if item.search:
query = 's'
searchtext = item.search
if not query:
query='category_name'
searchtext = item.url.split('/')[-2] if item.url != host else ''
if not item.pag: item.pag = 1
anime=True
blacklist=['Altri Hentai']
data = support.match(host + '/wp-content/themes/animeuniverse/functions/ajax.php', post='sorter=recent&location=&loop=main+loop&action=sort&numarticles='+perpage+'&paginated='+str(item.pag)+'&currentquery%5B'+query+'%5D='+searchtext+'&thumbnail=1').data.replace('\\','')
patron=r'<a href="(?P<url>[^"]+)"><img width="[^"]+" height="[^"]+" src="(?P<thumb>[^"]+)" class="[^"]+" alt="" title="(?P<title>.*?)\s*(?P<lang>Sub ITA|ITA)?(?:"| \[)'
def ItemItemlistHook(item, itemlist):
if len(itemlist) == int(perpage) - len(blacklist):
item.pag += 1
itemlist.append(item.clone(title=support.typo(support.config.get_localized_string(30992), 'color kod bold'), action='peliculas'))
return itemlist
return locals()
@support.scrape
def episodios(item):
pagination = int(perpage)
patron = epPatron
return locals()
def findvideos(item):
itemlist = []
if item.contentType == 'movie':
matches = support.match(item, patron=epPatron).matches
for title, url in matches:
get_video_list(url, title, itemlist)
else:
get_video_list(item.url, 'Diretto', itemlist)
return support.server(item, itemlist=itemlist)
def get_video_list(url, title, itemlist):
from requests import get
if not url.startswith('http'): url = host + url
url = support.match(get(url).url, string=True, patron=r'file=([^$]+)').match
if 'http' not in url: url = 'http://' + url
itemlist.append(support.Item(title=title, url=url, server='directo', action='play'))
return itemlist


@@ -24,24 +24,25 @@ def get_data(item, head=[]):
for h in head:
headers[h[0]] = h[1]
if not item.count: item.count = 0
matches = support.match(item, patron=r'<script>(.*?location.href=".*?(http[^"]+)";)</').match
if matches:
jstr, location = matches
item.url=support.re.sub(r':\d+', '', location).replace('http://','https://')
if not config.get_setting('key', item.channel) and jstr:
jshe = 'var document = {}, location = {}'
aesjs = str(support.match(host + '/aes.min.js').data)
js_fix = 'window.toHex = window.toHex || function(){for(var d=[],d=1==arguments.length&&arguments[0].constructor==Array?arguments[0]:arguments,e="",f=0;f<d.length;f++)e+=(16>d[f]?"0":"")+d[f].toString(16);return e.toLowerCase()}'
jsret = 'return document.cookie'
key_data = js2py.eval_js( 'function (){ ' + jshe + '\n' + aesjs + '\n' + js_fix + '\n' + jstr + '\n' + jsret + '}' )()
key = key_data.split(';')[0]
if not config.get_setting('key', item.channel):
matches = support.match(item, patron=r'<script>(.*?location.href=".*?(http[^"]+)";)</').match
if matches:
jstr, location = matches
item.url=support.re.sub(r':\d+', '', location).replace('http://','https://')
if jstr:
jshe = 'var document = {}, location = {}'
aesjs = str(support.match(host + '/aes.min.js').data)
js_fix = 'window.toHex = window.toHex || function(){for(var d=[],d=1==arguments.length&&arguments[0].constructor==Array?arguments[0]:arguments,e="",f=0;f<d.length;f++)e+=(16>d[f]?"0":"")+d[f].toString(16);return e.toLowerCase()}'
jsret = 'return document.cookie'
key_data = js2py.eval_js( 'function (){ ' + jshe + '\n' + aesjs + '\n' + js_fix + '\n' + jstr + '\n' + jsret + '}' )()
key = key_data.split(';')[0]
# save Key in settings
config.set_setting('key', key, item.channel)
# save Key in settings
config.set_setting('key', key, item.channel)
# set cookie
headers['cookie'] = config.get_setting('key', item.channel)
res = support.match(item, headers=headers, patron=r';\s*location.href="([^"]+)"')
res = support.match(item, headers=headers, patron=r';\s*location.href=".*?(http[^"]+)"')
if res.match:
item.url= res.match.replace('http://','https://')
data = support.match(item, headers=headers).data
@@ -73,8 +74,8 @@ def mainlist(item):
def genres(item):
action = 'peliculas'
data = get_data(item)
patronBlock = r'<button class="btn btn-sm btn-default dropdown-toggle" data-toggle="dropdown"> Generi <span.[^>]+>(?P<block>.*?)</ul>'
patronMenu = r'<input.*?name="(?P<name>[^"]+)" value="(?P<value>[^"]+)"\s*>[^>]+>(?P<title>[^<]+)<\/label>'
patronBlock = r'dropdown[^>]*>\s*Generi\s*<span.[^>]+>(?P<block>.*?)</ul>'
patronMenu = r'<input.*?name="(?P<name>[^"]+)" value="(?P<value>[^"]+)"\s*>[^>]+>(?P<title>[^<]+)</label>'
def itemHook(item):
item.url = host + '/filter?' + item.name + '=' + item.value + '&sort='
@@ -152,7 +153,7 @@ def peliculas(item):
action='episodios'
# Controlla la lingua se assente
patronNext=r'href="([^"]+)" rel="next"'
patronNext=r'</span></a><a href="([^"]+)"'
typeContentDict={'movie':['movie', 'special']}
typeActionDict={'findvideos':['movie', 'special']}
def itemHook(item):
@@ -172,8 +173,7 @@ def episodios(item):
anime=True
pagination = 50
data = get_data(item)
support.log(data)
patronBlock= r'<div class="server\s*active\s*"(?P<block>.*?)<div class="server'
patronBlock= r'<div class="server\s*active\s*"(?P<block>.*?)(?:<div class="server|<link)'
patron = r'<li[^>]*>\s*<a.*?href="(?P<url>[^"]+)"[^>]*>(?P<episode>[^<]+)<'
def itemHook(item):
item.number = support.re.sub(r'\[[^\]]+\]', '', item.title)
@@ -192,7 +192,7 @@ def findvideos(item):
data = resp.data
for ID, name in resp.matches:
if not item.number: item.number = support.match(item.title, patron=r'(\d+) -').match
match = support.match(data, patronBlock=r'data-name="' + ID + r'"[^>]+>(.*?)<div class="(?:server|download)', patron=r'data-id="([^"]+)" data-episode-num="' + (item.number if item.number else '1') + '"' + r'.*?href="([^"]+)"').match
match = support.match(data, patronBlock=r'data-name="' + ID + r'"[^>]+>(.*?)(?:<div class="(?:server|download)|link)', patron=r'data-id="([^"]+)" data-episode-num="' + (item.number if item.number else '1') + '"' + r'.*?href="([^"]+)"').match
if match:
epID, epurl = match
if 'vvvvid' in name.lower():


@@ -31,7 +31,8 @@ def mainlist(item):
def peliculas(item):
action = 'episodios'
if item.args == 'newest':
patron = r'<span class="serieTitle" style="font-size:20px">(?P<title>[^<]+)\s*<a href="(?P<url>[^"]+)"[^>]*>\s?(?P<episode>\d+[×x]\d+-\d+|\d+[×x]\d+) (?P<title2>[^<]+)\s?\(?(?P<lang>SUB ITA)?\)?</a>'
item.contentType = 'episode'
patron = r'<span class="serieTitle" style="font-size:20px">(?P<title>[^<]+) \s*<a href="(?P<url>[^"]+)"[^>]*>\s?(?P<episode>\d+[×x]\d+-\d+|\d+[×x]\d+) (?P<title2>[^<\(]+)\s?\(?(?P<lang>SUB ITA)?\)?</a>'
pagination = ''
else:
patron = r'<div class="post-thumb">.*?\s<img src="(?P<thumb>[^"]+)".*?><a href="(?P<url>[^"]+)"[^>]+>(?P<title>.+?)\s?(?: Serie Tv)?\s?\(?(?P<year>\d{4})?\)?<\/a><\/h2>'

channels/film4k.json (new file, 11 lines)

@@ -0,0 +1,11 @@
{
"id": "film4k",
"name": "Film4k",
"language": ["ita"],
"active": true,
"thumbnail": "film4k.png",
"banner": "film4k.png",
"categories": ["tvshow", "movie", "anime"],
"not_active": ["include_in_newest_peliculas", "include_in_newest_anime", "include_in_newest_series"],
"settings": []
}

channels/film4k.py (new file, 82 lines)

@@ -0,0 +1,82 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Canale per film4k
# ------------------------------------------------------------
from core import support
from platformcode import logger, config
def findhost():
return support.httptools.downloadpage('https://film4k-nuovo.link').url
host = config.get_channel_url(findhost)
@support.menu
def mainlist(item):
film = ['movies',
('Qualità', ['', 'menu', 'quality']),
('Generi', ['movies', 'menu', 'genres']),
('Anno', ['movies', 'menu', 'releases']),
('Più popolari', ['trending/?get=movies', 'peliculas']),
('Più votati', ['ratings/?get=movies', 'peliculas'])]
tvshow = ['/tvshows',
('Più popolari', ['trending/?get=tv', 'peliculas']),
('Più votati', ['ratings/?get=tv', 'peliculas'])]
return locals()
def search(item, text):
logger.info()
item.url = item.url + "/?s=" + text
try:
return support.dooplay_search(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def peliculas(item):
if 'anime' in item.url:
return support.dooplay_peliculas(item, True)
else:
return support.dooplay_peliculas(item, False)
def episodios(item):
itemlist = support.dooplay_get_episodes(item)
return itemlist
def findvideos(item):
itemlist = []
if item.contentType == 'episode':
linkHead = support.httptools.downloadpage(item.url, only_headers=True).headers['link']
epId = support.scrapertools.find_single_match(linkHead, r'\?p=([0-9]+)>')
for link in support.dooplay_get_links(item, host, paramList=[['tv', epId, 1, 'title', 'server']]):
itemlist.append(
item.clone(action="play", url=link['url']))
else:
for link, quality in support.match(item.url, patron="(" + host + """links/[^"]+).*?class="quality">([^<]+)""").matches:
srv = support.servertools.find_video_items(data=support.httptools.downloadpage(link).data)
for s in srv:
s.quality = quality
itemlist.extend(srv)
return support.server(item, itemlist=itemlist)
@support.scrape
def menu(item):
action = 'peliculas'
if item.args in ['genres','releases']:
patronBlock = r'<nav class="' + item.args + r'">(?P<block>.*?)</nav'
patronMenu= r'<a href="(?P<url>[^"]+)"[^>]*>(?P<title>[^<]+)<'
else:
patronBlock = r'class="main-header">(?P<block>.*?)headitems'
patronMenu = r'(?P<url>' + host + r'quality/[^/]+/\?post_type=movies)">(?P<title>[^<]+)'
return locals()


@@ -10,7 +10,7 @@ from core.item import Item
from platformcode import config
def findhost():
page = httptools.downloadpage("https://www.filmpertutti.group/").data
page = httptools.downloadpage("https://filmpertutti.nuovo.live/").data
url = scrapertools.find_single_match(page, 'Il nuovo indirizzo di FILMPERTUTTI è <a href="([^"]+)')
return url


@@ -2,7 +2,7 @@
"id": "guardaserieIcu",
"name": "Guarda Serie Icu",
"language": ["ita", "sub-ita"],
"active": false,
"active": true,
"thumbnail": "https://raw.githubusercontent.com/32Dexter/DexterRepo/master/media/guarda_serie.jpg",
"banner": "",
"categories": ["tvshow"],


@@ -81,8 +81,12 @@ def live(item):
urls = []
if it['tuningInstruction'] and not it['mediasetstation$digitalOnly']:
guide=current_session.get('https://static3.mediasetplay.mediaset.it/apigw/nownext/' + it['callSign'] + '.json').json()['response']
if 'restartUrl' in guide['currentListing']:
urls = [guide['currentListing']['restartUrl']]
else:
for key in it['tuningInstruction']['urn:theplatform:tv:location:any']:
urls += key['publicUrls']
plot = support.typo(guide['currentListing']['mediasetlisting$epgTitle'],'bold') + '\n' + guide['currentListing']['mediasetlisting$shortDescription'] + '\n' + guide['currentListing']['description'] + '\n\n' + support.typo('A Seguire:' + guide['nextListing']['mediasetlisting$epgTitle'], 'bold')
for key in it['tuningInstruction']['urn:theplatform:tv:location:any']: urls += key['publicUrls']
itemlist.append(item.clone(title=support.typo(it['title'], 'bold'),
fulltitle=it['title'],
show=it['title'],
@@ -209,7 +213,6 @@ def play(item):
item.license = lic_url % support.match(sec_data, patron=r'pid=([^|]+)').match
data = support.match(sec_data, patron=r'<video src="([^"]+)').match
support.log('LICENSE:',item.license)
return support.servertools.find_video_items(item, data=data)
def subBrand(json):


@@ -72,7 +72,7 @@ def peliculas(item):
elif item.contentType == 'episode':
pagination = 35
action = 'findvideos'
patron = r'<td><a href="(?P<url>[^"]+)"(?:[^>]+)?>\s?(?P<title>[^<]+)(?P<episode>[\d\-x]+)?(?P<title2>[^<]+)?<'
patron = r'<td><a href="(?P<url>[^"]+)"(?:[^>]+)?>\s?(?P<title>.*?)(?P<episode>\d+x\d+)[ ]?(?P<title2>[^<]+)?<'
elif item.contentType == 'tvshow':
# SEZIONE Serie TV- Anime - Documentari
@@ -109,10 +109,9 @@ def peliculas(item):
@support.scrape
def episodios(item):
support.log()
action = 'findvideos'
patronBlock = r'<table>(?P<block>.*?)<\/table>'
patron = r'<tr><td>(?:[^<]+)[ ](?:Parte)?(?P<episode>\d+x\d+|\d+)(?:|[ ]?(?P<title2>.+?)?(?:avi)?)<(?P<url>.*?)</td><tr>'
patron = r'<tr><td>(?P<title>.*?)?[ ](?:Parte)?(?P<episode>\d+x\d+|\d+)(?:|[ ]?(?P<title2>.+?)?(?:avi)?)<(?P<url>.*?)</td><tr>'
def itemlistHook(itemlist):
for i, item in enumerate(itemlist):
ep = support.match(item.title, patron=r'\d+x(\d+)').match


@@ -276,6 +276,8 @@ def newest(categoria):
try:
if categoria == "series":
itemlist = peliculas_tv(item)
if itemlist[-1].action == 'peliculas_tv':
itemlist.pop(-1)
except:
import sys


@@ -0,0 +1,10 @@
{
"id": "streamingcommunity",
"name": "Streaming Community",
"active": true,
"language": ["ita"],
"thumbnail": "streamingcommunity.png",
"banner": "streamingcommunity.png",
"categories": ["movie","tvshow"],
"settings": []
}


@@ -0,0 +1,183 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Canale per AnimeUnity
# ------------------------------------------------------------
import requests, json, copy
from core import support, jsontools
from specials import autorenumber
try: from lib import cloudscraper
except: from lib import cloudscraper
host = support.config.get_channel_url()
session=requests.Session()
response = session.get(host)
csrf_token = support.match(response.text, patron= 'name="csrf-token" content="([^"]+)"').match
headers = {'content-type': 'application/json;charset=UTF-8',
'x-csrf-token': csrf_token,
'Cookie' : '; '.join([x.name + '=' + x.value for x in response.cookies])}
@support.menu
def mainlist(item):
film=['',
('Generi',['/film','genres']),
('Titoli del Momento',['/film','peliculas',0]),
('Novità',['/film','peliculas',1]),
('Popolari',['/film','peliculas',2])]
tvshow=['',
('Generi',['/serie-tv','genres']),
('Titoli del Momento',['/serie-tv','peliculas',0]),
('Novità',['/serie-tv','peliculas',1]),
('Popolari',['/serie-tv','peliculas',2])]
search=''
return locals()
def genres(item):
support.log()
itemlist = []
data = support.scrapertools.decodeHtmlentities(support.match(item).data)
args = support.match(data, patronBlock=r'genre-options-json="([^\]]+)\]', patron=r'name"\s*:\s*"([^"]+)').matches
for arg in args:
itemlist.append(item.clone(title=support.typo(arg, 'bold'), args=arg, action='peliculas'))
support.thumb(itemlist, genre=True)
return itemlist
def search(item, text):
support.log('search', item)
item.search = text
try:
return peliculas(item)
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
support.log('search log:', line)
return []
def newest(category):
support.log(category)
itemlist = []
item = support.Item()
item.args = 1
if category == 'peliculas':
item.url = host + '/film'
else:
item.url = host + '/serie-tv'
try:
itemlist = peliculas(item)
if itemlist[-1].action == 'peliculas':
itemlist.pop()
# Continua la ricerca in caso di errore
except:
import sys
for line in sys.exc_info():
support.log(line)
return []
return itemlist
def peliculas(item):
support.log()
itemlist = []
videoType = 'movie' if item.contentType == 'movie' else 'tv'
page = item.page if item.page else 0
offset = page * 60
if type(item.args) == int:
data = support.scrapertools.decodeHtmlentities(support.match(item).data)
records = json.loads(support.match(data, patron=r'slider-title titles-json="(.*?)" slider-name="').matches[item.args])
elif not item.search:
payload = json.dumps({'type': videoType, 'offset':offset, 'genre':item.args})
records = json.loads(requests.post(host + '/infinite/browse', headers=headers, data=payload).json()['records'])
else:
records = requests.get(host + '/search?q=' + item.search + '&live=true', headers=headers).json()['records']
if records and type(records[0]) == list:
js = []
for record in records:
js += record
else:
js = records
for it in js:
title, lang = support.match(it['name'], patron=r'([^\[|$]+)(?:\[([^\]]+)\])?').match
if not lang:
lang = 'ITA'
itm = item.clone(title=support.typo(title,'bold') + support.typo(lang,'_ [] color kod bold'))
itm.type = it['type']
itm.thumbnail = 'https://image.tmdb.org/t/p/w500' + it['images'][0]['url']
itm.fanart = 'https://image.tmdb.org/t/p/w1280' + it['images'][2]['url']
itm.plot = it['plot']
itm.infoLabels['tmdb_id'] = it['tmdb_id']
itm.language = lang
if itm.type == 'movie':
itm.contentType = 'movie'
itm.fulltitle = itm.show = itm.contentTitle = title
itm.contentSerieName = ''
itm.action = 'findvideos'
itm.url = host + '/watch/%s' % it['id']
else:
itm.contentType = 'tvshow'
itm.contentTitle = ''
itm.fulltitle = itm.show = itm.contentSerieName = title
itm.action = 'episodios'
itm.season_count = it['seasons_count']
itm.url = host + '/titles/%s-%s' % (it['id'], it['slug'])
itemlist.append(itm)
if len(itemlist) >= 60:
itemlist.append(item.clone(title=support.typo(support.config.get_localized_string(30992), 'color kod bold'), thumbnail=support.thumb(), page=page + 1))
support.tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
def episodios(item):
support.log()
itemlist = []
js = json.loads(support.match(item.url, patron=r'seasons="([^"]+)').match.replace('&quot;','"'))
support.log(js)
for episodes in js:
for it in episodes['episodes']:
support.log(it)
itemlist.append(
support.Item(channel=item.channel,
title=support.typo(str(episodes['number']) + 'x' + str(it['number']).zfill(2) + ' - ' + it['name'], 'bold'),
episode = it['number'],
season=episodes['number'],
thumbnail='https://image.tmdb.org/t/p/w1280' + it['images'][0]['url'],
fanart='https://image.tmdb.org/t/p/w1280' + it['images'][0]['url'],
plot=it['plot'],
action='findvideos',
contentType='episode',
url=host + '/watch/' + str(episodes['title_id']) + '?e=' + str(it['id'])))
support.videolibrary(itemlist, item)
support.download(itemlist, item)
return itemlist
def findvideos(item):
support.log()
itemlist=[]
url = support.match(support.match(item).data.replace('&quot;','"').replace('\\',''), patron=r'video_url"\s*:\s*"([^"]+)"').match
playlist = support.match(url, patron=r'\./([^.]+)').matches
for res in playlist:
itemlist.append(item.clone(title='Diretto', server='directo', url=url.replace('playlist',res), quality=res, action='play'))
return support.server(item, itemlist=itemlist)


@@ -21,8 +21,6 @@ host = config.get_channel_url(findhost)
headers = [['Referer', host]]
@support.menu
def mainlist(item):
log()
@@ -147,13 +145,10 @@ def search(item, texto):
def newest(categoria):
if categoria == 'series':
item = Item(url=host + '/aggiornamenti-giornalieri-serie-tv-2')
item.contentType = 'tvshow'
patronBlock = 'Aggiornamenti Giornalieri Serie TV.*?<div class="sp-body folded">(?P<block>.*?)</div>'
patron = '<p>(?P<title>.*?)\((?P<year>[0-9]{4})-?\)\s*streaming.*?href="(?P<url>[^"]+)'
def itemHook(item):
item.title = item.contentTitle = item.fulltitle = item.contentSerieName = item.contentTitle = scrapertools.htmlclean(item.title)
return item
data = support.match(item).data.replace('<u>','').replace('</u>','')
item.contentType = 'episode'
patronBlock = r'Aggiornamenti Giornalieri Serie TV.*?<div class="sp-body folded">(?P<block>.*?)</div>'
patron = r'<p>(?P<title>.*?)\((?P<year>[0-9]{4})[^\)]*\)[^<]+<a href="(?P<url>[^"]+)">(?P<episode>[^ ]+) (?P<lang>[Ss][Uu][Bb].[Ii][Tt][Aa])?(?P<title2>[^<]+)?'
return locals()


@@ -308,7 +308,7 @@ def thumb(item_or_itemlist=None, genre=False, thumb=''):
'noir':['noir'],
'popular' : ['popolari','popolare', 'più visti'],
'thriller':['thriller'],
'top_rated' : ['fortunato', 'votati', 'lucky'],
'top_rated' : ['fortunato', 'votati', 'lucky', 'top'],
'on_the_air' : ['corso', 'onda', 'diretta', 'dirette'],
'western':['western'],
'vos':['sub','sub-ita'],


@@ -64,10 +64,11 @@ def hdpass_get_servers(item):
data = httptools.downloadpage(url, CF=False).data
patron_res = '<div class="buttons-bar resolutions-bar">(.*?)<div class="buttons-bar'
patron_mir = '<div class="buttons-bar hosts-bar">(.*?)<div id="fake'
patron_option = r'<a href="([^"]+?)".*?>([^<]+?)</a>'
patron_mir = '<div class="buttons-bar hosts-bar">(.*?)<div id="main-player'
patron_option = r'<a href="([^"]+?)"[^>]+>([^<]+?)</a'
res = scrapertools.find_single_match(data, patron_res)
# dbg()
with futures.ThreadPoolExecutor() as executor:
thL = []
@@ -284,8 +285,8 @@ def scrapeBlock(item, args, block, patron, headers, action, pagination, debug, t
# make formatted Title [longtitle]
s = ' - '
title = episode + (s if episode and title else '') + title
longtitle = title + (s if title and title2 else '') + title2 + '\n'
# title = episode + (s if episode and title else '') + title
longtitle = episode + (s if episode and (title or title2) else '') + title + (s if title and title2 else '') + title2
if sceneTitle:
from lib.guessit import guessit
@@ -475,6 +476,9 @@ def scrape(func):
if 'itemlistHook' in args:
itemlist = args['itemlistHook'](itemlist)
if 'ItemItemlistHook' in args:
itemlist = args['ItemItemlistHook'](item, itemlist)
# if url may be changed and channel has findhost to update
if 'findhost' in func.__globals__ and not itemlist:
logger.info('running findhost ' + func.__module__)
@@ -541,14 +545,15 @@ def scrape(func):
return wrapper
def dooplay_get_links(item, host):
def dooplay_get_links(item, host, paramList=[]):
# get links from websites using dooplay theme and dooplay_player
# return a list of dict containing these values: url, title and server
data = httptools.downloadpage(item.url).data.replace("'", '"')
patron = r'<li id="player-option-[0-9]".*?data-type="([^"]+)" data-post="([^"]+)" data-nume="([^"]+)".*?<span class="title".*?>([^<>]+)</span>(?:<span class="server">([^<>]+))?'
matches = scrapertools.find_multiple_matches(data, patron)
if not paramList:
data = httptools.downloadpage(item.url).data.replace("'", '"')
patron = r'<li id="player-option-[0-9]".*?data-type="([^"]+)" data-post="([^"]+)" data-nume="([^"]+)".*?<span class="title".*?>([^<>]+)</span>(?:<span class="server">([^<>]+))?'
matches = scrapertools.find_multiple_matches(data, patron)
else:
matches = paramList
ret = []
for type, post, nume, title, server in matches:
@@ -987,6 +992,7 @@ def match_dbg(data, patron):
def download(itemlist, item, typography='', function_level=1, function=''):
if config.get_setting('downloadenabled'):
if not typography: typography = 'color kod bold'
if item.contentType == 'movie':
@@ -995,9 +1001,14 @@ def download(itemlist, item, typography='', function_level=1, function=''):
elif item.contentType == 'episode':
from_action = 'findvideos'
title = typo(config.get_localized_string(60356), typography) + ' - ' + item.title
elif item.contentType == 'tvshow':
from_action = 'episodios'
elif item.contentType in 'tvshow':
if item.channel == 'community' and config.get_setting('show_seasons', item.channel):
from_action = 'season'
else:
from_action = 'episodios'
title = typo(config.get_localized_string(60355), typography)
elif item.contentType in 'season':
from_action = 'get_seasons'
else: # content type does not support download
return itemlist
@@ -1016,7 +1027,7 @@ def download(itemlist, item, typography='', function_level=1, function=''):
break
else:
show = False
if show:
if show and item.contentType != 'season':
itemlist.append(
Item(channel='downloads',
from_channel=item.channel,
@@ -1207,8 +1218,13 @@ def server(item, data='', itemlist=[], headers='', AutoPlay=True, CheckLinks=Tru
checklinks_number = config.get_setting('checklinks_number')
verifiedItemlist = servertools.check_list_links(verifiedItemlist, checklinks_number)
if AutoPlay and not 'downloads' in inspect.stack()[3][1] or not 'downloads' in inspect.stack()[3][1] or not inspect.stack()[4][1]:
autoplay.start(verifiedItemlist, item)
try:
if AutoPlay and not 'downloads' in inspect.stack()[3][1] or not 'downloads' in inspect.stack()[3][1] or not inspect.stack()[4][1]:
autoplay.start(verifiedItemlist, item)
except:
import traceback
logger.error(traceback.format_exc())
pass
if Videolibrary and item.contentChannel != 'videolibrary':
videolibrary(verifiedItemlist, item)
@@ -1247,7 +1263,7 @@ def log(*args):
def channel_config(item, itemlist):
from channelselector import get_thumb
from channelselector import get_thumb
itemlist.append(
Item(channel='setting',
action="channel_config",


@@ -988,6 +988,7 @@ def add_movie(item):
@param item: item to be saved.
"""
logger.info()
from platformcode.launcher import set_search_temp; set_search_temp(item)
# To disambiguate titles, TMDB is caused to ask for the really desired title
# The user can select the title among those offered on the first screen
@@ -1034,6 +1035,7 @@ def add_tvshow(item, channel=None):
"""
logger.info("show=#" + item.show + "#")
from platformcode.launcher import set_search_temp; set_search_temp(item)
if item.channel == "downloads":
itemlist = [item.clone()]


@@ -8,8 +8,10 @@ PY3 = False
if sys.version_info[0] >= 3:PY3 = True; unicode = str; unichr = chr; long = int
from core.item import Item
from core import filetools
from platformcode import config, logger, platformtools
from platformcode.logger import WebErrorException
temp_search_file = config.get_temp_file('temp-search')
def start():
@@ -242,6 +244,20 @@ def run(item=None):
# Special action for searching, first asks for the words then call the "search" function
elif item.action == "search":
# from core.support import dbg;dbg()
if filetools.isfile(temp_search_file) and config.get_setting('videolibrary_kodi'):
itemlist = []
f = filetools.read(temp_search_file)
strList = f.split(',')
if strList[0] == '[V]' and strList[1] == item.channel:
for it in strList:
if it and it not in ['[V]', item.channel]:
itemlist.append(Item().fromurl(it))
filetools.write(temp_search_file, f[4:])
return platformtools.render_items(itemlist, item)
else:
filetools.remove(temp_search_file)
logger.info("item.action=%s" % item.action.upper())
from core import channeltools
@@ -250,15 +266,11 @@ def run(item=None):
else:
last_search = ''
tecleado = platformtools.dialog_input(last_search)
search_text = platformtools.dialog_input(last_search)
if tecleado is not None:
channeltools.set_channel_setting('Last_searched', tecleado, 'search')
if 'search' in dir(channel):
itemlist = channel.search(item, tecleado)
else:
from core import support
itemlist = support.search(channel, item, tecleado)
if search_text is not None:
channeltools.set_channel_setting('Last_searched', search_text, 'search')
itemlist = new_search(item.clone(text=search_text), channel)
else:
return
@@ -276,8 +288,7 @@ def run(item=None):
trakt_tools.auth_trakt()
else:
import xbmc
if not xbmc.getCondVisibility('System.HasAddon(script.trakt)') and config.get_setting(
'install_trakt'):
if not xbmc.getCondVisibility('System.HasAddon(script.trakt)') and config.get_setting('install_trakt'):
trakt_tools.ask_install_script()
itemlist = trakt_tools.trakt_check(itemlist)
else:
@@ -330,6 +341,24 @@ def run(item=None):
log_message)
def new_search(item, channel=None):
itemlist=[]
if 'search' in dir(channel):
itemlist = channel.search(item, item.text)
else:
from core import support
itemlist = support.search(channel, item, item.text)
writelist = item.channel
for it in itemlist:
writelist += ',' + it.tourl()
filetools.write(temp_search_file, writelist)
return itemlist
def set_search_temp(item):
if filetools.isfile(temp_search_file) and config.get_setting('videolibrary_kodi'):
f = '[V],' + filetools.read(temp_search_file)
filetools.write(temp_search_file, f)
def reorder_itemlist(itemlist):
logger.info()


@@ -13,7 +13,12 @@ PY3 = False
if sys.version_info[0] >= 3: PY3 = True; unicode = str; unichr = chr; long = int
loggeractive = (config.get_setting("debug") == True)
try:
xbmc.KodiStub()
testMode = True
import cgi
except:
testMode = False
def log_enable(active):
global loggeractive
@@ -39,6 +44,9 @@ def encode_log(message=""):
else:
message = str(message)
if testMode:
message = cgi.escape(message).replace('\n', '<br>')
return message


@@ -15,7 +15,7 @@ else:
import os, xbmc, xbmcgui, xbmcplugin
from past.utils import old_div
from channelselector import get_thumb
from core import trakt_tools, scrapertools
from core import scrapertools
from core.item import Item
from platformcode import logger, config
@@ -920,6 +920,7 @@ def set_player(item, xlistitem, mediaurl, view, strm, nfo_path=None, head_nfo=No
# Reproduce
xbmc_player.play(playlist, xlistitem)
if config.get_setting('trakt_sync'):
from core import trakt_tools
trakt_tools.wait_for_update_trakt()
elif player_mode == 1:


@@ -171,7 +171,7 @@ class SettingsWindow(xbmcgui.WindowXMLDialog):
if not self.list_controls:
# If the channel path is in the "channels" folder, we get the controls and values using chaneltools
if os.path.join(config.get_runtime_path(), "channels") or os.path.join(config.get_runtime_path(), "specials") in channelpath:
if os.path.join(config.get_runtime_path(), "channels") in channelpath or os.path.join(config.get_runtime_path(), "specials") in channelpath:
# The call is made from a channel
self.list_controls, default_values = channeltools.get_channel_controls_settings(self.channel)


@@ -25,7 +25,7 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
global data
post = urllib.urlencode({k: v for k, v in scrapertools.find_multiple_matches(data, "name='([^']+)' value='([^']*)'")})
time.sleep(2.1)
time.sleep(2.5)
data = httptools.downloadpage(page_url, post=post).data
videos_packed = scrapertools.find_single_match(data, r"</div>\s*<script type='text/javascript'>(eval.function.p,a,c,k,e,.*?)\s*</script>")


@@ -4,7 +4,7 @@
"ignore_urls": [],
"patterns": [
{
"pattern": "https?://hdmario.live/embed/([0-9]+)",
"pattern": "https?://hdmario.live/\\w+/([0-9]+)",
"url": "https://hdmario.live/embed/\\1"
}
]
@@ -36,6 +36,22 @@
],
"type": "list",
"visible": false
},
{
"default": "",
"enabled": true,
"id": "username",
"label": "username",
"type": "text",
"visible": true
},
{
"default": "",
"enabled": true,
"id": "password",
"label": "password",
"type": "text",
"visible": true
}
]
}
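The username and password entries added above back the account support called out in the changelog. A minimal sketch of how a resolver reads them, reusing the config.get_setting calls that appear in the hdmario.py change below; the helper name is illustrative only:

from platformcode import config

def hdmario_credentials():
    # Read the credentials stored through the two new 'username' / 'password' server settings.
    user = config.get_setting('username', server='hdmario')
    password = config.get_setting('password', server='hdmario')
    return user, password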


@@ -2,7 +2,7 @@
import xbmc
from core import httptools, scrapertools, filetools
from platformcode import logger, config
from platformcode import logger, config, platformtools
baseUrl = 'https://hdmario.live'
@@ -19,10 +19,56 @@ def test_video_exists(page_url):
return True, ""
def login():
httptools.downloadpage(page.url.replace('/unauthorized', '/login'),
post={'email': config.get_setting('username', server='hdmario'),
'password': config.get_setting('password', server='hdmario')})
def registerOrLogin(page_url, forced=False):
if not forced and config.get_setting('username', server='hdmario') and config.get_setting('password', server='hdmario'):
login()
else:
if platformtools.dialog_yesno('HDmario',
'Questo server necessita di un account, ne hai già uno oppure vuoi tentare una registrazione automatica?',
yeslabel='Accedi', nolabel='Tenta registrazione'):
from specials import setting
from core.item import Item
setting.server_config(Item(config='hdmario'))
login()
else:
logger.info('Registrazione automatica in corso')
import random
import string
randEmail = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(random.randint(9, 14))) + '@gmail.com'
randPsw = ''.join(random.choice(string.ascii_letters + string.digits) for i in range(10))
logger.info('email: ' + randEmail)
logger.info('pass: ' + randPsw)
nTry = 0
while nTry < 5:
nTry += 1
rq = 'loggedin' in httptools.downloadpage(baseUrl + '/register/',
post={'email': randEmail, 'email_confirmation': randEmail,
'password': randPsw,
'password_confirmation': randPsw}).url
if rq:
config.set_setting('username', randEmail, server='hdmario')
config.set_setting('password', randPsw, server='hdmario')
platformtools.dialog_ok('HDmario',
'Registrato automaticamente con queste credenziali:\nemail:' + randEmail + '\npass: ' + randPsw)
break
else:
platformtools.dialog_ok('HDmario', 'Impossibile registrarsi automaticamente')
logger.info('Registrazione completata')
global page, data
page = httptools.downloadpage(page_url)
data = page.data
def get_video_url(page_url, premium=False, user="", password="", video_password=""):
global page, data
page_url = page_url.replace('?', '')
logger.info("url=" + page_url)
if 'unconfirmed' in page.url:
from lib import onesecmail
id = page_url.split('/')[-1]
@@ -38,6 +84,13 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
code = jsonMail['subject'].split(' - ')[0]
page = httptools.downloadpage(page_url + '?code=' + code)
data = page.data
if '/unauthorized' in page.url:
registerOrLogin(page_url)
if 'Registrati' in data:
platformtools.dialog_ok('HDmario', 'Username/password non validi')
registerOrLogin(page_url, True)
logger.info(data)
from lib import jsunpack_js2py
unpacked = jsunpack_js2py.unpack(scrapertools.find_single_match(data, '<script type="text/javascript">\n*\s*\n*(eval.*)'))


@@ -6,6 +6,7 @@ from core import httptools
from core import scrapertools
from lib import js2py
from platformcode import logger, config
import re
def test_video_exists(page_url):
@@ -23,12 +24,47 @@ def get_video_url(page_url, premium=False, user="", password="", video_password=
logger.info("(page_url='%s')" % page_url)
video_urls = []
global page_data
dec = scrapertools.find_single_match(page_data, '(\$=~\[\];.*?\(\)\))\(\);')
# needed to increase recursion
import sys
sys.setrecursionlimit(10000)
video_url = scrapertools.find_single_match(decode(page_data), r"'src',\s*'([^']+)")
video_urls.append([video_url.split('.')[-1] + ' [MyStream]', video_url])
return video_urls
deObfCode = js2py.eval_js(dec)
def decode(data):
# adapted from ResolveURL code - https://github.com/jsergio123/script.module.resolveurl
video_urls.append(['mp4 [mystream]', scrapertools.find_single_match(str(deObfCode), "'src',\s*'([^']+)")])
return video_urls
first_group = scrapertools.find_single_match(data, r'"\\"("\+.*?)"\\""\)\(\)\)\(\)')
match = scrapertools.find_single_match(first_group, r"(\(!\[\]\+\"\"\)\[.+?\]\+)")
if match:
first_group = first_group.replace(match, 'l').replace('$.__+', 't').replace('$._+', 'u').replace('$._$+', 'o')
tmplist = []
js = scrapertools.find_single_match(data, r'(\$={.+?});')
if js:
js_group = js[3:][:-1]
second_group = js_group.split(',')
i = -1
for x in second_group:
a, b = x.split(':')
if b == '++$':
i += 1
tmplist.append(("$.{}+".format(a), i))
elif b == '(![]+"")[$]':
tmplist.append(("$.{}+".format(a), 'false'[i]))
elif b == '({}+"")[$]':
tmplist.append(("$.{}+".format(a), '[object Object]'[i]))
elif b == '($[$]+"")[$]':
tmplist.append(("$.{}+".format(a), 'undefined'[i]))
elif b == '(!""+"")[$]':
tmplist.append(("$.{}+".format(a), 'true'[i]))
tmplist = sorted(tmplist, key=lambda z: str(z[1]))
for x in tmplist:
first_group = first_group.replace(x[0], str(x[1]))
first_group = first_group.replace('\\"', '\\').replace("\"\\\\\\\\\"", "\\\\").replace('\\"', '\\').replace('"', '').replace("+", "")
return first_group.encode('ascii').decode('unicode-escape').encode('ascii').decode('unicode-escape')


@@ -255,7 +255,7 @@ def get_seasons(item):
action='episodios',
contentSeason=option['season'],
infoLabels=infoLabels,
contentType='season',
contentType='season' if show_seasons else 'tvshow',
path=extra.path))
if inspect.stack()[2][3] in ['add_tvshow', 'get_episodes', 'update', 'find_episodes', 'get_newest'] or show_seasons == False:
@@ -265,6 +265,10 @@ def get_seasons(item):
itemlist = itlist
if inspect.stack()[2][3] not in ['add_tvshow', 'get_episodes', 'update', 'find_episodes', 'get_newest'] and defp and not item.disable_pagination:
itemlist = pagination(item, itemlist)
if show_seasons:
support.videolibrary(itemlist, item)
support.download(itemlist, item)
return itemlist
@@ -353,16 +357,17 @@ def episodios(item, json ='', key='', itemlist =[]):
itemlist = []
for season in season_list:
itemlist.append(Item(channel=item.channel,
title=set_title(config.get_localized_string(60027) % season),
fulltitle=itm.fulltitle,
show=itm.show,
thumbnails=itm.thumbnails,
url=itm.url,
action='episodios',
contentSeason=season,
infoLabels=infoLabels,
filterseason=str(season),
path=item.path))
title=set_title(config.get_localized_string(60027) % season),
fulltitle=itm.fulltitle,
show=itm.show,
thumbnails=itm.thumbnails,
url=itm.url,
action='episodios',
contentSeason=season,
contentType = 'episode',
infoLabels=infoLabels,
filterseason=str(season),
path=item.path))
elif defp and inspect.stack()[1][3] not in ['get_seasons'] and not item.disable_pagination:
if Pagination and len(itemlist) >= Pagination:
@@ -371,6 +376,9 @@ def episodios(item, json ='', key='', itemlist =[]):
item.page = pag + 1
item.thumbnail = support.thumb()
itemlist.append(item)
if not show_seasons:
support.videolibrary(itemlist, item)
support.download(itemlist, item)
return itemlist


@@ -840,7 +840,7 @@ def start_download(item):
def get_episodes(item):
log("contentAction: %s | contentChannel: %s | contentType: %s" % (item.contentAction, item.contentChannel, item.contentType))
if 'dlseason' in item:
season = True
season_number = item.dlseason
@@ -864,8 +864,9 @@ def get_episodes(item):
episodes = getattr(channel, item.contentAction)(item)
itemlist = []
if episodes and not scrapertools.find_single_match(episodes[0].title, r'(\d+.\d+)') and item.channel not in ['videolibrary']:
if episodes and not scrapertools.find_single_match(episodes[0].title, r'(\d+.\d+)') and item.channel not in ['videolibrary'] and item.action != 'season':
from specials.autorenumber import select_type, renumber, check
# support.dbg()
if not check(item):
select_type(item)
return get_episodes(item)


@@ -29,18 +29,15 @@
]
},
{
"id": "perfil",
"id": "order",
"type": "list",
"label": "@60666",
"label": "Ordinamento",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": [
"@60667",
"@60668",
"@60669",
"@60670",
"@60671"
"Default",
"Alfabetico"
]
}
]
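The 'order' setting introduced above drives the new sorting option in the news section. A minimal sketch of how it is applied, mirroring the no_group change in news.py further down; the function name is illustrative only:

from platformcode import config

def apply_news_order(itemlist):
    # 'order' == 1 corresponds to "Alfabetico": sort the collected news entries by title.
    if config.get_setting('order', 'news') == 1:
        itemlist = sorted(itemlist, key=lambda it: it.title.lower())
    return itemlist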


@@ -435,9 +435,9 @@ def get_title(item):
if item.quality:
title += support.typo(item.quality, '_ [] color kod')
season_ = support.typo(config.get_localized_string(70736), '_ [] color white bold') if (type(item.args) != bool and 'season_completed' in item.news and not item.episode) else ''
if season_:
title += season_
# season_ = support.typo(config.get_localized_string(70736), '_ [] color white bold') if (type(item.args) != bool and 'season_completed' in item.news and not item.episode) else ''
# if season_:
# title += season_
return title
@@ -453,8 +453,9 @@ def no_group(list_result_canal):
# i.text_color = color3
itemlist.append(i.clone())
return sorted(itemlist, key=lambda it: it.title.lower())
if config.get_setting('order','news') == 1:
itemlist = sorted(itemlist, key=lambda it: it.title.lower())
return itemlist
def group_by_channel(list_result_canal):
@@ -472,15 +473,10 @@ def group_by_channel(list_result_canal):
# We add the content found in the list_result list
for c in sorted(dict_canales):
itemlist.append(Item(channel="news", title=channels_id_name[c] + ':', text_color=color1, text_bold=True))
channel_params = channeltools.get_channel_parameters(c)
itemlist.append(Item(channel="news", title=support.typo(channel_params['title'],'bullet bold color kod'), thumbnail=channel_params['thumbnail']))
for i in dict_canales[c]:
## if i.contentQuality:
## i.title += ' (%s)' % i.contentQuality
## if i.contentLanguage:
## i.title += ' [%s]' % i.contentLanguage
## i.title = ' %s' % i.title
#### i.text_color = color3
itemlist.append(i.clone())
return itemlist
@@ -526,12 +522,10 @@ def group_by_content(list_result_canal):
else:
title += config.get_localized_string(70211) % (', '.join([i for i in canales_no_duplicados]))
new_item = v[0].clone(channel="news", title=title, action="show_channels",
sub_list=[i.tourl() for i in v], extra=channels_id_name)
new_item = v[0].clone(channel="news", title=title, action="show_channels", sub_list=[i.tourl() for i in v], extra=channels_id_name)
else:
new_item = v[0].clone(title=title)
## new_item.text_color = color3
list_result.append(new_item)
return sorted(list_result, key=lambda it: it.title.lower())
@@ -553,6 +547,7 @@ def show_channels(item):
## new_item.title += ' [%s]' % new_item.language
## new_item.title += ' (%s)' % channels_id_name[new_item.channel]
new_item.text_color = color1
new_item.title += typo(new_item.channel, '[]')
itemlist.append(new_item.clone())
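group_by_channel and group_by_content both reduce the flat news list to a dictionary, keyed on the channel in the first case and on the title in the second, then emit one header or one merged entry per key. A toy version of the two groupings, using (channel, title) tuples instead of KoD Item objects:

from collections import defaultdict

results = [('cb01', 'Film A'), ('cb01', 'Film B'), ('cinemalibero', 'Film A')]

def group_by_channel(results):
    grouped = defaultdict(list)
    for channel, title in results:
        grouped[channel].append(title)     # one block of titles per channel
    return dict(grouped)

def group_by_content(results):
    grouped = defaultdict(list)
    for channel, title in results:
        grouped[title].append(channel)     # one entry per title, listing its channels
    return dict(grouped)

print(group_by_channel(results))   # e.g. {'cb01': ['Film A', 'Film B'], 'cinemalibero': ['Film A']}
print(group_by_content(results))   # e.g. {'Film A': ['cb01', 'cinemalibero'], 'Film B': ['cb01']}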


@@ -95,6 +95,10 @@ def saved_search(item):
def new_search(item):
logger.info()
temp_search_file = config.get_temp_file('temp-search')
if filetools.isfile(temp_search_file):
filetools.remove(temp_search_file)
itemlist = []
if config.get_setting('last_search'):
last_search = channeltools.get_channel_setting('Last_searched', 'search', '')
@@ -149,11 +153,11 @@ def new_search(item):
itemlist.append(new_item)
if item.mode == 'all' or not itemlist:
itemlist = channel_search(Item(channel=item.channel,
title=searched_text,
text=searched_text,
mode='all',
infoLabels={}))
return channel_search(Item(channel=item.channel,
title=searched_text,
text=searched_text,
mode='all',
infoLabels={}))
return itemlist
@@ -177,6 +181,18 @@ def channel_search(item):
item.text = item.infoLabels['title']
item.title = item.text
temp_search_file = config.get_temp_file('temp-search')
if filetools.isfile(temp_search_file):
itemlist = []
f = filetools.read(temp_search_file)
if f.startswith(item.text):
for it in f.split(','):
if it and it != item.text:
itemlist.append(Item().fromurl(it))
return itemlist
else:
filetools.remove(temp_search_file)
searched_id = item.infoLabels['tmdb_id']
channel_list, channel_titles = get_channels(item)
@@ -185,8 +201,7 @@ def channel_search(item):
searching_titles += channel_titles
cnt = 0
progress = platformtools.dialog_progress(config.get_localized_string(30993) % item.title, config.get_localized_string(70744) % len(channel_list),
', '.join(searching_titles))
progress = platformtools.dialog_progress(config.get_localized_string(30993) % item.title, config.get_localized_string(70744) % len(channel_list), ', '.join(searching_titles))
config.set_setting('tmdb_active', False)
search_action_list = []
@@ -234,14 +249,12 @@ def channel_search(item):
cnt += 1
searching_titles.remove(searching_titles[searching.index(channel)])
searching.remove(channel)
progress.update(old_div(((total_search_actions - len(search_action_list)) * 100), total_search_actions), config.get_localized_string(70744) % str(len(channel_list) - cnt),
', '.join(searching_titles))
progress.update(old_div(((total_search_actions - len(search_action_list)) * 100), total_search_actions), config.get_localized_string(70744) % str(len(channel_list) - cnt), ', '.join(searching_titles))
progress.close()
cnt = 0
progress = platformtools.dialog_progress(config.get_localized_string(30993) % item.title, config.get_localized_string(60295),
config.get_localized_string(60293))
progress = platformtools.dialog_progress(config.get_localized_string(30993) % item.title, config.get_localized_string(60295), config.get_localized_string(60293))
config.set_setting('tmdb_active', True)
# res_count = 0
@@ -330,7 +343,14 @@ def channel_search(item):
if results:
results.insert(0, Item(title=typo(config.get_localized_string(30025), 'color kod bold'), thumbnail=get_thumb('search.png')))
# logger.debug(results_statistic)
return valid + results
itlist = valid + results
writelist = item.text
for it in itlist:
writelist += ',' + it.tourl()
filetools.write(temp_search_file, writelist)
return itlist
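This write-back is the fix for searches restarting after a Kodi refresh: once channel_search finishes, the query plus every result item (serialized with tourl()) goes into a comma-joined temp file, and a re-entry with the same text is rebuilt from that file via fromurl() instead of querying the channels again; new_search and a mismatched query both delete the stale cache. A self-contained sketch of the same round trip, with plain strings and open()/os.remove() standing in for Item.tourl()/fromurl() and filetools:

import os
import tempfile

TEMP = os.path.join(tempfile.gettempdir(), 'temp-search')

def run_search(text):                  # stand-in for querying every channel
    return ['%s result %d' % (text, n) for n in range(3)]

def channel_search(text):
    if os.path.isfile(TEMP):           # a previous run left its results behind
        with open(TEMP) as f:
            cached = f.read().split(',')
        if cached and cached[0] == text:
            return cached[1:]          # same query: reuse, do not search again
        os.remove(TEMP)                # different query: drop the stale cache
    results = run_search(text)
    with open(TEMP, 'w') as f:
        f.write(','.join([text] + results))
    return results

print(channel_search('matrix'))   # runs the search and writes the temp file
print(channel_search('matrix'))   # served from the cache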
def get_channel_results(item, module_dict, search_action):

237
tests.py

@@ -1,237 +0,0 @@
# -*- coding: utf-8 -*-
import os
import sys
import unittest
import parameterized
from platformcode import config
config.set_setting('tmdb_active', False)
librerias = os.path.join(config.get_runtime_path(), 'lib')
sys.path.insert(0, librerias)
from core.support import typo
from core.item import Item
from core.httptools import downloadpage
from core import servertools
import channelselector
import re
validUrlRegex = re.compile(
r'^(?:http|ftp)s?://' # http:// or https://
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # domain...
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)$', re.IGNORECASE)
chBlackList = ['url']
chNumRis = {
'altadefinizione01': {
'Film': 20
},
'altadefinizione01_link': {
'Film': 16,
'Serie TV': 16,
},
'altadefinizioneclick': {
'Film': 36,
'Serie TV': 12,
},
'casacinema': {
'Film': 10,
'Serie TV': 10,
},
'cineblog01': {
'Film': 12,
'Serie TV': 13
},
'cinemalibero': {
'Film': 20,
'Serie TV': 20,
},
'cinetecadibologna': {
'Film': 10
},
'eurostreaming': {
'Serie TV': 18
},
'Filmpertutti': {
'Film': 24,
'Serie TV': 24,
},
'guardaserieclick': {
'da controllare': 0
},
'hd4me': {
'Film': 10
},
'ilgeniodellostreaming': {
'Film': 30,
'Serie TV': 30
},
'italiaserie': {
'Serie TV': 20
},
'casacinemaInfo': {
'Film': 150
},
'netfreex': {
'Film': 30,
'Serie TV': 30
},
'piratestreaming': {
'Film': 24,
'Serie TV': 24
},
'polpotv': {
'Film': 12,
'Serie TV': 12
},
'streamingaltadefinizione': {
'Film': 30,
'Serie TV': 30
},
'seriehd': {
'Serie TV': 12
},
'serietvonline': {
'Film': 35,
'Serie TV': 35
},
'tantifilm': {
'Film': 20,
'Serie TV': 20
},
}
def getChannels():
channel_list = channelselector.filterchannels("all")
ret = []
for chItem in channel_list:
ch = chItem.channel
if ch not in chBlackList:
ret.append({'ch': ch})
return ret
from specials import news
dictNewsChannels, any_active = news.get_channels_list()
servers_found = []
def getServers():
ret = []
for srv in servers_found:
ret.append({'item': srv})
return ret
@parameterized.parameterized_class(getChannels())
class GenericChannelTest(unittest.TestCase):
def __init__(self, *args):
self.module = __import__('channels.%s' % self.ch, fromlist=["channels.%s" % self.ch])
super(GenericChannelTest, self).__init__(*args)
def test_menuitems(self):
hasChannelConfig = False
mainlist = self.module.mainlist(Item())
self.assertTrue(mainlist, 'channel ' + self.ch + ' has no menu')
for it in mainlist:
print 'testing ' + self.ch + ' -> ' + it.title
if it.action == 'channel_config':
hasChannelConfig = True
continue
if it.action == 'search': # channel-specific
continue
itemlist = getattr(self.module, it.action)(it)
self.assertTrue(itemlist, 'channel ' + self.ch + ' -> ' + it.title + ' is empty')
if self.ch in chNumRis: # i know how much results should be
for content in chNumRis[self.ch]:
if content in it.title:
risNum = len([i for i in itemlist if not i.nextPage]) # not count nextpage
self.assertEqual(chNumRis[self.ch][content], risNum,
'channel ' + self.ch + ' -> ' + it.title + ' returned wrong number of results')
break
for resIt in itemlist:
self.assertLess(len(resIt.fulltitle), 110,
'channel ' + self.ch + ' -> ' + it.title + ' might contain wrong titles\n' + resIt.fulltitle)
if resIt.url:
self.assertIsInstance(resIt.url, str, 'channel ' + self.ch + ' -> ' + it.title + ' -> ' + resIt.title + ' contain non-string url')
self.assertIsNotNone(re.match(validUrlRegex, resIt.url),
'channel ' + self.ch + ' -> ' + it.title + ' -> ' + resIt.title + ' might contain wrong url\n' + resIt.url)
if 'year' in resIt.infoLabels and resIt.infoLabels['year']:
msgYear = 'channel ' + self.ch + ' -> ' + it.title + ' might contain wrong infolabels year\n' + str(
resIt.infoLabels['year'])
self.assert_(type(resIt.infoLabels['year']) is int or resIt.infoLabels['year'].isdigit(), msgYear)
self.assert_(int(resIt.infoLabels['year']) > 1900 and int(resIt.infoLabels['year']) < 2100, msgYear)
if resIt.title == typo(config.get_localized_string(30992), 'color kod bold'): # next page
nextPageItemlist = getattr(self.module, resIt.action)(resIt)
self.assertTrue(nextPageItemlist,
'channel ' + self.ch + ' -> ' + it.title + ' has nextpage not working')
# some sites might have no link inside, but if all results are without servers, there's something wrong
servers = []
for resIt in itemlist:
if hasattr(self.module, resIt.action):
servers = getattr(self.module, resIt.action)(resIt)
else:
servers = [resIt]
if servers:
break
self.assertTrue(servers, 'channel ' + self.ch + ' -> ' + it.title + ' has no servers on all results')
for server in servers:
srv = server.server.lower()
if not srv:
continue
module = __import__('servers.%s' % srv, fromlist=["servers.%s" % srv])
page_url = server.url
print 'testing ' + page_url
self.assert_(hasattr(module, 'test_video_exists'), srv + ' has no test_video_exists')
if module.test_video_exists(page_url)[0]:
urls = module.get_video_url(page_url)
server_parameters = servertools.get_server_parameters(srv)
self.assertTrue(urls or server_parameters.get("premium"), srv + ' scraper did not return direct urls for ' + page_url)
print urls
for u in urls:
spl = u[1].split('|')
if len(spl) == 2:
directUrl, headersUrl = spl
else:
directUrl, headersUrl = spl[0], ''
headers = {}
if headersUrl:
for name in headersUrl.split('&'):
h, v = name.split('=')
h = str(h)
headers[h] = str(v)
print headers
if 'magnet:?' in directUrl: # check of magnet links not supported
continue
page = downloadpage(directUrl, headers=headers, only_headers=True, use_requests=True)
self.assertTrue(page.success, srv + ' scraper returned an invalid link')
self.assertLess(page.code, 400, srv + ' scraper returned a ' + str(page.code) + ' link')
contentType = page.headers['Content-Type']
self.assert_(contentType.startswith('video') or 'mpegurl' in contentType or 'octet-stream' in contentType or 'dash+xml' in contentType,
srv + ' scraper did not return valid url for link ' + page_url + '\nDirect url: ' + directUrl + '\nContent-Type: ' + contentType)
self.assertTrue(hasChannelConfig, 'channel ' + self.ch + ' has no channel config')
def test_newest(self):
for cat in dictNewsChannels:
for ch in dictNewsChannels[cat]:
if self.ch == ch[0]:
itemlist = self.module.newest(cat)
self.assertTrue(itemlist, 'channel ' + self.ch + ' returned no news for category ' + cat)
break
if __name__ == '__main__':
unittest.main()

0
tests/__init__.py Normal file


@@ -0,0 +1,113 @@
<settings version="2">
<setting id="addon_update_enabled">true</setting>
<setting id="addon_update_message">true</setting>
<setting id="addon_update_timer">1</setting>
<setting id="autoplay" default="true">false</setting>
<setting id="autostart" default="true">Off</setting>
<setting id="browser">true</setting>
<setting id="category" default="true"></setting>
<setting id="channel_language" default="true">all</setting>
<setting id="channels_check" default="true"></setting>
<setting id="channels_config" default="true"></setting>
<setting id="channels_onoff" default="true"></setting>
<setting id="checkdns">true</setting>
<setting id="checklinks" default="true">false</setting>
<setting id="checklinks_number">5</setting>
<setting id="custom_start" default="true">false</setting>
<setting id="custom_theme" default="true"></setting>
<setting id="debriders_config" default="true"></setting>
<setting id="debug">true</setting>
<setting id="default_action" default="true">0</setting>
<setting id="delete_key" default="true"></setting>
<setting id="download_adv" default="true"></setting>
<setting id="downloadenabled" default="true">false</setting>
<setting id="downloadlistpath" default="true">special://profile/addon_data/plugin.video.kod/downloads/list/</setting>
<setting id="downloadpath" default="true">special://profile/addon_data/plugin.video.kod/downloads/</setting>
<setting id="elementum_install" default="true"></setting>
<setting id="elementum_on_seed" default="true">false</setting>
<setting id="enable_channels_menu">true</setting>
<setting id="enable_custom_theme" default="true">false</setting>
<setting id="enable_fav_menu">true</setting>
<setting id="enable_library_menu">true</setting>
<setting id="enable_link_menu">true</setting>
<setting id="enable_news_menu">true</setting>
<setting id="enable_onair_menu">true</setting>
<setting id="enable_search_menu">true</setting>
<setting id="extended_info" default="true">false</setting>
<setting id="favorites_servers" default="true">false</setting>
<setting id="filter_servers" default="true">true</setting>
<setting id="folder_movies" default="true">Film</setting>
<setting id="folder_tvshows" default="true">Serie TV</setting>
<setting id="hide_servers" default="true">false</setting>
<setting id="hidepremium" default="true">false</setting>
<setting id="httptools_timeout">15</setting>
<setting id="icon_set" default="true">default</setting>
<setting id="infoplus" default="true">false</setting>
<setting id="infoplus_set" default="true">false</setting>
<setting id="install_trakt" default="true">true</setting>
<setting id="kod_menu">true</setting>
<setting id="language" default="true">ITA</setting>
<setting id="last_search">true</setting>
<setting id="library_move">true</setting>
<setting id="lista_activa" default="true">kodfavorites-default.json</setting>
<setting id="news_anime" default="true"></setting>
<setting id="news_documentaries" default="true"></setting>
<setting id="news_films" default="true"></setting>
<setting id="news_options" default="true"></setting>
<setting id="news_series" default="true"></setting>
<setting id="news_start" default="true">false</setting>
<setting id="next_ep">true</setting>
<setting id="next_ep_seconds">40</setting>
<setting id="next_ep_type" default="true">0</setting>
<setting id="only_channel_icons" default="true">false</setting>
<setting id="player_mode">0</setting>
<setting id="quality" default="true">0</setting>
<setting id="quality_priority" default="true">false</setting>
<setting id="quick_menu">true</setting>
<setting id="resolve_priority" default="true">0</setting>
<setting id="resolve_stop">true</setting>
<setting id="resolver_dns">true</setting>
<setting id="result_mode" default="true">0</setting>
<setting id="saved_searches_limit">10</setting>
<setting id="search_channels" default="true"></setting>
<setting id="server_speed">true</setting>
<setting id="servers_blacklist" default="true"></setting>
<setting id="servers_config" default="true"></setting>
<setting id="servers_favorites" default="true"></setting>
<setting id="settings_path" default="true">special://profile/addon_data/plugin.video.kod/settings_channels</setting>
<setting id="shortcut_key" default="true"></setting>
<setting id="show_once" default="true">true</setting>
<setting id="side_menu" default="true">false</setting>
<setting id="skin_name" default="true">skin.estuary</setting>
<setting id="start_page" default="true">false</setting>
<setting id="subtitle_name" default="true">[B]4[B] - [B]Tamako Market[B]</setting>
<setting id="thread_number" default="true">0</setting>
<setting id="tmdb_active" default="true">true</setting>
<setting id="tmdb_cache">true</setting>
<setting id="tmdb_cache_expire">4</setting>
<setting id="tmdb_clean_db_cache" default="true"></setting>
<setting id="tmdb_plus_info" default="true">false</setting>
<setting id="tmdb_threads">20</setting>
<setting id="touch_view" default="true">false</setting>
<setting id="trakt_sync" default="true">false</setting>
<setting id="tvdb_token" default="true">eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTQwMjkyNzYsImlkIjoicGVsaXNhbGFjYXJ0YSIsIm9yaWdfaWF0IjoxNTkzNDI0NDMyfQ.Op5vL2VNz_YJvnpX0fgz_3FVuOG9c1MT084jYdf-EcBuotgMoQ6-6TtKAAb_x8C-_AMEv1Io5DUpbjxIViSZEaI0wypfkWJEuSIHfTHqOaD80Pdf0hcK9pliSUTPQMPyHaacP-VaVubydMzq-n-5AORpYXuFE1w60KWuTZT4MbPA4F9SAENB4_q3VL-1heqnyrx7iEA5smQVO-LGCKMEgy9rnnCdtUWUP19WXqiaxCgng7Nmx8kICa5zV9I6ov_2XYzikJDF7txEoVdVVotQ1THUkAQfCmQOGb0pPIbFOpjXayNiPzmwj_yCLCakdO82cw0M7cJ2fOERfFKLmVn39w</setting>
<setting id="tvdb_token_date" default="true">2020-06-29</setting>
<setting id="video_thumbnail_type">1</setting>
<setting id="videolibrary_kodi">true</setting>
<setting id="videolibrary_max_quality" default="true">false</setting>
<setting id="videolibrarypath" default="true">special://profile/addon_data/plugin.video.kod/videolibrary/</setting>
<setting id="vidolibrary_delete" default="true"></setting>
<setting id="vidolibrary_export" default="true"></setting>
<setting id="vidolibrary_import" default="true"></setting>
<setting id="vidolibrary_preferences" default="true"></setting>
<setting id="vidolibrary_restore" default="true"></setting>
<setting id="vidolibrary_update" default="true"></setting>
<setting id="view_mode_addon">Muro di Icone, 52</setting>
<setting id="view_mode_channel">Lista Larga, 55</setting>
<setting id="view_mode_episode">Lista, 50</setting>
<setting id="view_mode_movie">Lista, 50</setting>
<setting id="view_mode_season">Predefinito , 0</setting>
<setting id="view_mode_server">Lista Larga, 55</setting>
<setting id="view_mode_tvshow">Lista, 50</setting>
<setting id="watched_setting">80</setting>
</settings>

297
tests/test_generic.py Normal file

@@ -0,0 +1,297 @@
# -*- coding: utf-8 -*-
# use export PYTHONPATH=addon source code
# and inside .kodi to run tests locally
# you can pass specific channel name using KOD_TST_CH environment var
# export PYTHONPATH=/home/user/.kodi/addons/plugin.video.kod
# export KOD_TST_CH=channel
# python tests/test_generic.py
import os
import sys
import unittest
import xbmc
if 'KOD_TST_CH' not in os.environ:
# custom paths
def add_on_info(*args, **kwargs):
return xbmc.AddonData(
kodi_home_path=os.path.join(os.getcwd(), 'tests', 'home'),
add_on_id='plugin.video.kod',
add_on_path=os.getcwd(),
kodi_profile_path=os.path.join(os.getcwd(), 'tests', 'home', 'userdata')
)
# override
xbmc.get_add_on_info_from_calling_script = add_on_info
import HtmlTestRunner
import parameterized
from platformcode import config, logger
config.set_setting('tmdb_active', False)
librerias = os.path.join(config.get_runtime_path(), 'lib')
sys.path.insert(0, librerias)
from core.support import typo
from core.item import Item
from core.httptools import downloadpage
from core import servertools
import channelselector
import re
validUrlRegex = re.compile(
r'^(?:http|ftp)s?://' # http:// or https://
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' # domain...
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)$', re.IGNORECASE)
chBlackList = ['url']
chNumRis = {
'altadefinizione01': {
'Film': 20
},
'altadefinizione01_link': {
'Film': 16,
'Serie TV': 16,
},
'altadefinizioneclick': {
'Film': 36,
'Serie TV': 12,
},
'casacinema': {
'Film': 10,
'Serie TV': 10,
},
'cineblog01': {
'Film': 12,
'Serie TV': 13
},
'cinemalibero': {
'Film': 20,
'Serie TV': 20,
},
'cinetecadibologna': {
'Film': 10
},
'eurostreaming': {
'Serie TV': 18
},
'Filmpertutti': {
'Film': 24,
'Serie TV': 24,
},
'hd4me': {
'Film': 10
},
'ilgeniodellostreaming': {
'Film': 30,
'Serie TV': 30
},
'italiaserie': {
'Serie TV': 20
},
'casacinemaInfo': {
'Film': 150
},
'netfreex': {
'Film': 30,
'Serie TV': 30
},
'piratestreaming': {
'Film': 24,
'Serie TV': 24
},
'polpotv': {
'Film': 12,
'Serie TV': 12
},
'streamingaltadefinizione': {
'Film': 30,
'Serie TV': 30
},
'seriehd': {
'Serie TV': 12
},
'serietvonline': {
'Film': 35,
'Serie TV': 35
},
'tantifilm': {
'Film': 20,
'Serie TV': 20
},
}
servers = []
channels = []
channel_list = channelselector.filterchannels("all") if 'KOD_TST_CH' not in os.environ else [Item(channel=os.environ['KOD_TST_CH'], action="mainlist")]
ret = []
for chItem in channel_list:
try:
ch = chItem.channel
if ch not in chBlackList:
module = __import__('channels.%s' % ch, fromlist=["channels.%s" % ch])
hasChannelConfig = False
mainlist = module.mainlist(Item())
menuItemlist = {}
serversFound = {}
for it in mainlist:
print 'preparing ' + ch + ' -> ' + it.title
if it.action == 'channel_config':
hasChannelConfig = True
continue
if it.action == 'search': # channel-specific
continue
itemlist = getattr(module, it.action)(it)
menuItemlist[it.title] = itemlist
# some sites might have no link inside, but if all results are without servers, there's something wrong
for resIt in itemlist:
if resIt.action == 'findvideos':
if hasattr(module, resIt.action):
serversFound[it.title] = getattr(module, resIt.action)(resIt)
else:
serversFound[it.title] = [resIt]
if serversFound[it.title]:
servers.extend(
{'name': srv.server.lower(), 'server': srv} for srv in serversFound[it.title] if srv.server)
break
channels.append(
{'ch': ch, 'hasChannelConfig': hasChannelConfig, 'mainlist': mainlist, 'menuItemlist': menuItemlist,
'serversFound': serversFound, 'module': module})
except:
import traceback
logger.error(traceback.format_exc())
from specials import news
dictNewsChannels, any_active = news.get_channels_list()
print channels
# only 1 server item for single server
serverNames = []
serversFinal = []
for s in servers:
if not s['name'] in serverNames:
serverNames.append(s['name'])
serversFinal.append(s)
@parameterized.parameterized_class(channels)
class GenericChannelTest(unittest.TestCase):
def test_mainlist(self):
self.assertTrue(self.mainlist, 'channel ' + self.ch + ' has no mainlist')
self.assertTrue(self.hasChannelConfig, 'channel ' + self.ch + ' has no channel config')
def test_newest(self):
for cat in dictNewsChannels:
for ch in dictNewsChannels[cat]:
if self.ch == ch[0]:
itemlist = self.module.newest(cat)
self.assertTrue(itemlist, 'channel ' + self.ch + ' returned no news for category ' + cat)
break
@parameterized.parameterized_class(
[{'ch': ch['ch'], 'title': title, 'itemlist': itemlist, 'serversFound': ch['serversFound'][title] if title in ch['serversFound'] else True, 'module': ch['module']} for ch in channels for
title, itemlist in ch['menuItemlist'].items()])
class GenericChannelMenuItemTest(unittest.TestCase):
def test_menu(self):
print 'testing ' + self.ch + ' --> ' + self.title
self.assertTrue(self.module.host, 'channel ' + self.ch + ' has not a valid hostname')
self.assertTrue(self.itemlist, 'channel ' + self.ch + ' -> ' + self.title + ' is empty')
self.assertTrue(self.serversFound,
'channel ' + self.ch + ' -> ' + self.title + ' has no servers on all results')
if self.ch in chNumRis: # i know how much results should be
for content in chNumRis[self.ch]:
if content in self.title:
risNum = len([i for i in self.itemlist if not i.nextPage]) # not count nextpage
self.assertEqual(chNumRis[self.ch][content], risNum,
'channel ' + self.ch + ' -> ' + self.title + ' returned wrong number of results<br>'
+ str(risNum) + ' but should be ' + str(chNumRis[self.ch][content]) + '<br>' +
'<br>'.join([i.title for i in self.itemlist if not i.nextPage]))
break
for resIt in self.itemlist:
print resIt.title + ' -> ' + resIt.url
self.assertLess(len(resIt.fulltitle), 110,
'channel ' + self.ch + ' -> ' + self.title + ' might contain wrong titles<br>' + resIt.fulltitle)
if resIt.url:
self.assertIsInstance(resIt.url, str,
'channel ' + self.ch + ' -> ' + self.title + ' -> ' + resIt.title + ' contain non-string url')
self.assertIsNotNone(re.match(validUrlRegex, resIt.url),
'channel ' + self.ch + ' -> ' + self.title + ' -> ' + resIt.title + ' might contain wrong url<br>' + resIt.url)
if 'year' in resIt.infoLabels and resIt.infoLabels['year']:
msgYear = 'channel ' + self.ch + ' -> ' + self.title + ' might contain wrong infolabels year<br>' + str(
resIt.infoLabels['year'])
self.assert_(type(resIt.infoLabels['year']) is int or resIt.infoLabels['year'].isdigit(),
msgYear)
self.assert_(int(resIt.infoLabels['year']) > 1900 and int(resIt.infoLabels['year']) < 2100,
msgYear)
if resIt.title == typo(config.get_localized_string(30992), 'color kod bold'): # next page
nextPageItemlist = getattr(self.module, resIt.action)(resIt)
self.assertTrue(nextPageItemlist,
'channel ' + self.ch + ' -> ' + self.title + ' has nextpage not working')
print '<br>test passed'
@parameterized.parameterized_class(serversFinal)
class GenericServerTest(unittest.TestCase):
def test_get_video_url(self):
module = __import__('servers.%s' % self.name, fromlist=["servers.%s" % self.name])
page_url = self.server.url
print 'testing ' + page_url
self.assert_(hasattr(module, 'test_video_exists'), self.name + ' has no test_video_exists')
try:
if module.test_video_exists(page_url)[0]:
urls = module.get_video_url(page_url)
server_parameters = servertools.get_server_parameters(self.name)
self.assertTrue(urls or server_parameters.get("premium"),
self.name + ' scraper did not return direct urls for ' + page_url)
print urls
for u in urls:
spl = u[1].split('|')
if len(spl) == 2:
directUrl, headersUrl = spl
else:
directUrl, headersUrl = spl[0], ''
headers = {}
if headersUrl:
for name in headersUrl.split('&'):
h, v = name.split('=')
h = str(h)
headers[h] = str(v)
print headers
if 'magnet:?' in directUrl: # check of magnet links not supported
continue
page = downloadpage(directUrl, headers=headers, only_headers=True, use_requests=True)
self.assertTrue(page.success, self.name + ' scraper returned an invalid link')
self.assertLess(page.code, 400, self.name + ' scraper returned a ' + str(page.code) + ' link')
contentType = page.headers['Content-Type']
self.assert_(contentType.startswith(
'video') or 'mpegurl' in contentType or 'octet-stream' in contentType or 'dash+xml' in contentType,
self.name + ' scraper did not return valid url for link ' + page_url + '<br>Direct url: ' + directUrl + '<br>Content-Type: ' + contentType)
except:
import traceback
logger.error(traceback.format_exc())
if __name__ == '__main__':
if 'KOD_TST_CH' not in os.environ:
unittest.main(testRunner=HtmlTestRunner.HTMLTestRunner(report_name='report', add_timestamp=False, combine_reports=True,
report_title='KoD Test Suite'), exit=False)
else:
unittest.main()
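The whole suite hinges on parameterized.parameterized_class: each dict in the list passed to the decorator becomes a separate copy of the test class with those keys set as attributes, so every channel, menu entry and server shows up as its own case in the HTML report. A minimal standalone example of the mechanism (the channel names here are made up, not real KoD channels):

import unittest
import parameterized

@parameterized.parameterized_class([
    {'ch': 'examplechannel', 'expected': 2},   # hypothetical channels
    {'ch': 'anotherchannel', 'expected': 3},
])
class ChannelSmokeTest(unittest.TestCase):
    def test_has_results(self):
        # self.ch and self.expected come from the dict this class copy was built from
        fake_results = ['item'] * self.expected
        self.assertEqual(len(fake_results), self.expected,
                         'channel %s returned the wrong number of results' % self.ch)

if __name__ == '__main__':
    unittest.main()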