Alfa
2017-07-28 17:14:31 -04:00
commit 60e4685ce8
1039 changed files with 162852 additions and 0 deletions

4
.gitignore vendored Normal file

@@ -0,0 +1,4 @@
*.pyo
*.pyc
.DS_Store
.idea/

674
LICENSE Executable file

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

13
README.md Executable file

@@ -0,0 +1,13 @@
# Alfa addon
### A non-profit project with educational aims, based on the open-source code of pelisalacarta-ce, that lets Kodi (or another compatible platform) be used as a "browser" to watch videos hosted on web pages.
### This addon is completely free; selling it on its own, or as part of software embedded in any device, is therefore prohibited. It is strictly for educational and personal use.
## Contributing
If you want to contribute to the project you are welcome to do so, but please keep these guidelines in mind:
- Pull Requests are expected to follow style rules based on Python's PEP 8. This can be done from IDEs that
integrate it, such as PyDev, PyCharm or Ninja-IDE, or through extensions that can be added to editors such
as Sublime or Atom. A small illustrative sketch follows this list.
- Use English names for classes, methods and variables, to make the code easier to understand for non-Spanish speakers.
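A purely illustrative sketch of both rules (the function and names below are hypothetical, not part of the addon): PEP 8 spacing plus English snake_case naming.

```python
# -*- coding: utf-8 -*-

def get_movie_titles(page_html):
    """Return the movie titles found in a page's HTML (illustrative only)."""
    titles = []
    for line in page_html.splitlines():
        if 'title="' in line:
            titles.append(line.split('title="')[1].split('"')[0])
    return titles
```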

1
kodi/__init__.py Executable file

@@ -0,0 +1 @@
# -*- coding: utf-8 -*-

27
kodi/addon.xml Executable file

@@ -0,0 +1,27 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<addon id="plugin.video.alfa"
name="Alfa"
version="0.0.1"
provider-name="unknown">
<requires>
<import addon="xbmc.python" version="2.1.0"/>
<import addon="script.module.libtorrent" optional="true"/>
</requires>
<extension point="xbmc.python.pluginsource" library="default.py">
<provides>video</provides>
</extension>
<extension point="xbmc.addon.metadata">
<summary lang="es">Sumario en Español</summary>
<description lang="es">Descripción en Español</description>
<summary lang="en">English summary</summary>
<description lang="en">English description</description>
<platform>all</platform>
<license>GNU GPL v3</license>
<forum>foro</forum>
<website>web</website>
<email>my@email.com</email>
<source>source</source>
</extension>
<extension point="xbmc.service" library="videolibrary_service.py" start="login|startup">
</extension>
</addon>
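addon.xml points Kodi at default.py as the pluginsource entry point, but that file is not part of this excerpt. Below is a minimal sketch of what such an entry point looks like, using only standard Kodi Python APIs; it is hypothetical and not the addon's actual default.py.

# -*- coding: utf-8 -*-
# Hypothetical minimal entry point for the pluginsource declared in addon.xml.
import sys
import xbmcplugin

handle = int(sys.argv[1])   # plugin handle that Kodi passes on invocation
params = sys.argv[2]        # query string describing the requested action
# A real default.py would parse `params`, dispatch to a channel action such as
# mainlist() or findvideos(), list the resulting items, and then close:
xbmcplugin.endOfDirectory(handle)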

12
kodi/channels/__init__.py Executable file

@@ -0,0 +1,12 @@
# -*- coding: utf-8 -*-
import os
import sys
# Appends the main plugin dir to the PYTHONPATH if an internal package cannot be imported.
# Examples: In Plex Media Server all modules are under "Code.*" package, and in Enigma2 under "Plugins.Extensions.*"
try:
# from core import logger
import core
except ImportError:
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

46
kodi/channels/allcalidad.json Executable file

@@ -0,0 +1,46 @@
{
"id": "allcalidad",
"name": "Allcalidad",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "https://s22.postimg.org/irnlwuizh/allcalidad1.png",
"bannermenu": "https://s22.postimg.org/9y1athlep/allcalidad2.png",
"version": 1,
"changes": [
{
"date": "14/07/2017",
"description": "Primera version"
}
],
"categories": [
"movie",
"latino"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": "true",
"enabled": "true",
"visible": "true"
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Peliculas",
"default": "true",
"enabled": "true",
"visible": "true"
},
{
"id": "include_in_newest_infantiles",
"type": "bool",
"label": "Incluir en Novedades - Infantiles",
"default": "true",
"enabled": "true",
"visible": "true"
}
]
}
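The "settings" block above declares per-channel toggles. A short sketch of how a channel would read one of them, using the same config.get_setting(name, channel) call that appears in the channel modules below (the variable name is illustrative):

from core import config

# Boolean toggle declared in allcalidad.json (illustrative usage).
include_in_search = config.get_setting("include_in_global_search", "allcalidad")
if include_in_search:
    pass  # the global search would query this channel here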

149
kodi/channels/allcalidad.py Executable file

@@ -0,0 +1,149 @@
# -*- coding: utf-8 -*-
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
host = "http://allcalidad.com/"
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel = item.channel, title = "Novedades", action = "peliculas", url = host))
itemlist.append(Item(channel = item.channel, title = "Por género", action = "generos_years", url = host, extra = "Genero" ))
itemlist.append(Item(channel = item.channel, title = "Por año", action = "generos_years", url = host, extra = ">Año<"))
itemlist.append(Item(channel = item.channel, title = "Favoritas", action = "peliculas", url = host + "/favorites" ))
itemlist.append(Item(channel = item.channel, title = ""))
itemlist.append(Item(channel = item.channel, title = "Buscar", action = "search", url = host + "?s="))
return itemlist
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = host
elif categoria == 'infantiles':
item.url = host + 'category/animacion/'
itemlist = peliculas(item)
if "Pagina" in itemlist[-1].title:
itemlist.pop()
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = item.url + texto
item.extra = "busca"
if texto != '':
return peliculas(item)
else:
return []
def generos_years(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '(?s)%s(.*?)</ul></div>' %item.extra
bloque = scrapertools.find_single_match(data, patron)
patron = 'href="([^"]+)'
patron += '">([^<]+)'
matches = scrapertools.find_multiple_matches(bloque, patron)
for url, titulo in matches:
itemlist.append(Item(channel = item.channel,
action = "peliculas",
title = titulo,
url = url
))
return itemlist
def peliculas(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '(?s)short_overlay.*?<a href="([^"]+)'
patron += '.*?img.*?src="([^"]+)'
patron += '.*?title="(.*?)"'
patron += '.*?(Idioma.*?)post-ratings'
matches = scrapertools.find_multiple_matches(data, patron)
for url, thumbnail, titulo, varios in matches:
idioma = scrapertools.find_single_match(varios, '(?s)Idioma.*?kinopoisk">([^<]+)')
year = scrapertools.find_single_match(varios, 'Año.*?kinopoisk">([^<]+)')
year = scrapertools.find_single_match(year, '[0-9]{4}')
mtitulo = titulo + " (" + idioma + ") (" + year + ")"
new_item = Item(channel = item.channel,
action = "findvideos",
title = mtitulo,
fulltitle = titulo,
thumbnail = thumbnail,
url = url,
contentTitle = titulo,
contentType="movie"
)
if year:
new_item.infoLabels['year'] = int(year)
itemlist.append(new_item)
url_pagina = scrapertools.find_single_match(data, 'next" href="([^"]+)')
if url_pagina != "":
pagina = "Pagina: " + scrapertools.find_single_match(url_pagina, "page/([0-9]+)")
itemlist.append(Item(channel = item.channel, action = "peliculas", title = pagina, url = url_pagina))
return itemlist
def findvideos(item):
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '(?s)fmi(.*?)thead'
bloque = scrapertools.find_single_match(data, patron)
match = scrapertools.find_multiple_matches(bloque, '(?is)iframe .*?src="([^"]+)')
for url in match:
server = servertools.get_server_from_url(url)
titulo = "Ver en: " + server
if "youtube" in server:
if "embed" in url:
url = "http://www.youtube.com/watch?v=" + scrapertools.find_single_match(url, 'embed/(.*)')
titulo = "[COLOR = yellow]Ver trailer: " + server + "[/COLOR]"
elif "directo" in server:
continue
itemlist.append(
Item(channel = item.channel,
action = "play",
title = titulo,
fulltitle = item.fulltitle,
thumbnail = item.thumbnail,
server = server,
url = url
))
if itemlist:
itemlist.append(Item(channel = item.channel))
itemlist.append(item.clone(channel="trailertools", title="Buscar Tráiler", action="buscartrailer", context="",
text_color="magenta"))
# Option: "Add this movie to the XBMC library"
if item.extra != "library":
if config.get_library_support():
itemlist.append(Item(channel=item.channel, title="Añadir a la biblioteca", text_color="green",
filtro=True, action="add_pelicula_to_library", url=item.url, thumbnail = item.thumbnail,
infoLabels={'title': item.fulltitle}, fulltitle=item.fulltitle,
extra="library"))
return itemlist
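The channel's scraping is built entirely on scrapertools.find_single_match and find_multiple_matches. Here is a sketch of what those helpers presumably do, written with the standard re module; this is an assumption about core.scrapertools, not its actual source.

import re

def find_single_match(data, patron):
    # First capture group of the first match, or "" when nothing matches.
    match = re.search(patron, data)
    return match.group(1) if match else ""

def find_multiple_matches(data, patron):
    # Every match; tuples when the pattern defines several capture groups.
    return re.findall(patron, data)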

77
kodi/channels/allpeliculas.json Executable file

@@ -0,0 +1,77 @@
{
"id": "allpeliculas",
"name": "Allpeliculas",
"language": "es",
"active": true,
"adult": false,
"version": 1,
"changes": [
{
"date": "24/06/2017",
"description": "Url mal escritas"
},
{
"date": "10/06/2017",
"description": "Reparado búsqueda de videos"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "16/02/2017",
"description": "Añadidas nuevas opciones y servidores"
},
{
"date": "19/03/2016",
"description": "Añadido soporte para la videoteca y reparada busqueda global."
}
],
"thumbnail": "http://i.imgur.com/aWCDWtn.png",
"banner": "allpeliculas.png",
"categories": [
"movie",
"latino",
"vos",
"tvshow"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Películas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": [
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
}
]
}

485
kodi/channels/allpeliculas.py Executable file

@@ -0,0 +1,485 @@
# -*- coding: utf-8 -*-
import string
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
__modo_grafico__ = config.get_setting('modo_grafico', "allpeliculas")
__perfil__ = int(config.get_setting('perfil', "allpeliculas"))
# Set the color profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']]
color1, color2, color3 = perfil[__perfil__]
IDIOMAS = {"Castellano": "CAST", "Latino": "LAT", "Subtitulado": "VOSE", "Ingles": "VO"}
SERVERS = {"26": "powvideo", "45": "okru", "75": "openload", "12": "netutv", "65": "thevideos",
"67": "spruto", "71": "stormo", "73": "idowatch", "48": "okru", "55": "openload",
"20": "nowvideo", "84": "fastplay", "96": "raptu", "94": "tusfiles"}
def mainlist(item):
logger.info()
itemlist = []
item.text_color = color1
itemlist.append(item.clone(title="Películas", action="lista", fanart="http://i.imgur.com/c3HS8kj.png",
url="http://allpeliculas.co/Movies/fullView/1/0/&ajax=1"))
itemlist.append(item.clone(title="Series", action="lista", fanart="http://i.imgur.com/9loVksV.png", extra="tv",
url="http://allpeliculas.co/Movies/fullView/1/86/?ajax=1&withoutFilter=1", ))
itemlist.append(item.clone(title="Géneros", action="subindice", fanart="http://i.imgur.com/ymazCWq.jpg"))
itemlist.append(item.clone(title="Índices", action="indices", fanart="http://i.imgur.com/c3HS8kj.png"))
itemlist.append(item.clone(title="", action=""))
itemlist.append(item.clone(title="Buscar...", action="search"))
itemlist.append(item.clone(action="configuracion", title="Configurar canal...", text_color="gold", folder=False))
return itemlist
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def search(item, texto):
logger.info()
if texto != "":
texto = texto.replace(" ", "+")
item.url = "http://allpeliculas.co/Search/advancedSearch?searchType=movie&movieName=" + texto + "&ajax=1"
try:
return busqueda(item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == "peliculas":
item.url = "http://allpeliculas.co/Movies/fullView/1/0/&ajax=1"
item.action = "lista"
itemlist = lista(item)
if itemlist[-1].action == "lista":
itemlist.pop()
# Catch the exception so the "newest" section is not interrupted if one channel fails
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def busqueda(item):
logger.info()
itemlist = []
item.infoLabels = {}
item.text_color = color2
data = httptools.downloadpage(item.url).data
data = data.replace("\n", "").replace("\t", "")
data = scrapertools.decodeHtmlentities(data)
patron = '<img class="poster" src="([^"]+)".*?<div class="vote-div-count".*?>(.*?)/.*?' \
'<a class="movie-list-link" href="([^"]+)" title="([^"]+)".*?' \
'Year:</b> (.*?) </p>.*?Género:</b> (.*?)</p>'
matches = scrapertools.find_multiple_matches(data, patron)
for thumbnail, vote, url, title, year, genre in matches:
url = "http://allpeliculas.co" + url.replace("#", "") + "&ajax=1"
thumbnail = thumbnail.replace("/105/", "/400/").replace("/141/", "/600/").replace(" ", "%20")
titulo = title + " (" + year + ")"
item.infoLabels['year'] = year
item.infoLabels['genre'] = genre
item.infoLabels['rating'] = vote
if "Series" not in genre:
itemlist.append(item.clone(action="findvideos", title=titulo, fulltitle=title, url=url, thumbnail=thumbnail,
context=["buscar_trailer"], contentTitle=title, contentType="movie"))
else:
itemlist.append(item.clone(action="temporadas", title=titulo, fulltitle=title, url=url, thumbnail=thumbnail,
context=["buscar_trailer"], contentTitle=title, contentType="tvshow"))
# Pagination
next_page = scrapertools.find_single_match(data, 'class="pagination-active".*?href="([^"]+)"')
if next_page != "":
url = next_page.replace("#", "") + "&ajax=1"
itemlist.append(item.clone(action="lista", title=">> Siguiente", url=url, text_color=color3))
return itemlist
def indices(item):
logger.info()
itemlist = []
item.text_color = color1
itemlist.append(item.clone(title="Alfabético", action="subindice"))
itemlist.append(item.clone(title="Por idioma", action="subindice"))
itemlist.append(item.clone(title="Por valoración", action="lista",
url="http://allpeliculas.co/Movies/fullView/1/0/rating:imdb|date:1900-3000|"
"alphabet:all|?ajax=1&withoutFilter=1"))
itemlist.append(item.clone(title="Por año", action="subindice"))
itemlist.append(item.clone(title="Por calidad", action="subindice"))
return itemlist
def lista(item):
logger.info()
itemlist = []
item.infoLabels = {}
item.text_color = color2
data = httptools.downloadpage(item.url).data
data = data.replace("\n", "").replace("\t", "")
data = scrapertools.decodeHtmlentities(data)
bloque = scrapertools.find_single_match(data, '<div class="movies-block-main"(.*?)<div class="movies-'
'long-pagination"')
patron = '<div class="thumb"><img src="([^"]+)".*?<a href="([^"]+)".*?' \
'(?:class="n-movie-trailer">([^<]+)<\/span>|<div class="imdb-votes">)' \
'.*?<div class="imdb"><span>(.*?)</span>.*?<span>Year.*?">(.*?)</a>.*?<span>' \
'(?:Género|Genre).*?<span>(.*?)</span>.*?<span>Language.*?<span>(.*?)</span>.*?' \
'<div class="info-full-text".*?>(.*?)<.*?<div class="views">(.*?)<.*?' \
'<div class="movie-block-title".*?>(.*?)<'
if bloque == "":
bloque = data[:]
matches = scrapertools.find_multiple_matches(bloque, patron)
for thumbnail, url, trailer, vote, year, genre, idioma, sinopsis, calidad, title in matches:
url = url.replace("#", "") + "&ajax=1"
thumbnail = thumbnail.replace("/157/", "/400/").replace("/236/", "/600/").replace(" ", "%20")
idioma = idioma.replace(" ", "").split(",")
idioma.sort()
titleidioma = "[" + "/".join(idioma) + "]"
titulo = title + " " + titleidioma + " [" + calidad + "]"
item.infoLabels['plot'] = sinopsis
item.infoLabels['year'] = year
item.infoLabels['genre'] = genre
item.infoLabels['rating'] = vote
item.infoLabels['trailer'] = trailer.replace("youtu.be/", "http://www.youtube.com/watch?v=")
if item.extra != "tv" or "Series" not in genre:
itemlist.append(item.clone(action="findvideos", title=titulo, fulltitle=title, url=url, thumbnail=thumbnail,
context=["buscar_trailer"], contentTitle=title, contentType="movie"))
else:
itemlist.append(item.clone(action="temporadas", title=titulo, fulltitle=title, url=url, thumbnail=thumbnail,
context=["buscar_trailer"], contentTitle=title, show=title,
contentType="tvshow"))
try:
from core import tmdb
# Fetch the basic data for all the movies using multiple threads
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
except:
pass
# Pagination
next_page = scrapertools.find_single_match(data, 'class="pagination-active".*?href="([^"]+)"')
if next_page != "":
url = next_page.replace("#", "") + "&ajax=1"
itemlist.append(item.clone(action="lista", title=">> Siguiente", url=url, text_color=color3))
return itemlist
def subindice(item):
logger.info()
itemlist = []
url_base = "http://allpeliculas.co/Movies/fullView/1/0/date:1900-3000|alphabet:all|?ajax=1&withoutFilter=1"
indice_genero, indice_alfa, indice_idioma, indice_year, indice_calidad = dict_indices()
if "Géneros" in item.title:
for key, value in indice_genero.items():
url = url_base.replace("/0/", "/" + key + "/")
itemlist.append(item.clone(action="lista", title=value, url=url))
itemlist.sort(key=lambda item: item.title)
elif "Alfabético" in item.title:
for i in range(len(indice_alfa)):
url = url_base.replace(":all", ":" + indice_alfa[i])
itemlist.append(item.clone(action="lista", title=indice_alfa[i], url=url))
elif "Por idioma" in item.title:
for key, value in indice_idioma.items():
url = url_base.replace("3000|", "3000|language:" + key)
itemlist.append(item.clone(action="lista", title=value, url=url))
itemlist.sort(key=lambda item: item.title)
elif "Por año" in item.title:
for i in range(len(indice_year)):
year = indice_year[i]
url = url_base.replace("1900-3000", year + "-" + year)
itemlist.append(item.clone(action="lista", title=year, url=url))
elif "Por calidad" in item.title:
for key, value in indice_calidad.items():
url = "http://allpeliculas.co/Search/advancedSearch?searchType=movie&movieName=&movieDirector=&movieGenre" \
"=&movieActor=&movieYear=&language=&movieTypeId=" + key + "&ajax=1"
itemlist.append(item.clone(action="busqueda", title=value, url=url))
itemlist.sort(key=lambda item: item.title)
return itemlist
def findvideos(item):
logger.info()
itemlist = []
item.text_color = color3
# Fill the language and quality dictionaries
idiomas_videos, calidad_videos = dict_videos()
data = httptools.downloadpage(item.url).data
data = data.replace("\n", "").replace("\t", "")
data = scrapertools.decodeHtmlentities(data)
if item.extra != "library":
try:
from core import tmdb
tmdb.set_infoLabels(item, __modo_grafico__)
except:
pass
# Online links
patron = '<span class="movie-online-list" id_movies_types="([^"]+)" id_movies_servers="([^"]+)".*?id_lang=' \
'"([^"]+)".*?online-link="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for calidad, servidor_num, language, url in matches:
if servidor_num == '94' and not 'stormo.tv' in url:
url = "http://tusfiles.org/?%s" % url
if 'vimeo' in url:
url += "|" + item.url
if "filescdn" in url and url.endswith("htm"):
url += "l"
idioma = IDIOMAS.get(idiomas_videos.get(language))
titulo = "%s [" + idioma + "] [" + calidad_videos.get(calidad) + "]"
itemlist.append(item.clone(action="play", title=titulo, url=url, extra=idioma))
# Download link
patron = '<span class="movie-downloadlink-list" id_movies_types="([^"]+)" id_movies_servers="([^"]+)".*?id_lang=' \
'"([^"]+)".*?online-link="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for calidad, servidor_num, language, url in matches:
idioma = IDIOMAS.get(idiomas_videos.get(language))
titulo = "[%s] [" + idioma + "] [" + calidad_videos.get(calidad) + "]"
itemlist.append(item.clone(action="play", title=titulo, url=url, extra=idioma))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
itemlist.sort(key=lambda item: (item.extra, item.server))
if itemlist:
if not "trailer" in item.infoLabels:
trailer_url = scrapertools.find_single_match(data, 'class="n-movie-trailer">([^<]+)</span>')
item.infoLabels['trailer'] = trailer_url.replace("youtu.be/", "http://www.youtube.com/watch?v=")
itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Buscar Tráiler",
text_color="magenta", context=""))
if item.extra != "library":
if config.get_videolibrary_support():
itemlist.append(Item(channel=item.channel, title="Añadir película a la videoteca",
action="add_pelicula_to_library", url=item.url, text_color="green",
infoLabels={'title': item.fulltitle}, fulltitle=item.fulltitle,
extra="library"))
return itemlist
def temporadas(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
try:
from core import tmdb
tmdb.set_infoLabels_item(item, __modo_grafico__)
except:
pass
matches = scrapertools.find_multiple_matches(data, '<a class="movie-season" data-id="([^"]+)"')
matches = list(set(matches))
for season in matches:
item.infoLabels['season'] = season
itemlist.append(item.clone(action="episodios", title="Temporada " + season, context=["buscar_trailer"],
contentType="season"))
itemlist.sort(key=lambda item: item.title)
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
except:
pass
if not "trailer" in item.infoLabels:
trailer_url = scrapertools.find_single_match(data, 'class="n-movie-trailer">([^<]+)</span>')
item.infoLabels['trailer'] = trailer_url.replace("youtu.be/", "http://www.youtube.com/watch?v=")
itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Buscar Tráiler",
text_color="magenta", context=""))
return itemlist
def episodios(item):
logger.info()
itemlist = []
# Fill the language and quality dictionaries
idiomas_videos, calidad_videos = dict_videos()
data = httptools.downloadpage(item.url).data
data = data.replace("\n", "").replace("\t", "")
data = scrapertools.decodeHtmlentities(data)
patron = '<li><a class="movie-episode"[^>]+season="' + str(item.infoLabels['season']) + '"[^>]+>([^<]+)</a></li>'
matches = scrapertools.find_multiple_matches(data, patron)
capitulos = []
for title in matches:
if not title in capitulos:
episode = int(title.split(" ")[1])
capitulos.append(title)
itemlist.append(
item.clone(action="findvideostv", title=title, contentEpisodeNumber=episode, contentType="episode"))
itemlist.sort(key=lambda item: item.contentEpisodeNumber)
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
except:
pass
for item in itemlist:
if item.infoLabels["episodio_titulo"]:
item.title = "%dx%02d: %s" % (
item.contentSeason, item.contentEpisodeNumber, item.infoLabels["episodio_titulo"])
else:
item.title = "%dx%02d: %s" % (item.contentSeason, item.contentEpisodeNumber, item.title)
return itemlist
def findvideostv(item):
logger.info()
itemlist = []
# Fill the language and quality dictionaries
idiomas_videos, calidad_videos = dict_videos()
data = httptools.downloadpage(item.url).data
data = data.replace("\n", "").replace("\t", "")
data = scrapertools.decodeHtmlentities(data)
patron = '<span class="movie-online-list" id_movies_types="([^"]+)" id_movies_servers="([^"]+)".*?episode="%s' \
'" season="%s" id_lang="([^"]+)".*?online-link="([^"]+)"' \
% (str(item.infoLabels['episode']), str(item.infoLabels['season']))
matches = scrapertools.find_multiple_matches(data, patron)
for quality, servidor_num, language, url in matches:
if servidor_num == '94' and not 'stormo.tv' in url:
url = "http://tusfiles.org/?%s" % url
if 'vimeo' in url:
url += "|" + item.url
if "filescdn" in url and url.endswith("htm"):
url += "l"
idioma = IDIOMAS.get(idiomas_videos.get(language))
titulo = "%s [" + idioma + "] (" + calidad_videos.get(quality) + ")"
itemlist.append(item.clone(action="play", title=titulo, url=url, contentType="episode", server=server))
# Download links
patron = '<span class="movie-downloadlink-list" id_movies_types="([^"]+)" id_movies_servers="([^"]+)".*?episode="%s' \
'" season="%s" id_lang="([^"]+)".*?online-link="([^"]+)"' \
% (str(item.infoLabels['episode']), str(item.infoLabels['season']))
# patron = '<span class="movie-downloadlink-list" id_movies_types="([^"]+)" id_movies_servers="([^"]+)".*?episode="'+str(item.infoLabels['episode']) +'" season="'+str(item.infoLabels['season']) + '" id_lang="([^"]+)".*?online-link="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for quality, servidor_num, language, url in matches:
idioma = IDIOMAS.get(idiomas_videos.get(language))
titulo = "%s [" + idioma + "] (" + calidad_videos.get(quality) + ")"
itemlist.append(item.clone(action="play", title=titulo, url=url, contentType="episode", server=server))
itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize())
itemlist.sort(key=lambda item: (int(item.infoLabels['episode']), item.title))
try:
from core import tmdb
tmdb.set_infoLabels(itemlist, __modo_grafico__)
except:
pass
return itemlist
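# Scrapes the advanced-search form to build id -> name tables for languages and video qualities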
def dict_videos():
idiomas_videos = {}
calidad_videos = {}
data = httptools.downloadpage("http://allpeliculas.co/Search/advancedSearch&ajax=1").data
data = data.replace("\n", "").replace("\t", "")
bloque_idioma = scrapertools.find_single_match(data,
'<select name="language".*?<option value="" selected(.*?)</select>')
matches = scrapertools.find_multiple_matches(bloque_idioma, '<option value="([^"]+)" >(.*?)</option>')
for key1, key2 in matches:
idiomas_videos[key1] = unicode(key2, "utf8").capitalize().encode("utf8")
bloque_calidad = scrapertools.find_single_match(data, '<select name="movieTypeId".*?<option value="" selected(.*?)'
'</select>')
matches = scrapertools.find_multiple_matches(bloque_calidad, '<option value="([^"]+)" >(.*?)</option>')
for key1, key2 in matches:
calidad_videos[key1] = key2
return idiomas_videos, calidad_videos
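# Scrapes the same form to build the genre, alphabetical, language, year and quality indexes for the browse menus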
def dict_indices():
indice_genero = {}
indice_alfa = list(string.ascii_uppercase)
indice_alfa.append("0-9")
indice_idioma = {}
indice_year = []
indice_calidad = {}
data = httptools.downloadpage("http://allpeliculas.co/Search/advancedSearch&ajax=1").data
data = data.replace("\n", "").replace("\t", "")
data = scrapertools.decodeHtmlentities(data)
bloque_genero = scrapertools.find_single_match(data, '<select name="movieGenre".*?<option value="" selected(.*?)'
'</select>')
matches = scrapertools.find_multiple_matches(bloque_genero, '<option value="([^"]+)" >(.*?)</option>')
for key1, key2 in matches:
if key2 != "Series":
if key2 == "Mystery":
key2 = "Misterio"
indice_genero[key1] = key2
bloque_year = scrapertools.find_single_match(data, '<select name="movieYear".*?<option value="" selected(.*?)'
'</select>')
matches = scrapertools.find_multiple_matches(bloque_year, '<option value="([^"]+)"')
for key1 in matches:
indice_year.append(key1)
bloque_idioma = scrapertools.find_single_match(data, '<select name="language".*?<option value="" selected(.*?)'
'</select>')
matches = scrapertools.find_multiple_matches(bloque_idioma, '<option value="([^"]+)" >(.*?)</option>')
for key1, key2 in matches:
if key2 == "INGLES":
key2 = "Versión original"
indice_idioma[key1] = unicode(key2, "utf8").capitalize().encode("utf8")
bloque_calidad = scrapertools.find_single_match(data, '<select name="movieTypeId".*?<option value="" selected(.*?)'
'</select>')
matches = scrapertools.find_multiple_matches(bloque_calidad, '<option value="([^"]+)" >(.*?)</option>')
for key1, key2 in matches:
indice_calidad[key1] = key2
return indice_genero, indice_alfa, indice_idioma, indice_year, indice_calidad

37
kodi/channels/alltorrent.json Executable file
View File

@@ -0,0 +1,37 @@
{
"id": "alltorrent",
"name": "Alltorrent",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://imgur.com/sLaXHvp.png",
"version": 1,
"changes": [
{
"date": "26/04/2017",
"description": "Release"
}
],
"categories": [
"torrent",
"movie"
],
"settings": [
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

395
kodi/channels/alltorrent.py Executable file
View File

@@ -0,0 +1,395 @@
# -*- coding: utf-8 -*-
import os
import re
import unicodedata
from threading import Thread
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
__modo_grafico__ = config.get_setting('modo_grafico', "alltorrent")
# For searching on Bing while avoiding bans
def browser(url):
import mechanize
# Use mechanize's Browser to get around problems with the Bing search
br = mechanize.Browser()
# Browser options
br.set_handle_equiv(False)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(False)
br.set_handle_robots(False)
# Follows refresh 0 but doesn't hang on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
# Want debugging messages?
# br.set_debug_http(True)
# br.set_debug_redirects(True)
# br.set_debug_responses(True)
# User-Agent (this is cheating, ok?)
# br.addheaders = [('User-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/600.7.12 (KHTML, like Gecko) Version/7.1.7 Safari/537.85.16')]
# br.addheaders =[('Cookie','SRCHD=AF=QBRE; domain=.bing.com; expires=25 de febrero de 2018 13:00:28 GMT+1; MUIDB=3B942052D204686335322894D3086911; domain=www.bing.com;expires=24 de febrero de 2018 13:00:28 GMT+1')]
r = br.open(url)
response = r.read()
logger.debug(response)
if "img,divreturn" in response:
r = br.open("http://ssl-proxy.my-addr.org/myaddrproxy.php/" + url)
print "prooooxy"
response = r.read()
return response
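# TMDb / fanart.tv API keys (fanartv() below uses its own hardcoded key)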
api_key = "2e2160006592024ba87ccdf78c28f49f"
api_fankey = "dffe90fba4d02c199ae7a9e71330c987"
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="[COLOR springgreen][B]Todas Las Películas[/B][/COLOR]", action="scraper",
url="http://alltorrent.net/", thumbnail="http://imgur.com/XLqPZoF.png",
fanart="http://imgur.com/v3ChkZu.jpg", contentType="movie"))
itemlist.append(item.clone(title="[COLOR springgreen] Incluyen 1080p[/COLOR]", action="scraper",
url="http://alltorrent.net/rezolucia/1080p/", thumbnail="http://imgur.com/XLqPZoF.png",
fanart="http://imgur.com/v3ChkZu.jpg", contentType="movie"))
itemlist.append(item.clone(title="[COLOR springgreen] Incluyen 720p[/COLOR]", action="scraper",
url="http://alltorrent.net/rezolucia/720p/", thumbnail="http://imgur.com/XLqPZoF.png",
fanart="http://imgur.com/v3ChkZu.jpg", contentType="movie"))
itemlist.append(item.clone(title="[COLOR springgreen] Incluyen Hdrip[/COLOR]", action="scraper",
url="http://alltorrent.net/rezolucia/hdrip/", thumbnail="http://imgur.com/XLqPZoF.png",
fanart="http://imgur.com/v3ChkZu.jpg", contentType="movie"))
itemlist.append(item.clone(title="[COLOR springgreen] Incluyen 3D[/COLOR]", action="scraper",
url="http://alltorrent.net/rezolucia/3d/", thumbnail="http://imgur.com/XLqPZoF.png",
fanart="http://imgur.com/v3ChkZu.jpg", contentType="movie"))
itemlist.append(itemlist[-1].clone(title="[COLOR floralwhite][B]Buscar[/B][/COLOR]", action="search",
thumbnail="http://imgur.com/5EBwccS.png", fanart="http://imgur.com/v3ChkZu.jpg",
contentType="movie", extra="titulo"))
itemlist.append(itemlist[-1].clone(title="[COLOR oldlace] Por Título[/COLOR]", action="search",
thumbnail="http://imgur.com/5EBwccS.png", fanart="http://imgur.com/v3ChkZu.jpg",
contentType="movie", extra="titulo"))
itemlist.append(itemlist[-1].clone(title="[COLOR oldlace] Por Año[/COLOR]", action="search",
thumbnail="http://imgur.com/5EBwccS.png", fanart="http://imgur.com/v3ChkZu.jpg",
contentType="movie", extra="año"))
itemlist.append(itemlist[-1].clone(title="[COLOR oldlace] Por Rating Imdb[/COLOR]", action="search",
thumbnail="http://imgur.com/5EBwccS.png", fanart="http://imgur.com/v3ChkZu.jpg",
contentType="movie", extra="rating"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
if item.extra == "titulo":
item.url = "http://alltorrent.net/?s=" + texto
elif item.extra == "año":
item.url = "http://alltorrent.net/weli/" + texto
else:
item.url = "http://alltorrent.net/imdb/" + texto
if texto != '':
return scraper(item)
def scraper(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
matches = scrapertools.find_multiple_matches(data,
'<div class="browse-movie-wrap col-xs-10 col-sm-4 col-md-5 col-lg-4"><a href="([^"]+)".*?src="([^"]+)".*?alt="([^"]+)".*?rel="tag">([^"]+)</a> ')
for url, thumb, title, year in matches:
title = re.sub(r"\(\d+\)", "", title)
title = ''.join((c for c in unicodedata.normalize('NFD', unicode(title.decode('utf-8'))) if
unicodedata.category(c) != 'Mn')).encode("ascii", "ignore")
titulo = "[COLOR lime]" + title + "[/COLOR]"
title = re.sub(r"!|\/.*", "", title).strip()
new_item = item.clone(action="findvideos", title=titulo, url=url, thumbnail=thumb, fulltitle=title,
contentTitle=title, contentType="movie", library=True)
new_item.infoLabels['year'] = year
itemlist.append(new_item)
## Pagination
next = scrapertools.find_single_match(data, '<li><a href="([^"]+)" rel="nofollow">Next Page')
if len(next) > 0:
url = next
itemlist.append(item.clone(title="[COLOR olivedrab][B]Siguiente >>[/B][/COLOR]", action="scraper", url=url,
thumbnail="http://imgur.com/TExhOJE.png"))
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
for item in itemlist:
if not "Siguiente >>" in item.title:
if "0." in str(item.infoLabels['rating']):
item.infoLabels['rating'] = "[COLOR olive]Sin puntuación[/COLOR]"
else:
item.infoLabels['rating'] = "[COLOR yellow]" + str(item.infoLabels['rating']) + "[/COLOR]"
item.title = item.title + " " + str(item.infoLabels['rating'])
except:
pass
for item_tmdb in itemlist:
logger.info(str(item_tmdb.infoLabels['tmdb_id']))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
# Run get_art in a background thread; target must be the callable, with the item passed via args
th = Thread(target=get_art, args=(item,))
th.setDaemon(True)
th.start()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
enlaces = scrapertools.find_multiple_matches(data,
'id="modal-quality-\w+"><span>(.*?)</span>.*?class="quality-size">(.*?)</p>.*?href="([^"]+)"')
for calidad, size, url in enlaces:
title = "[COLOR palegreen][B]Torrent[/B][/COLOR]" + " " + "[COLOR chartreuse]" + calidad + "[/COLOR]" + "[COLOR teal] ( [/COLOR]" + "[COLOR forestgreen]" + size + "[/COLOR]" + "[COLOR teal] )[/COLOR]"
itemlist.append(
Item(channel=item.channel, title=title, url=url, action="play", server="torrent", fanart=item.fanart,
thumbnail=item.thumbnail, extra=item.extra, InfoLabels=item.infoLabels, folder=False))
dd = scrapertools.find_single_match(data, 'button-green-download-big".*?href="([^"]+)"><span class="icon-play">')
if dd:
if item.library:
itemlist.append(
Item(channel=item.channel, title="[COLOR floralwhite][B]Online[/B][/COLOR]", url=dd, action="dd_y_o",
thumbnail="http://imgur.com/mRmBIV4.png", fanart=item.extra.split("|")[0],
contentType=item.contentType, extra=item.extra, folder=True))
else:
videolist = servertools.find_video_items(data=str(dd))
for video in videolist:
icon_server = os.path.join(config.get_runtime_path(), "resources", "images", "servers",
"server_" + video.server + ".png")
if not os.path.exists(icon_server):
icon_server = ""
itemlist.append(Item(channel=item.channel, url=video.url, server=video.server,
title="[COLOR floralwhite][B]" + video.server + "[/B][/COLOR]",
thumbnail=icon_server, fanart=item.extra.split("|")[1], action="play",
folder=False))
if item.library and config.get_videolibrary_support() and itemlist:
infoLabels = {'tmdb_id': item.infoLabels['tmdb_id'],
'title': item.infoLabels['title']}
itemlist.append(Item(channel=item.channel, title="Añadir esta película a la videoteca",
action="add_pelicula_to_library", url=item.url, infoLabels=infoLabels,
text_color="0xFFe5ffcc",
thumbnail='http://imgur.com/DNCBjUB.png', extra="library"))
return itemlist
def dd_y_o(item):
itemlist = []
logger.info()
videolist = servertools.find_video_items(data=item.url)
for video in videolist:
icon_server = os.path.join(config.get_runtime_path(), "resources", "images", "servers",
"server_" + video.server + ".png")
if not os.path.exists(icon_server):
icon_server = ""
itemlist.append(Item(channel=item.channel, url=video.url, server=video.server,
title="[COLOR floralwhite][B]" + video.server + "[/B][/COLOR]", thumbnail=icon_server,
fanart=item.extra.split("|")[1], action="play", folder=False))
return itemlist
def fanartv(item, id_tvdb, id, images=None):
# Avoid a mutable default argument: start with a fresh dict on every call
if images is None:
images = {}
headers = [['Content-Type', 'application/json']]
from core import jsontools
if item.contentType == "movie":
url = "http://webservice.fanart.tv/v3/movies/%s?api_key=cab16e262d72fea6a6843d679aa10300" \
% id
else:
url = "http://webservice.fanart.tv/v3/tv/%s?api_key=cab16e262d72fea6a6843d679aa10300" % id_tvdb
try:
data = jsontools.load(scrapertools.downloadpage(url, headers=headers))
if data and not "error message" in data:
for key, value in data.items():
if key not in ["name", "tmdb_id", "imdb_id", "thetvdb_id"]:
images[key] = value
else:
images = {}
except:
images = {}
return images
def get_art(item):
logger.info()
id = item.infoLabels['tmdb_id']
check_fanart = item.infoLabels['fanart']
if item.contentType != "movie":
tipo_ps = "tv"
else:
tipo_ps = "movie"
if not id:
year = item.extra
otmdb = tmdb.Tmdb(texto_buscado=item.fulltitle, year=year, tipo=tipo_ps)
id = otmdb.result.get("id")
if id == None:
otmdb = tmdb.Tmdb(texto_buscado=item.fulltitle, tipo=tipo_ps)
id = otmdb.result.get("id")
if id == None:
if item.contentType == "movie":
urlbing_imdb = "http://www.bing.com/search?q=%s+%s+tv+series+site:imdb.com" % (
item.fulltitle.replace(' ', '+'), year)
data = browser(urlbing_imdb)
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;|http://ssl-proxy.my-addr.org/myaddrproxy.php/", "", data)
subdata_imdb = scrapertools.find_single_match(data,
'<li class="b_algo">(.*?)h="ID.*?<strong>.*?TV Series')
else:
urlbing_imdb = "http://www.bing.com/search?q=%s+%s+site:imdb.com" % (
item.fulltitle.replace(' ', '+'), year)
data = browser(urlbing_imdb)
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;|http://ssl-proxy.my-addr.org/myaddrproxy.php/", "", data)
subdata_imdb = scrapertools.find_single_match(data, '<li class="b_algo">(.*?)h="ID.*?<strong>')
try:
imdb_id = scrapertools.get_match(subdata_imdb, '<a href=.*?http.*?imdb.com/title/(.*?)/.*?"')
except:
try:
imdb_id = scrapertools.get_match(subdata_imdb,
'<a href=.*?http.*?imdb.com/.*?/title/(.*?)/.*?"')
except:
imdb_id = ""
otmdb = tmdb.Tmdb(external_id=imdb_id, external_source="imdb_id", tipo=tipo_ps, idioma_busqueda="es")
id = otmdb.result.get("id")
if id == None:
if "(" in item.fulltitle:
title = scrapertools.find_single_match(item.fulltitle, '\(.*?\)')
if item.contentType != "movie":
urlbing_imdb = "http://www.bing.com/search?q=%s+%s+tv+series+site:imdb.com" % (
title.replace(' ', '+'), year)
data = browser(urlbing_imdb)
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;|http://ssl-proxy.my-addr.org/myaddrproxy.php/", "",
data)
subdata_imdb = scrapertools.find_single_match(data,
'<li class="b_algo">(.*?)h="ID.*?<strong>.*?TV Series')
else:
urlbing_imdb = "http://www.bing.com/search?q=%s+%s+site:imdb.com" % (
title.replace(' ', '+'), year)
data = browser(urlbing_imdb)
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;|http://ssl-proxy.my-addr.org/myaddrproxy.php/", "",
data)
subdata_imdb = scrapertools.find_single_match(data,
'<li class="b_algo">(.*?)h="ID.*?<strong>')
try:
imdb_id = scrapertools.get_match(subdata_imdb,
'<a href=.*?http.*?imdb.com/title/(.*?)/.*?"')
except:
try:
imdb_id = scrapertools.get_match(subdata_imdb,
'<a href=.*?http.*?imdb.com/.*?/title/(.*?)/.*?"')
except:
imdb_id = ""
otmdb = tmdb.Tmdb(external_id=imdb_id, external_source="imdb_id", tipo=tipo_ps,
idioma_busqueda="es")
id = otmdb.result.get("id")
if not id:
fanart = item.fanart
id_tvdb = ""
imagenes = []
itmdb = tmdb.Tmdb(id_Tmdb=id, tipo=tipo_ps)
images = itmdb.result.get("images")
if images:
for key, value in images.iteritems():
for detail in value:
imagenes.append('http://image.tmdb.org/t/p/original' + detail["file_path"])
if len(imagenes) >= 4:
if imagenes[0] != check_fanart:
item.fanart = imagenes[0]
else:
item.fanart = imagenes[1]
if imagenes[1] != check_fanart and imagenes[1] != item.fanart and imagenes[2] != check_fanart:
item.extra = imagenes[1] + "|" + imagenes[2]
else:
if imagenes[1] != check_fanart and imagenes[1] != item.fanart:
item.extra = imagenes[1] + "|" + imagenes[3]
elif imagenes[2] != check_fanart:
item.extra = imagenes[2] + "|" + imagenes[3]
else:
item.extra = imagenes[3] + "|" + imagenes[3]
elif len(imagenes) == 3:
if imagenes[0] != check_fanart:
item.fanart = imagenes[0]
else:
item.fanart = imagenes[1]
if imagenes[1] != check_fanart and imagenes[1] != item.fanart and imagenes[2] != check_fanart:
item.extra = imagenes[1] + "|" + imagenes[2]
else:
if imagenes[1] != check_fanart and imagenes[1] != item.fanart:
item.extra = imagenes[0] + "|" + imagenes[1]
elif imagenes[2] != check_fanart:
item.extra = imagenes[1] + "|" + imagenes[2]
else:
item.extra = imagenes[1] + "|" + imagenes[1]
elif len(imagenes) == 2:
if imagenes[0] != check_fanart:
item.fanart = imagenes[0]
else:
item.fanart = imagenes[1]
if imagenes[1] != check_fanart and imagenes[1] != item.fanart:
item.extra = imagenes[0] + "|" + imagenes[1]
else:
item.extra = imagenes[1] + "|" + imagenes[0]
elif len(imagenes) == 1:
item.extra = imagenes + "|" + imagenes
else:
item.extra = item.fanart + "|" + item.fanart
images_fanarttv = fanartv(item, id_tvdb, id)
if images_fanarttv:
if item.contentType == "movie":
if images_fanarttv.get("moviedisc"):
item.thumbnail = images_fanarttv.get("moviedisc")[0].get("url")
elif images_fanarttv.get("hdmovielogo"):
item.thumbnail = images_fanarttv.get("hdmovielogo")[0].get("url")
elif images_fanarttv.get("moviethumb"):
item.thumbnail = images_fanarttv.get("moviethumb")[0].get("url")
elif images_fanarttv.get("moviebanner"):
item.thumbnail_ = images_fanarttv.get("moviebanner")[0].get("url")
else:
item.thumbnail = item.thumbnail
else:
if images_fanarttv.get("hdtvlogo"):
item.thumbnail = images_fanarttv.get("hdtvlogo")[0].get("url")
elif images_fanarttv.get("clearlogo"):
item.thumbnail = images_fanarttv.get("hdmovielogo")[0].get("url")
if images_fanarttv.get("tvbanner"):
item.extra = item.extra + "|" + images_fanarttv.get("tvbanner")[0].get("url")
elif images_fanarttv.get("tvthumb"):
item.extra = item.extra + "|" + images_fanarttv.get("tvthumb")[0].get("url")
else:
item.extra = item.extra + "|" + item.thumbnail
else:
item.extra = item.extra + "|" + item.thumbnail

49
kodi/channels/animeflv.json Executable file
View File

@@ -0,0 +1,49 @@
{
"id": "animeflv",
"name": "AnimeFLV",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "animeflv.png",
"banner": "animeflv.png",
"version": 1,
"changes": [
{
"date": "18/05/2017",
"description": "fix ultimos animes, episodios"
},
{
"date": "06/04/2017",
"description": "fix ultimos episodios"
},
{
"date": "01/03/2017",
"description": "fix nueva web"
},
{
"date": "09/07/2016",
"description": "Arreglo viewmode"
}
],
"categories": [
"anime"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_anime",
"type": "bool",
"label": "Incluir en Novedades - Episodios de anime",
"default": true,
"enabled": true,
"visible": true
}
]
}

330
kodi/channels/animeflv.py Executable file
View File

@@ -0,0 +1,330 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from channels import renumbertools
from core import httptools
from core import jsontools
from core import logger
from core import scrapertools
from core.item import Item
HOST = "https://animeflv.net/"
def mainlist(item):
logger.info()
itemlist = list()
itemlist.append(Item(channel=item.channel, action="novedades_episodios", title="Últimos episodios", url=HOST))
itemlist.append(Item(channel=item.channel, action="novedades_anime", title="Últimos animes", url=HOST))
itemlist.append(Item(channel=item.channel, action="listado", title="Animes", url=HOST + "browse?order=title"))
itemlist.append(Item(channel=item.channel, title="Buscar por:"))
itemlist.append(Item(channel=item.channel, action="search", title=" Título"))
itemlist.append(Item(channel=item.channel, action="search_section", title=" Género", url=HOST + "browse",
extra="genre"))
itemlist.append(Item(channel=item.channel, action="search_section", title=" Tipo", url=HOST + "browse",
extra="type"))
itemlist.append(Item(channel=item.channel, action="search_section", title=" Año", url=HOST + "browse",
extra="year"))
itemlist.append(Item(channel=item.channel, action="search_section", title=" Estado", url=HOST + "browse",
extra="status"))
itemlist = renumbertools.show_option(item.channel, itemlist)
return itemlist
def search(item, texto):
logger.info()
itemlist = []
item.url = urlparse.urljoin(HOST, "api/animes/search")
texto = texto.replace(" ", "+")
post = "value=%s" % texto
data = httptools.downloadpage(item.url, post=post).data
try:
dict_data = jsontools.load(data)
for e in dict_data:
if e["id"] != e["last_id"]:
_id = e["last_id"]
else:
_id = e["id"]
url = "%sanime/%s/%s" % (HOST, _id, e["slug"])
title = e["title"]
thumbnail = "%suploads/animes/covers/%s.jpg" % (HOST, e["id"])
new_item = item.clone(action="episodios", title=title, url=url, thumbnail=thumbnail)
if e["type"] != "movie":
new_item.show = title
new_item.context = renumbertools.context(item)
else:
new_item.contentType = "movie"
new_item.contentTitle = title
itemlist.append(new_item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
return itemlist
def search_section(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
patron = 'id="%s_select"[^>]+>(.*?)</select>' % item.extra
data = scrapertools.find_single_match(data, patron)
matches = re.compile('<option value="([^"]+)">(.*?)</option>', re.DOTALL).findall(data)
for _id, title in matches:
url = "%s?%s=%s&order=title" % (item.url, item.extra, _id)
itemlist.append(Item(channel=item.channel, action="listado", title=title, url=url,
context=renumbertools.context(item)))
return itemlist
def newest(categoria):
itemlist = []
if categoria == 'anime':
itemlist = novedades_episodios(Item(url=HOST))
return itemlist
def novedades_episodios(item):
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
data = scrapertools.find_single_match(data, '<h2>Últimos episodios</h2>.+?<ul class="ListEpisodios[^>]+>(.*?)</ul>')
matches = re.compile('<a href="([^"]+)"[^>]+>.+?<img src="([^"]+)".+?"Capi">(.*?)</span>'
'<strong class="Title">(.*?)</strong>', re.DOTALL).findall(data)
itemlist = []
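# renumbertools maps the absolute episode number to the season/episode pair used for trakt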
for url, thumbnail, str_episode, show in matches:
try:
episode = int(str_episode.replace("Episodio ", ""))
except ValueError:
season = 1
episode = 1
else:
season, episode = renumbertools.numbered_for_tratk(item.channel, show, 1, episode)
title = "%s: %sx%s" % (show, season, str(episode).zfill(2))
url = urlparse.urljoin(HOST, url)
thumbnail = urlparse.urljoin(HOST, thumbnail)
new_item = Item(channel=item.channel, action="findvideos", title=title, url=url, show=show, thumbnail=thumbnail,
fulltitle=title)
itemlist.append(new_item)
return itemlist
def novedades_anime(item):
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
data = scrapertools.find_single_match(data, '<ul class="ListAnimes[^>]+>(.*?)</ul>')
matches = re.compile('href="([^"]+)".+?<img src="([^"]+)".+?<span class=.+?>(.*?)</span>.+?<h3.+?>(.*?)</h3>.+?'
'(?:</p><p>(.*?)</p>.+?)?</article></li>', re.DOTALL).findall(data)
itemlist = []
for url, thumbnail, _type, title, plot in matches:
url = urlparse.urljoin(HOST, url)
thumbnail = urlparse.urljoin(HOST, thumbnail)
new_item = Item(channel=item.channel, action="episodios", title=title, url=url, thumbnail=thumbnail,
fulltitle=title, plot=plot)
if _type != "Película":
new_item.show = title
new_item.context = renumbertools.context(item)
else:
new_item.contentType = "movie"
new_item.contentTitle = title
itemlist.append(new_item)
return itemlist
def listado(item):
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
url_pagination = scrapertools.find_single_match(data, '<li class="active">.*?</li><li><a href="([^"]+)">')
data = scrapertools.find_multiple_matches(data, '<ul class="ListAnimes[^>]+>(.*?)</ul>')
data = "".join(data)
matches = re.compile('<a href="([^"]+)">.+?<img src="([^"]+)".+?<span class=.+?>(.*?)</span>.+?<h3.*?>(.*?)</h3>'
'.*?</p><p>(.*?)</p>', re.DOTALL).findall(data)
itemlist = []
for url, thumbnail, _type, title, plot in matches:
url = urlparse.urljoin(HOST, url)
thumbnail = urlparse.urljoin(HOST, thumbnail)
new_item = Item(channel=item.channel, action="episodios", title=title, url=url, thumbnail=thumbnail,
fulltitle=title, plot=plot)
if _type == "Anime":
new_item.show = title
new_item.context = renumbertools.context(item)
else:
new_item.contentType = "movie"
new_item.contentTitle = title
itemlist.append(new_item)
if url_pagination:
url = urlparse.urljoin(HOST, url_pagination)
title = ">> Pagina Siguiente"
itemlist.append(Item(channel=item.channel, action="listado", title=title, url=url))
return itemlist
def episodios(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
# fix for renumbertools
item.show = scrapertools.find_single_match(data, '<h1 class="Title">(.*?)</h1>')
if item.plot == "":
item.plot = scrapertools.find_single_match(data, 'Description[^>]+><p>(.*?)</p>')
matches = re.compile('href="([^"]+)"><figure><img class="[^"]+" data-original="([^"]+)".+?</h3>'
'<p>(.*?)</p>', re.DOTALL).findall(data)
if matches:
for url, thumb, title in matches:
title = title.strip()
url = urlparse.urljoin(item.url, url)
# thumbnail = item.thumbnail
try:
episode = int(scrapertools.find_single_match(title, "^.+?\s(\d+)$"))
except ValueError:
season = 1
episode = 1
else:
season, episode = renumbertools.numbered_for_tratk(item.channel, item.show, 1, episode)
title = "%s: %sx%s" % (item.title, season, str(episode).zfill(2))
itemlist.append(item.clone(action="findvideos", title=title, url=url, thumbnail=thumb, fulltitle=title,
fanart=item.thumbnail, contentType="episode"))
else:
# no thumbnail available
matches = re.compile('<a href="(/ver/[^"]+)"[^>]+>(.*?)<', re.DOTALL).findall(data)
for url, title in matches:
title = title.strip()
url = urlparse.urljoin(item.url, url)
thumb = item.thumbnail
try:
episode = int(scrapertools.find_single_match(title, "^.+?\s(\d+)$"))
except ValueError:
season = 1
episode = 1
else:
season, episode = renumbertools.numbered_for_tratk(item.channel, item.show, 1, episode)
title = "%s: %sx%s" % (item.title, season, str(episode).zfill(2))
itemlist.append(item.clone(action="findvideos", title=title, url=url, thumbnail=thumb, fulltitle=title,
fanart=item.thumbnail, contentType="episode"))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
list_videos = scrapertools.find_multiple_matches(data, 'video\[\d\]\s=\s\'<iframe.+?src="([^"]+)"')
list_videos.extend(scrapertools.find_multiple_matches(data, 'href="http://ouo.io/s/y0d65LCP\?s=([^"]+)"'))
# logger.info("data=%s " % list_videos)
aux_url = []
for e in list_videos:
if e.startswith("https://s3.animeflv.com/embed.php?"):
server = scrapertools.find_single_match(e, 'server=(.*?)&')
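# The embed player URL has a "check" counterpart that returns the video sources as JSON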
e = e.replace("embed", "check").replace("https", "http")
data = httptools.downloadpage(e).data.replace("\\", "")
if '{"error": "Por favor intenta de nuevo en unos segundos", "sleep": 3}' in data:
import time
time.sleep(3)
data = httptools.downloadpage(e).data.replace("\\", "")
video_urls = []
if server == "gdrive":
data = jsontools.load(data)
for s in data.get("sources", []):
video_urls.append([s["label"], s["type"], s["file"]])
if video_urls:
video_urls.sort(key=lambda v: int(v[0]))
itemlist.append(item.clone(title="Enlace encontrado en %s" % server, action="play",
video_urls=video_urls))
else:
url = scrapertools.find_single_match(data, '"file":"([^"]+)"')
if url:
itemlist.append(item.clone(title="Enlace encontrado en %s" % server, url=url, action="play"))
else:
aux_url.append(e)
from core import servertools
itemlist.extend(servertools.find_video_items(data=",".join(aux_url)))
for videoitem in itemlist:
videoitem.fulltitle = item.fulltitle
videoitem.channel = item.channel
videoitem.thumbnail = item.thumbnail
return itemlist
def play(item):
logger.info()
itemlist = []
if item.video_urls:
for it in item.video_urls:
title = ".%s %sp [directo]" % (it[1].replace("video/", ""), it[0])
itemlist.append([title, it[2]])
return itemlist
else:
return [item]

37
kodi/channels/animeflv_me.json Executable file
View File

@@ -0,0 +1,37 @@
{
"id": "animeflv_me",
"name": "Animeflv.ME",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/x9AdvBx.png",
"banner": "http://i.imgur.com/dTZwCPq.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "05/01/2017",
"description": "Actualizada url de la opción Novedades. Arreglado un error que impedia que se mostrara un solo resultado al realizar busquedas. Limpieza de código"
},
{
"date": "01/07/2016",
"description": "nuevo canal"
}
],
"categories": [
"anime"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
}
]
}

347
kodi/channels/animeflv_me.py Executable file
View File

@@ -0,0 +1,347 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from os import path
from channels import renumbertools
from core import config
from core import filetools
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
CHANNEL_HOST = "http://animeflv.me/"
CHANNEL_DEFAULT_HEADERS = [
["User-Agent", "Mozilla/5.0"],
["Accept-Encoding", "gzip, deflate"],
["Referer", CHANNEL_HOST]
]
REGEX_NEXT_PAGE = r"class='current'>\d+?</li><li><a href=\"([^']+?)\""
REGEX_TITLE = r'(?:bigChar_a" href=.+?>)(.+?)(?:</a>)'
REGEX_THUMB = r'src="(http://media.animeflv\.me/uploads/thumbs/[^"]+?)"'
REGEX_PLOT = r'<span class="info">Línea de historia:</span><p><span>(.*?)</span>'
REGEX_URL = r'href="(http://animeflv\.me/Anime/[^"]+)">'
REGEX_SERIE = r'{0}.+?{1}([^<]+?)</a><p>(.+?)</p>'.format(REGEX_THUMB, REGEX_URL)
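# Combines REGEX_THUMB and REGEX_URL so a single pass captures thumbnail, url, title and plot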
REGEX_EPISODE = r'href="(http://animeflv\.me/Ver/[^"]+?)">(?:<span.+?</script>)?(.+?)</a></td><td>(\d+/\d+/\d+)</td></tr>'
REGEX_GENERO = r'<a href="(http://animeflv\.me/genero/[^\/]+/)">([^<]+)</a>'
def get_url_contents(url):
html = httptools.downloadpage(url, headers=CHANNEL_DEFAULT_HEADERS).data
# Remove whitespace before and after tag openings and closings
html = re.sub(r'>\s+<', '><', html)
html = re.sub(r'>\s+', '>', html)
html = re.sub(r'\s+<', '<', html)
return html
def get_cookie_value():
"""
Gets the Cloudflare cookies
"""
cookie_file = path.join(config.get_data_path(), 'cookies.dat')
cookie_data = filetools.read(cookie_file)
cfduid = scrapertools.find_single_match(
cookie_data, r"animeflv.*?__cfduid\s+([A-Za-z0-9\+\=]+)")
cfduid = "__cfduid=" + cfduid + ";"
cf_clearance = scrapertools.find_single_match(
cookie_data, r"animeflv.*?cf_clearance\s+([A-Za-z0-9\+\=\-]+)")
cf_clearance = " cf_clearance=" + cf_clearance
cookies_value = cfduid + cf_clearance
return cookies_value
header_string = "|User-Agent=Mozilla/5.0&Referer=http://animeflv.me&Cookie=" + \
get_cookie_value()
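# Appended to thumbnail URLs so Kodi sends the Cloudflare cookies when it fetches the images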
def __find_next_page(html):
"""
Finds the link to the next page
"""
return scrapertools.find_single_match(html, REGEX_NEXT_PAGE)
def __extract_info_from_serie(html):
"""
Extracts a series' or movie's information from its page.
Useful when a search returns a single result and animeflv.me
redirects to that result's page.
"""
title = scrapertools.find_single_match(html, REGEX_TITLE)
title = clean_title(title)
url = scrapertools.find_single_match(html, REGEX_URL)
thumbnail = scrapertools.find_single_match(
html, REGEX_THUMB) + header_string
plot = scrapertools.find_single_match(html, REGEX_PLOT)
return [title, url, thumbnail, plot]
def __sort_by_quality(items):
"""
Sorts the items by quality in descending order
"""
def func(item):
return int(scrapertools.find_single_match(item.title, r'\[(.+?)\]'))
return sorted(items, key=func, reverse=True)
def clean_title(title):
"""
Removes the year from series or movie titles
"""
year_pattern = r'\([\d -]+?\)'
return re.sub(year_pattern, '', title).strip()
def __find_series(html):
"""
Finds series in a listing, e.g. search results, categories, etc.
"""
series = []
# Limit the search to the series listing
list_start = html.find('<table class="listing">')
list_end = html.find('</table>', list_start)
list_html = html[list_start:list_end]
for serie in re.finditer(REGEX_SERIE, list_html, re.S):
thumbnail, url, title, plot = serie.groups()
title = clean_title(title)
thumbnail = thumbnail + header_string
plot = scrapertools.htmlclean(plot)
series.append([title, url, thumbnail, plot])
return series
def mainlist(item):
logger.info()
itemlist = list()
itemlist.append(Item(channel=item.channel, action="letras",
title="Por orden alfabético"))
itemlist.append(Item(channel=item.channel, action="generos", title="Por géneros",
url=urlparse.urljoin(CHANNEL_HOST, "ListadeAnime")))
itemlist.append(Item(channel=item.channel, action="series", title="Por popularidad",
url=urlparse.urljoin(CHANNEL_HOST, "/ListadeAnime/MasVisto")))
itemlist.append(Item(channel=item.channel, action="series", title="Novedades",
url=urlparse.urljoin(CHANNEL_HOST, "ListadeAnime/Nuevo")))
itemlist.append(Item(channel=item.channel, action="series", title="Últimos",
url=urlparse.urljoin(CHANNEL_HOST, "ListadeAnime/LatestUpdate")))
itemlist.append(Item(channel=item.channel, action="search", title="Buscar...",
url=urlparse.urljoin(CHANNEL_HOST, "Buscar?s=")))
itemlist = renumbertools.show_option(item.channel, itemlist)
return itemlist
def letras(item):
logger.info()
base_url = 'http://animeflv.me/ListadeAnime?c='
itemlist = []
itemlist.append(Item(channel=item.channel, action="series", title="#",
url=base_url + "#", viewmode="movies_with_plot"))
# Iterate over the letters' positions in the ASCII table
# 65 = A, 90 = Z
for i in xrange(65, 91):
letter = chr(i)
logger.debug("title=[{0}], url=[{1}], thumbnail=[]".format(
letter, base_url + letter))
itemlist.append(Item(channel=item.channel, action="series", title=letter,
url=base_url + letter, viewmode="movies_with_plot"))
return itemlist
def generos(item):
logger.info()
itemlist = []
html = get_url_contents(item.url)
generos = re.findall(REGEX_GENERO, html)
for url, genero in generos:
logger.debug(
"title=[{0}], url=[{1}], thumbnail=[]".format(genero, url))
itemlist.append(Item(channel=item.channel, action="series", title=genero, url=url,
plot='', viewmode="movies_with_plot"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "%20")
item.url = "{0}{1}".format(item.url, texto)
html = get_url_contents(item.url)
try:
# A single result was found and we were redirected to the series page
if html.find('<title>Ver') >= 0:
series = [__extract_info_from_serie(html)]
# A list of results was returned
else:
series = __find_series(html)
items = []
for serie in series:
title, url, thumbnail, plot = serie
logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(
title, url, thumbnail))
items.append(Item(channel=item.channel, action="episodios", title=title,
url=url, thumbnail=thumbnail, plot=plot,
show=title, viewmode="movies_with_plot", context=renumbertools.context(item)))
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
return items
def series(item):
logger.info()
page_html = get_url_contents(item.url)
series = __find_series(page_html)
items = []
for serie in series:
title, url, thumbnail, plot = serie
logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(
title, url, thumbnail))
items.append(Item(channel=item.channel, action="episodios", title=title, url=url,
thumbnail=thumbnail, plot=plot, show=title, viewmode="movies_with_plot",
context=renumbertools.context(item)))
url_next_page = __find_next_page(page_html)
if url_next_page:
items.append(Item(channel=item.channel, action="series", title=">> Página Siguiente",
url=url_next_page, thumbnail="", plot="", folder=True,
viewmode="movies_with_plot"))
return items
def episodios(item):
logger.info()
itemlist = []
html_serie = get_url_contents(item.url)
info_serie = __extract_info_from_serie(html_serie)
plot = info_serie[3] if info_serie else ''
episodes = re.findall(REGEX_EPISODE, html_serie, re.DOTALL)
es_pelicula = False
for url, title, date in episodes:
episode = scrapertools.find_single_match(title, r'Episodio (\d+)')
# The link belongs to an episode
if episode:
season = 1
episode = int(episode)
season, episode = renumbertools.numbered_for_tratk(
item.channel, item.show, season, episode)
title = "{0}x{1:02d} {2} ({3})".format(
season, episode, "Episodio " + str(episode), date)
# The link belongs to a movie
else:
title = "{0} ({1})".format(title, date)
item.url = url
es_pelicula = True
logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(
title, url, item.thumbnail))
itemlist.append(Item(channel=item.channel, action="findvideos", title=title, url=url,
thumbnail=item.thumbnail, plot=plot, show=item.show,
fulltitle="{0} {1}".format(item.show, title),
viewmode="movies_with_plot", folder=True))
# The system supports the video library and at least one episode or movie was found
if config.get_videolibrary_support() and len(itemlist) > 0:
if es_pelicula:
item_title = "Añadir película a la videoteca"
item_action = "add_pelicula_to_library"
item_extra = ""
else:
item_title = "Añadir serie a la videoteca"
item_action = "add_serie_to_library"
item_extra = "episodios"
itemlist.append(Item(channel=item.channel, title=item_title, url=item.url,
action=item_action, extra=item_extra, show=item.show))
if not es_pelicula:
itemlist.append(Item(channel=item.channel, title="Descargar todos los episodios",
url=item.url, action="download_all_episodes", extra="episodios",
show=item.show))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
page_html = get_url_contents(item.url)
regex_api = r'http://player\.animeflv\.me/[^\"]+'
iframe_url = scrapertools.find_single_match(page_html, regex_api)
iframe_html = get_url_contents(iframe_url)
regex_video_list = r'var part = \[([^\]]+)'
videos_html = scrapertools.find_single_match(iframe_html, regex_video_list)
videos = re.findall('"([^"]+)"', videos_html, re.DOTALL)
qualities = ["360", "480", "720", "1080"]
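# Sources are assumed to be listed in ascending quality order, so each index maps into the qualities table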
for quality_id, video_url in enumerate(videos):
itemlist.append(Item(channel=item.channel, action="play", url=video_url, show=re.escape(item.show),
title="Ver en calidad [{0}]".format(qualities[quality_id]), plot=item.plot,
folder=True, fulltitle=item.title, viewmode="movies_with_plot"))
return __sort_by_quality(itemlist)

47
kodi/channels/animeflv_ru.json Executable file
View File

@@ -0,0 +1,47 @@
{
"id": "animeflv_ru",
"name": "AnimeFLV.RU",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/5nRR9qq.png",
"banner": "animeflv_ru.png",
"version": 1,
"compatible": {
"python": "2.7.9",
"addon_version": "4.2.1"
},
"changes": {
"change": [
{
"date": "06/04/2017",
"description": "fix"
},
{
"date": "01/03/2017",
"description": "fix nueva web"
}
]
},
"categories": [
"anime"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_anime",
"type": "bool",
"label": "Incluir en Novedades - Episodios de anime",
"default": true,
"enabled": true,
"visible": true
}
]
}

270
kodi/channels/animeflv_ru.py Executable file
View File

@@ -0,0 +1,270 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from channels import renumbertools
from core import httptools
from core import jsontools
from core import logger
from core import scrapertools
from core.item import Item
HOST = "https://animeflv.ru/"
def mainlist(item):
logger.info()
itemlist = list()
itemlist.append(Item(channel=item.channel, action="novedades_episodios", title="Últimos episodios", url=HOST))
itemlist.append(Item(channel=item.channel, action="novedades_anime", title="Últimos animes", url=HOST))
itemlist.append(Item(channel=item.channel, action="listado", title="Animes", url=HOST + "animes/nombre/lista"))
itemlist.append(Item(channel=item.channel, title="Buscar por:"))
itemlist.append(Item(channel=item.channel, action="search", title=" Título"))
itemlist.append(Item(channel=item.channel, action="search_section", title=" Género", url=HOST + "animes",
extra="genre"))
itemlist = renumbertools.show_option(item.channel, itemlist)
return itemlist
def clean_title(title):
year_pattern = r'\([\d -]+?\)'
return re.sub(year_pattern, '', title).strip()
def search(item, texto):
logger.info()
itemlist = []
item.url = urlparse.urljoin(HOST, "search_suggest")
texto = texto.replace(" ", "+")
post = "value=%s" % texto
data = httptools.downloadpage(item.url, post=post).data
try:
dict_data = jsontools.load(data)
for e in dict_data:
title = clean_title(scrapertools.htmlclean(e["name"]))
url = e["url"]
plot = e["description"]
thumbnail = HOST + e["thumb"]
new_item = item.clone(action="episodios", title=title, url=url, plot=plot, thumbnail=thumbnail)
if "Pelicula" in e["genre"]:
new_item.contentType = "movie"
new_item.contentTitle = title
else:
new_item.show = title
new_item.context = renumbertools.context(item)
itemlist.append(new_item)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
return itemlist
def search_section(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
patron = 'id="%s_filter"[^>]+><div class="inner">(.*?)</div></div>' % item.extra
data = scrapertools.find_single_match(data, patron)
matches = re.compile('<a href="([^"]+)"[^>]+>(.*?)</a>', re.DOTALL).findall(data)
for url, title in matches:
url = "%s/nombre/lista" % url
itemlist.append(Item(channel=item.channel, action="listado", title=title, url=url,
context=renumbertools.context(item)))
return itemlist
def newest(categoria):
itemlist = []
if categoria == 'anime':
itemlist = novedades_episodios(Item(url=HOST))
return itemlist
def novedades_episodios(item):
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
data = scrapertools.find_single_match(data, '<ul class="ListEpisodios[^>]+>(.*?)</ul>')
matches = re.compile('href="([^"]+)"[^>]+>.+?<img src="([^"]+)".+?"Capi">(.*?)</span>'
'<strong class="Title">(.*?)</strong>', re.DOTALL).findall(data)
itemlist = []
for url, thumbnail, str_episode, show in matches:
try:
episode = int(str_episode.replace("Ep. ", ""))
except ValueError:
season = 1
episode = 1
else:
season, episode = renumbertools.numbered_for_tratk(item.channel, show, 1, episode)
title = "%s: %sx%s" % (show, season, str(episode).zfill(2))
url = urlparse.urljoin(HOST, url)
thumbnail = urlparse.urljoin(HOST, thumbnail)
new_item = Item(channel=item.channel, action="findvideos", title=title, url=url, show=show, thumbnail=thumbnail,
fulltitle=title)
itemlist.append(new_item)
return itemlist
def novedades_anime(item):
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
data = scrapertools.find_single_match(data, '<ul class="ListAnimes[^>]+>(.*?)</ul>')
matches = re.compile('<img src="([^"]+)".+?<a href="([^"]+)">(.*?)</a>', re.DOTALL).findall(data)
itemlist = []
for thumbnail, url, title in matches:
url = urlparse.urljoin(HOST, url)
thumbnail = urlparse.urljoin(HOST, thumbnail)
title = clean_title(title)
new_item = Item(channel=item.channel, action="episodios", title=title, url=url, thumbnail=thumbnail,
fulltitle=title)
new_item.show = title
new_item.context = renumbertools.context(item)
itemlist.append(new_item)
return itemlist
def listado(item):
logger.info()
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
# logger.debug("datito %s" % data)
url_pagination = scrapertools.find_single_match(data, '<li class="current">.*?</li>[\s]<li><a href="([^"]+)">')
data = scrapertools.find_single_match(data, '</div><div class="full">(.*?)<div class="pagination')
matches = re.compile('<img.+?src="([^"]+)".+?<a href="([^"]+)">(.*?)</a>.+?'
'<div class="full item_info genres_info">(.*?)</div>.+?class="full">(.*?)</p>',
re.DOTALL).findall(data)
itemlist = []
for thumbnail, url, title, genres, plot in matches:
title = clean_title(title)
url = urlparse.urljoin(HOST, url)
thumbnail = urlparse.urljoin(HOST, thumbnail)
new_item = Item(channel=item.channel, action="episodios", title=title, url=url, thumbnail=thumbnail,
fulltitle=title, plot=plot)
if "Pelicula Anime" in genres:
new_item.contentType = "movie"
new_item.contentTitle = title
else:
new_item.show = title
new_item.context = renumbertools.context(item)
itemlist.append(new_item)
if url_pagination:
url = urlparse.urljoin(HOST, url_pagination)
title = ">> Pagina Siguiente"
itemlist.append(Item(channel=item.channel, action="listado", title=title, url=url))
return itemlist
def episodios(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
if item.plot == "":
item.plot = scrapertools.find_single_match(data, 'Description[^>]+><p>(.*?)</p>')
data = scrapertools.find_single_match(data, '<div class="Sect Episodes full">(.*?)</div>')
matches = re.compile('<a href="([^"]+)"[^>]+>(.+?)</a', re.DOTALL).findall(data)
for url, title in matches:
title = title.strip()
url = urlparse.urljoin(item.url, url)
thumbnail = item.thumbnail
try:
episode = int(scrapertools.find_single_match(title, "Episodio (\d+)"))
except ValueError:
season = 1
episode = 1
else:
season, episode = renumbertools.numbered_for_tratk(item.channel, item.show, 1, episode)
title = "%s: %sx%s" % (item.title, season, str(episode).zfill(2))
itemlist.append(item.clone(action="findvideos", title=title, url=url, thumbnail=thumbnail, fulltitle=title,
fanart=thumbnail, contentType="episode"))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
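# The get_video_info endpoint resolves the embed id into a JSON playlist with the stream sources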
_id = scrapertools.find_single_match(item.url, 'https://animeflv.ru/ver/([^/]+)/')
post = "embed_id=%s" % _id
data = httptools.downloadpage("https://animeflv.ru/get_video_info", post=post).data
dict_data = jsontools.load(data)
headers = dict()
headers["Referer"] = item.url
data = httptools.downloadpage("https:" + dict_data["value"], headers=headers).data
dict_data = jsontools.load(data)
list_videos = dict_data["playlist"][0]["sources"]
if isinstance(list_videos, list):
for video in list_videos:
itemlist.append(Item(channel=item.channel, action="play", url=video["file"], show=re.escape(item.show),
title="Ver en calidad [%s]" % video["label"], plot=item.plot, fulltitle=item.title,
thumbnail=item.thumbnail))
else:
for video in list_videos.values():
itemlist.append(Item(channel=item.channel, action="play", url=video["file"], show=re.escape(item.show),
title="Ver en calidad [%s]" % video["label"], plot=item.plot, fulltitle=item.title,
thumbnail=item.thumbnail))
return itemlist

45
kodi/channels/animeid.json Executable file
View File

@@ -0,0 +1,45 @@
{
"id": "animeid",
"name": "Animeid",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "animeid.png",
"banner": "animeid.png",
"version": 1,
"changes": [
{
"date": "17/05/2017",
"description": "Fix novedades y replace en findvideos"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "04/01/16",
"description": "Arreglado problema en findvideos"
}
],
"categories": [
"anime"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_anime",
"type": "bool",
"label": "Incluir en Novedades - Episodios de anime",
"default": true,
"enabled": true,
"visible": true
}
]
}

355
kodi/channels/animeid.py Executable file
View File

@@ -0,0 +1,355 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
CHANNEL_HOST = "http://animeid.tv/"
def mainlist(item):
logger.info()
itemlist = list()
itemlist.append(
Item(channel=item.channel, action="novedades_series", title="Últimas series", url="http://www.animeid.tv/"))
itemlist.append(Item(channel=item.channel, action="novedades_episodios", title="Últimos episodios",
url="http://www.animeid.tv/", viewmode="movie_with_plot"))
itemlist.append(
Item(channel=item.channel, action="generos", title="Listado por genero", url="http://www.animeid.tv/"))
itemlist.append(
Item(channel=item.channel, action="letras", title="Listado alfabetico", url="http://www.animeid.tv/"))
itemlist.append(Item(channel=item.channel, action="search", title="Buscar..."))
return itemlist
def newest(categoria):
itemlist = []
item = Item()
try:
if categoria == 'anime':
item.url = "http://animeid.tv/"
itemlist = novedades_episodios(item)
# Catch the exception so the "newest" section is not interrupted if one channel fails
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
return itemlist
# TODO: fix
def search(item, texto):
logger.info()
itemlist = []
if item.url == "":
item.url = "http://www.animeid.tv/ajax/search?q="
texto = texto.replace(" ", "+")
item.url = item.url + texto
try:
headers = []
headers.append(
["User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0"])
headers.append(["Referer", "http://www.animeid.tv/"])
headers.append(["X-Requested-With", "XMLHttpRequest"])
data = scrapertools.cache_page(item.url, headers=headers)
data = data.replace("\\", "")
logger.debug("data=" + data)
patron = '{"id":"([^"]+)","text":"([^"]+)","date":"[^"]*","image":"([^"]+)","link":"([^"]+)"}'
matches = re.compile(patron, re.DOTALL).findall(data)
for id, scrapedtitle, scrapedthumbnail, scrapedurl in matches:
title = scrapedtitle
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = scrapedthumbnail
plot = ""
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="episodios", title=title, url=url, thumbnail=thumbnail, plot=plot,
show=title, viewmode="movie_with_plot"))
return itemlist
# Catch the exception so the global search is not interrupted if one channel fails
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def novedades_series(item):
logger.info()
# Download the page
data = httptools.downloadpage(item.url).data
data = scrapertools.get_match(data, '<section class="series">(.*?)</section>')
patronvideos = '<li><a href="([^"]+)"><span class="tipo\d+">([^<]+)</span><strong>([^<]+)</strong>'
matches = re.compile(patronvideos, re.DOTALL).findall(data)
itemlist = []
for url, tipo, title in matches:
scrapedtitle = title + " (" + tipo + ")"
scrapedurl = urlparse.urljoin(item.url, url)
scrapedthumbnail = ""
scrapedplot = ""
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(Item(channel=item.channel, action="episodios", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, show=title, viewmode="movie_with_plot"))
return itemlist
def novedades_episodios(item):
logger.info()
# Download the page
# <article> <a href="/ver/uchuu-kyoudai-35"> <header>Uchuu Kyoudai #35</header> <figure><img src="http://static.animeid.com/art/uchuu-kyoudai/normal/b4934a1d.jpg" class="cover" alt="Uchuu Kyoudai" width="250" height="140" /></figure><div class="mask"></div> <aside><span class="p"><strong>Reproducciones: </strong>306</span> <span class="f"><strong>Favoritos: </strong>0</span></aside> </a> <p>Una noche en el año 2006, cuando eran jovenes, los dos hermanos Mutta (el mayor) y Hibito (el menor) vieron un OVNI que hiba en dirección hacia la luna. Esa misma noche decidieron que ellos se convertirian en astronautas y irian al espacio exterior. En el año 2050, Hibito se ha convertido en astronauta y que ademas está incluido en una misión que irá a la luna. En cambio Mutta siguió una carrera mas tradicional, y terminó trabajando en una compañia de fabricación de automoviles. Sin embargo, Mutta termina arruinando su carrera por ciertos problemas que tiene con su jefe. Ahora bien, no sólo perdió su trabajo si no que fue incluido en la lista negra de la industria laboral. Pueda ser que esta sea su unica oportunidad que tenga Mutta de volver a perseguir su sueño de la infancia y convertirse en astronauta, al igual que su perqueño hermano Hibito.</p> </article>
# <img pagespeed_high_res_src="
data = httptools.downloadpage(item.url).data
data = scrapertools.get_match(data, '<section class="lastcap">(.*?)</section>')
patronvideos = '<a href="([^"]+)">[^<]+<header>([^<]+)</header>[^<]+<figure><img[^>]+src="([^"]+)"[\s\S]+?<p>(.+?)</p>'
matches = re.compile(patronvideos, re.DOTALL).findall(data)
itemlist = []
for url, title, thumbnail, plot in matches:
scrapedtitle = scrapertools.entityunescape(title)
scrapedurl = urlparse.urljoin(item.url, url)
scrapedthumbnail = thumbnail
scrapedplot = plot
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
episodio = scrapertools.get_match(scrapedtitle, '\s+#(.*?)$')
contentTitle = scrapedtitle.replace('#' + episodio, '')
itemlist.append(Item(channel=item.channel, action="findvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot,
hasContentDetails=True, contentSeason=1, contentTitle=contentTitle))
return itemlist
def generos(item):
logger.info()
# Download the page
data = httptools.downloadpage(item.url).data
data = scrapertools.get_match(data, '<div class="generos">(.*?)</div>')
patronvideos = '<li> <a href="([^"]+)">([^<]+)</a>'
matches = re.compile(patronvideos, re.DOTALL).findall(data)
itemlist = []
for url, title in matches:
scrapedtitle = title
scrapedurl = urlparse.urljoin(item.url, url)
scrapedthumbnail = ""
scrapedplot = ""
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="series", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
plot=scrapedplot, show=title, viewmode="movie_with_plot"))
return itemlist
def letras(item):
logger.info()
# Download the page
data = httptools.downloadpage(item.url).data
data = scrapertools.get_match(data, '<ul id="letras">(.*?)</ul>')
patronvideos = '<li> <a href="([^"]+)">([^<]+)</a>'
matches = re.compile(patronvideos, re.DOTALL).findall(data)
itemlist = []
for url, title in matches:
scrapedtitle = title
scrapedurl = urlparse.urljoin(item.url, url)
scrapedthumbnail = ""
scrapedplot = ""
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="series", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
plot=scrapedplot, show=title, viewmode="movie_with_plot"))
return itemlist
def series(item):
logger.info()
# Download the page
data = httptools.downloadpage(item.url).data
logger.debug("datito %s" % data)
'''
<article class="item">
<a href="/aoi-sekai-no-chuushin-de">
<header>Aoi Sekai no Chuushin de</header>
<figure>
<img src="http://static.animeid.com/art/aoi-sekai-no-chuushin-de/cover/0077cb45.jpg" width="116"
height="164" />
</figure>
<div class="mask"></div>
</a>
<p>
El Reino de Segua ha ido perdiendo la guerra contra el Imperio de Ninterdo pero la situación ha cambiado
con la aparición de un chico llamado Gear. Todos los personajes son parodias de protas de videojuegos de
Nintendo y Sega respectivamente, como lo son Sonic the Hedgehog, Super Mario Bros., The Legend of Zelda,
etc.
</p>
</article>
'''
patron = '<article class="item"[^<]+'
patron += '<a href="([^"]+)"[^<]+<header>([^<]+)</header[^<]+'
patron += '<figure><img[\sa-z_]+src="([^"]+)"[^<]+</figure><div class="mask"></div></a>[^<]+<p>(.*?)<'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for url, title, thumbnail, plot in matches:
scrapedtitle = title
scrapedurl = urlparse.urljoin(item.url, url)
scrapedthumbnail = thumbnail
scrapedplot = plot
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(Item(channel=item.channel, action="episodios", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, show=scrapedtitle,
viewmode="movie_with_plot"))
itemlist = sorted(itemlist, key=lambda it: it.title)
try:
page_url = scrapertools.get_match(data, '<li><a href="([^"]+)">&gt;</a></li>')
itemlist.append(Item(channel=item.channel, action="series", title=">> Página siguiente",
url=urlparse.urljoin(item.url, page_url), viewmode="movie_with_plot", thumbnail="",
plot=""))
except:
pass
return itemlist
def episodios(item, final=True):
logger.info()
# Download the page
body = httptools.downloadpage(item.url).data
# Defaults, so the logging and Item below never hit a NameError when a match fails
scrapedplot = ""
scrapedthumbnail = ""
try:
scrapedplot = scrapertools.get_match(body, '<meta name="description" content="([^"]+)"')
except:
pass
try:
scrapedthumbnail = scrapertools.get_match(body, '<link rel="image_src" href="([^"]+)"')
except:
pass
data = scrapertools.get_match(body, '<ul id="listado">(.*?)</ul>')
patron = '<li><a href="([^"]+)">(.*?)</a></li>'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for url, title in matches:
scrapedtitle = scrapertools.htmlclean(title)
try:
episodio = scrapertools.get_match(scrapedtitle, "Capítulo\s+(\d+)")
titulo_limpio = re.compile("Capítulo\s+(\d+)\s+", re.DOTALL).sub("", scrapedtitle)
if len(episodio) == 1:
scrapedtitle = "1x0" + episodio + " - " + titulo_limpio
else:
scrapedtitle = "1x" + episodio + " - " + titulo_limpio
except:
pass
scrapedurl = urlparse.urljoin(item.url, url)
# scrapedthumbnail = ""
# scrapedplot = ""
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(Item(channel=item.channel, action="findvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, show=item.show))
try:
next_page = scrapertools.get_match(body, '<a href="([^"]+)">\&gt\;</a>')
next_page = urlparse.urljoin(item.url, next_page)
item2 = Item(channel=item.channel, action="episodios", title=item.title, url=next_page,
thumbnail=item.thumbnail, plot=item.plot, show=item.show, viewmode="movie_with_plot")
itemlist.extend(episodios(item2, final=False))
except:
import traceback
logger.error(traceback.format_exc())
if final and config.get_videolibrary_support():
itemlist.append(Item(channel=item.channel, title="Añadir esta serie a la videoteca", url=item.url,
action="add_serie_to_library", extra="episodios", show=item.show))
itemlist.append(Item(channel=item.channel, title="Descargar todos los episodios de la serie", url=item.url,
action="download_all_episodes", extra="episodios", show=item.show))
return itemlist
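# Note on the recursion above: the "final" flag keeps the videolibrary and
# download entries from being appended once per page; every recursive call
# passes final=False, so only the outermost call adds them after all pages merge.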
def findvideos(item):
logger.info()
data = httptools.downloadpage(item.url).data
itemlist = []
url_anterior = scrapertools.find_single_match(data, '<li class="b"><a href="([^"]+)">« Capítulo anterior')
url_siguiente = scrapertools.find_single_match(data, '<li class="b"><a href="([^"]+)">Siguiente capítulo »')
data = scrapertools.find_single_match(data, '<ul id="partes">(.*?)</ul>')
data = data.replace("\\/", "/")
data = data.replace("%3A", ":")
data = data.replace("%2F", "/")
logger.info("data=" + data)
# http%3A%2F%2Fwww.animeid.moe%2Fstream%2F41TLmCj7_3q4BQLnfsban7%2F1440956023.mp4
# http://www.animeid.moe/stream/41TLmCj7_3q4BQLnfsban7/1440956023.mp4
# http://www.animeid.tv/stream/oiW0uG7yqBrg5TVM5Cm34n/1385370686.mp4
# The samples above show both .moe and .tv stream URLs, so match either domain
patron = '(http://www\.animeid\.(?:tv|moe)/stream/[^/]+/\d+\.[a-z0-9]+)'
matches = re.compile(patron, re.DOTALL).findall(data)
encontrados = set()
for url in matches:
if url not in encontrados:
itemlist.append(
Item(channel=item.channel, action="play", title="[directo]", server="directo", url=url, thumbnail="",
plot="", show=item.show, folder=False))
encontrados.add(url)
from core import servertools
itemlist.extend(servertools.find_video_items(data=data))
for videoitem in itemlist:
videoitem.channel = item.channel
videoitem.action = "play"
videoitem.folder = False
videoitem.title = "[" + videoitem.server + "]"
if url_anterior:
# str.strip() removes a *set* of characters, so build the titles with explicit replaces
title_anterior = url_anterior.replace("/v/", "", 1).replace(".html", "").replace('-', ' ')
itemlist.append(Item(channel=item.channel, action="findvideos", title="Anterior: " + title_anterior,
url=CHANNEL_HOST + url_anterior, thumbnail=item.thumbnail, plot=item.plot, show=item.show,
fanart=item.thumbnail, folder=True))
if url_siguiente:
title_siguiente = url_siguiente.replace("/v/", "", 1).replace(".html", "").replace('-', ' ')
itemlist.append(Item(channel=item.channel, action="findvideos", title="Siguiente: " + title_siguiente,
url=CHANNEL_HOST + url_siguiente, thumbnail=item.thumbnail, plot=item.plot, show=item.show,
fanart=item.thumbnail, folder=True))
return itemlist
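# Illustrative sketch (not part of the channel): the manual "%3A"/"%2F"
# replaces above are a narrow form of percent-decoding; the stdlib covers
# the general case. Sample URL taken from the comments above:
#
#   import urllib
#   encoded = "http%3A%2F%2Fwww.animeid.moe%2Fstream%2F41TLmCj7_3q4BQLnfsban7%2F1440956023.mp4"
#   urllib.unquote(encoded)
#   # -> 'http://www.animeid.moe/stream/41TLmCj7_3q4BQLnfsban7/1440956023.mp4'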

28
kodi/channels/animeshd.json Executable file
View File

@@ -0,0 +1,28 @@
{
"id": "animeshd",
"name": "AnimesHD",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "https://s21.postimg.org/b43i3ljav/animeshd.png",
"banner": "https://s4.postimg.org/lulxulmql/animeshd-banner.png",
"version": 1,
"changes": [
{
"date": "03/06/2017",
"description": "limpieza de codigo"
},
{
"date": "25/05/2017",
"description": "cambios esteticos"
},
{
"date": "19/05/2017",
"description": "First release"
}
],
"categories": [
"latino",
"anime"
]
}

190
kodi/channels/animeshd.py Executable file
View File

@@ -0,0 +1,190 @@
# -*- coding: utf-8 -*-
import re
import urllib
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
tgenero = {"Comedia": "https://s7.postimg.org/ne9g9zgwb/comedia.png",
"Drama": "https://s16.postimg.org/94sia332d/drama.png",
"Acción": "https://s3.postimg.org/y6o9puflv/accion.png",
"Aventura": "https://s10.postimg.org/6su40czih/aventura.png",
"Romance": "https://s15.postimg.org/fb5j8cl63/romance.png",
"Ciencia ficción": "https://s9.postimg.org/diu70s7j3/cienciaficcion.png",
"Terror": "https://s7.postimg.org/yi0gij3gb/terror.png",
"Fantasía": "https://s13.postimg.org/65ylohgvb/fantasia.png",
"Misterio": "https://s1.postimg.org/w7fdgf2vj/misterio.png",
"Crimen": "https://s4.postimg.org/6z27zhirx/crimen.png",
"Hentai": "https://s29.postimg.org/aamrngu2f/hentai.png",
"Magia": "https://s9.postimg.org/nhkfzqffj/magia.png",
"Psicológico": "https://s13.postimg.org/m9ghzr86f/psicologico.png",
"Sobrenatural": "https://s9.postimg.org/6hxbvd4ov/sobrenatural.png",
"Torneo": "https://s2.postimg.org/ajoxkk9ih/torneo.png",
"Thriller": "https://s22.postimg.org/5y9g0jsu9/thriller.png",
"Otros": "https://s30.postimg.org/uj5tslenl/otros.png"}
host = "http://www.animeshd.tv"
headers = [['User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Ultimas",
action="lista",
thumbnail='https://s22.postimg.org/cb7nmhwv5/ultimas.png',
fanart='https://s22.postimg.org/cb7nmhwv5/ultimas.png',
url=host + '/ultimos'
))
itemlist.append(item.clone(title="Todas",
action="lista",
thumbnail='https://s18.postimg.org/fwvaeo6qh/todas.png',
fanart='https://s18.postimg.org/fwvaeo6qh/todas.png',
url=host + '/buscar?t=todos&q='
))
itemlist.append(item.clone(title="Generos",
action="generos",
url=host,
thumbnail='https://s3.postimg.org/5s9jg2wtf/generos.png',
fanart='https://s3.postimg.org/5s9jg2wtf/generos.png'
))
itemlist.append(item.clone(title="Buscar",
action="search",
url=host + '/buscar?t=todos&q=',
thumbnail='https://s30.postimg.org/pei7txpa9/buscar.png',
fanart='https://s30.postimg.org/pei7txpa9/buscar.png'
))
return itemlist
def get_source(url):
logger.info()
data = httptools.downloadpage(url).data
data = re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}|"|\(|\)', "", data)
return data
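# Illustrative example (made-up input): get_source() strips quotes,
# parentheses and repeated whitespace, which is why the patterns below
# match unquoted attributes:
#
#   re.sub(r'\n|\r|\t|&nbsp;|<br>|\s{2,}|"|\(|\)', "", '<a href="/serie/x">')
#   # -> '<a href=/serie/x>'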
def lista(item):
logger.info()
itemlist = []
post = ''
if item.extra in ['episodios']:
post = {'tipo': 'episodios', '_token': 'rAqVX74O9HVHFFigST3M9lMa5VL7seIO7fT8PBkl'}
post = urllib.urlencode(post)
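# NOTE: "post" is built here but never sent; get_source() issues a plain GET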
data = get_source(item.url)
patron = 'class=anime><div class=cover style=background-image: url(.*?)>.*?<a href=(.*?)><h2>(.*?)<\/h2><\/a><\/div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
url = scrapedurl
thumbnail = host + scrapedthumbnail
title = scrapedtitle
itemlist.append(item.clone(action='episodios',
title=title,
url=url,
thumbnail=thumbnail,
contentSerieName=title
))
# Pagination
next_page = scrapertools.find_single_match(data,
'<li class=active><span>.*?<\/span><\/li><li><a href=(.*?)>.*?<\/a><\/li>')
next_page_url = scrapertools.decodeHtmlentities(next_page)
if next_page_url != "":
itemlist.append(Item(channel=item.channel,
action="lista",
title=">> Página siguiente",
url=next_page_url,
thumbnail='https://s16.postimg.org/9okdu7hhx/siguiente.png'
))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = item.url + texto
try:
if texto != '':
return lista(item)
else:
return []
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def generos(item):
logger.info()
itemlist = []
data = get_source(item.url)
patron = '<li class=><a href=http:\/\/www\.animeshd\.tv\/genero\/(.*?)>(.*?)<\/a><\/li>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
title = scrapertools.decodeHtmlentities(scrapedtitle)
if title == 'Recuentos de la vida':
title = 'Otros'
genero = scrapertools.decodeHtmlentities(scrapedurl)
thumbnail = ''
if title in tgenero:
thumbnail = tgenero[title]
url = 'http://www.animeshd.tv/genero/%s' % genero
itemlist.append(item.clone(action='lista', title=title, url=url, thumbnail=thumbnail))
return itemlist
def episodios(item):
logger.info()
itemlist = []
data = get_source(item.url)
patron = '<li id=epi-.*? class=list-group-item ><a href=(.*?) class=badge.*?width=25 title=(.*?)> <\/span>(.*?)<\/li>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedlang, scrapedtitle in matches:
language = scrapedlang
title = scrapedtitle + ' (%s)' % language
url = scrapedurl
itemlist.append(item.clone(title=title, url=url, action='findvideos', language=language))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
data = get_source(item.url)
patron = '<iframe.*?src=(.*?) frameborder=0'
matches = re.compile(patron, re.DOTALL).findall(data)
for video_url in matches:
data = get_source(video_url)
data = data.replace("'", '')
patron = 'file:(.*?),label:(.*?),type'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedquality in matches:
url = scrapedurl
quality = scrapedquality
title = item.contentSerieName + ' (%s)' % quality
itemlist.append(item.clone(action='play', title=title, url=url, quality=quality))
return itemlist

24
kodi/channels/anitoonstv.json Executable file
View File

@@ -0,0 +1,24 @@
{
"id": "anitoonstv",
"name": "AniToons TV",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/9Zu5NBc.png",
"banner": "http://i.imgur.com/JQSXCaB.png",
"version": 1,
"changes": [
{
"date": "13/06/2017",
"description": "Arreglado problema en nombre de servidores"
},
{
"date": "02/06/2017",
"description": "Primera Versión"
}
],
"categories": [
"tvshow",
"latino"
]
}

173
kodi/channels/anitoonstv.py Executable file
View File

@@ -0,0 +1,173 @@
# -*- coding: utf-8 -*-
import re
from channels import renumbertools
from channelselector import get_thumb
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
host = "http://www.anitoonstv.com"
def mainlist(item):
logger.info()
thumb_series = get_thumb("thumb_channels_tvshow.png")
itemlist = list()
itemlist.append(Item(channel=item.channel, action="lista", title="Anime", url=host,
thumbnail=thumb_series))
itemlist.append(Item(channel=item.channel, action="lista", title="Series Animadas", url=host,
thumbnail=thumb_series))
itemlist.append(Item(channel=item.channel, action="lista", title="Novedades", url=host,
thumbnail=thumb_series))
itemlist.append(Item(channel=item.channel, action="lista", title="Pokemon", url=host,
thumbnail=thumb_series))
itemlist = renumbertools.show_option(item.channel, itemlist)
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
if 'Novedades' in item.title:
patron_cat = '<div class="activos"><h3>(.+?)<\/h2><\/a><\/div>'
patron = '<a href="(.+?)"><h2><span>(.+?)<\/span>'
else:
patron_cat = '<li><a href=.+?>'
patron_cat += str(item.title)
patron_cat += '<\/a><div>(.+?)<\/div><\/li>'
patron = "<a href='(.+?)'>(.+?)<\/a>"
data = scrapertools.find_single_match(data, patron_cat)
matches = scrapertools.find_multiple_matches(data, patron)
for link, name in matches:
if "Novedades" in item.title:
url = link
title = name.capitalize()
else:
url = host + link
title = name
if ":" in title:
cad = title.split(":")
show = cad[0]
else:
if "(" in title:
cad = title.split("(")
if "Super" in title:
show = cad[1]
show = show.replace(")", "")
else:
show = cad[0]
else:
show = title
if "&" in show:
cad = title.split("xy")
show = cad[0]
itemlist.append(
item.clone(title=title, url=url, plot=show, action="episodios", show=show,
context=renumbertools.context(item)))
tmdb.set_infoLabels(itemlist)
return itemlist
def episodios(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
patron = '<div class="pagina">(.+?)<\/div><div id="fade".+?>'
data = scrapertools.find_single_match(data, patron)
patron_caps = "<a href='(.+?)'>Capitulo: (.+?) - (.+?)<\/a>"
matches = scrapertools.find_multiple_matches(data, patron_caps)
show = scrapertools.find_single_match(data, '<span>Titulo.+?<\/span>(.+?)<br><span>')
scrapedthumbnail = scrapertools.find_single_match(data, "<img src='(.+?)'.+?>")
scrapedplot = scrapertools.find_single_match(data, '<span>Descripcion.+?<\/span>(.+?)<br>')
temp = 0
for link, cap, name in matches:
if int(cap) == 1:
temp = temp + 1
if int(cap) < 10:
cap = "0" + cap
season = temp
episode = int(cap)
season, episode = renumbertools.numbered_for_tratk(
item.channel, item.show, season, episode)
date = name
title = "{0}x{1:02d} {2} ({3})".format(
season, episode, "Episodio " + str(episode), date)
# title = str(temp)+"x"+cap+" "+name
url = host + "/" + link
if "NO DISPONIBLE" in name:
name = name
else:
itemlist.append(Item(channel=item.channel, action="findvideos", title=title, thumbnail=scrapedthumbnail,
plot=scrapedplot, url=url, show=show))
if config.get_videolibrary_support() and len(itemlist) > 0:
itemlist.append(Item(channel=item.channel, title="Añadir esta serie a la videoteca", url=item.url,
action="add_serie_to_library", extra="episodios", show=show))
return itemlist
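# Worked example of the season inference above (hypothetical chapter list):
# a new season starts every time the chapter counter drops back to 1.
#
#   caps = [1, 2, 3, 1, 2]
#   temp = 0
#   for cap in caps:
#       if cap == 1:
#           temp += 1
#       print "%sx%02d" % (temp, cap)   # 1x01 1x02 1x03 2x01 2x02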
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data1 = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
data_vid = scrapertools.find_single_match(data1, '<div class="videos">(.+?)<\/div><div .+?>')
# name = scrapertools.find_single_match(data,'<span>Titulo.+?<\/span>([^<]+)<br>')
scrapedplot = scrapertools.find_single_match(data, '<br><span>Descrip.+?<\/span>([^<]+)<br>')
scrapedthumbnail = scrapertools.find_single_match(data, '<div class="caracteristicas"><img src="([^<]+)">')
itemla = scrapertools.find_multiple_matches(data_vid, '<div class="serv">.+?-(.+?)-(.+?)<\/div><.+? src="(.+?)"')
for server, quality, url in itemla:
if "Calidad Alta" in quality:
quality = quality.replace("Calidad Alta", "HQ")
server = server.lower()
server = server.strip()
if "ok" in server:
server = 'okru'
itemlist.append(
item.clone(url=url, action="play", server=server, contentQuality=quality, thumbnail=scrapedthumbnail,
plot=scrapedplot, title="Enlace encontrado en %s: [%s ]" % (server.capitalize(), quality)))
return itemlist
def play(item):
logger.info()
itemlist = []
# Look for the video on the given server ...
devuelve = servertools.findvideosbyserver(item.url, item.server)
if not devuelve:
# ...if it is not found there, search every available server
devuelve = servertools.findvideos(item.url, skip=True)
if devuelve:
# logger.debug(devuelve)
itemlist.append(Item(channel=item.channel, title=item.contentTitle, action="play", server=devuelve[0][2],
url=devuelve[0][1], thumbnail=item.thumbnail, folder=False))
return itemlist

59
kodi/channels/areadocumental.json Executable file
View File

@@ -0,0 +1,59 @@
{
"id": "areadocumental",
"name": "Area-Documental",
"language": "es",
"adult": false,
"active": true,
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "16/02/2017",
"description": "Canal reparado ya que no funcionaban los enlaces"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"banner": "areadocumental.png",
"thumbnail": "areadocumental.png",
"categories": [
"documentary"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_documentales",
"type": "bool",
"label": "Incluir en Novedades - Documentales",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": [
"Sin color",
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
}
]
}

205
kodi/channels/areadocumental.py Executable file
View File

@@ -0,0 +1,205 @@
# -*- coding: utf-8 -*-
import urllib
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
host = "http://www.area-documental.com"
__perfil__ = int(config.get_setting('perfil', "areadocumental"))
# Set the colour profile
perfil = [['', '', ''],
['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']]
color1, color2, color3 = perfil[__perfil__]
def mainlist(item):
logger.info()
itemlist = []
item.text_color = color1
itemlist.append(item.clone(title="Novedades", action="entradas",
url="http://www.area-documental.com/resultados-reciente.php?buscar=&genero=",
fanart="http://i.imgur.com/Q7fsFI6.png"))
itemlist.append(item.clone(title="Destacados", action="entradas",
url="http://www.area-documental.com/resultados-destacados.php?buscar=&genero=",
fanart="http://i.imgur.com/Q7fsFI6.png"))
itemlist.append(item.clone(title="Categorías", action="cat", url="http://www.area-documental.com/index.php",
fanart="http://i.imgur.com/Q7fsFI6.png"))
itemlist.append(item.clone(title="Ordenados por...", action="indice", fanart="http://i.imgur.com/Q7fsFI6.png"))
itemlist.append(item.clone(title="Buscar...", action="search"))
itemlist.append(item.clone(title="Configurar canal", action="configuracion", text_color="gold"))
return itemlist
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def search(item, texto):
logger.info()
item.url = "http://www.area-documental.com/resultados.php?buscar=%s&genero=&x=0&y=0" % texto
item.action = "entradas"
try:
itemlist = entradas(item)
return itemlist
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == "documentales":
item.url = "http://www.area-documental.com/resultados-reciente.php?buscar=&genero="
item.action = "entradas"
itemlist = entradas(item)
if itemlist[-1].action == "entradas":
itemlist.pop()
# The exception is caught so that a failing channel does not break the "newest" listing
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def indice(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Título", action="entradas",
url="http://www.area-documental.com/resultados-titulo.php?buscar=&genero="))
itemlist.append(item.clone(title="Año", action="entradas",
url="http://www.area-documental.com/resultados-anio.php?buscar=&genero="))
return itemlist
def cat(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
bloque = scrapertools.find_single_match(data, '<ul class="menu">(.*?)</nav>')
matches = scrapertools.find_multiple_matches(bloque, "<li>.*?<a href='([^']+)'.*?>(.*?)</a>")
for scrapedurl, scrapedtitle in matches:
scrapedurl = host + "/" + scrapedurl
if not "span" in scrapedtitle:
scrapedtitle = "[COLOR gold] **" + scrapedtitle + "**[/COLOR]"
itemlist.append(item.clone(action="entradas", title=scrapedtitle, url=scrapedurl))
else:
scrapedtitle = scrapertools.htmlclean(scrapedtitle)
itemlist.append(item.clone(action="entradas", title=scrapedtitle, url=scrapedurl))
return itemlist
def entradas(item):
logger.info()
itemlist = []
item.text_color = color2
data = httptools.downloadpage(item.url).data
data = scrapertools.unescape(data)
next_page = scrapertools.find_single_match(data, '<a href="([^"]+)"> ></a>')
if next_page != "":
data2 = scrapertools.unescape(httptools.downloadpage(host + next_page).data)
data += data2
else:
data2 = ""
data = data.replace("\n", "").replace("\t", "")
patron = '<div id="peliculas">.*?<a href="([^"]+)".*?<img src="([^"]+)".*?' \
'target="_blank">(.*?)</a>(.*?)<p>(.*?)</p>' \
'.*?</strong>: (.*?)<strong>.*?</strong>(.*?)</div>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, scrapedtitle, year, scrapedplot, genero, extra in matches:
infolab = {'plot': scrapedplot, 'genre': genero}
scrapedurl = host + "/" + scrapedurl
scrapedthumbnail = host + urllib.quote(scrapedthumbnail)
title = scrapedtitle
if "full_hd" in extra:
scrapedtitle += " [COLOR gold][3D][/COLOR]"
elif "720" in extra:
scrapedtitle += " [COLOR gold][720p][/COLOR]"
else:
scrapedtitle += " [COLOR gold][SD][/COLOR]"
year = year.replace("\xc2\xa0", "").replace(" ", "")
if not year.isspace() and year != "":
infolab['year'] = int(year)
scrapedtitle += " (" + year + ")"
itemlist.append(item.clone(action="findvideos", title=scrapedtitle, fulltitle=title,
url=scrapedurl, thumbnail=scrapedthumbnail, infoLabels=infolab))
next_page = scrapertools.find_single_match(data2, '<a href="([^"]+)"> ></a>')
if next_page:
itemlist.append(item.clone(action="entradas", title=">> Página Siguiente", url=host + next_page,
text_color=color3))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
subs = scrapertools.find_multiple_matches(data, 'file: "(/webvtt[^"]+)".*?label: "([^"]+)"')
patron = 'file:\s*"(http://[^/]*/Videos/[^"]+)",\s*label:\s*"([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for url, quality in matches:
url += "|User-Agent=%s&Referer=%s" \
% ("Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0", item.url)
for url_sub, label in subs:
url_sub = host + urllib.quote(url_sub)
title = "Ver video en [[COLOR %s]%s[/COLOR]] Sub %s" % (color3, quality, label)
itemlist.append(item.clone(action="play", server="directo", title=title,
url=url, subtitle=url_sub, extra=item.url, calidad=quality))
return itemlist
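# Note: the "|User-Agent=...&Referer=..." suffix appended above is the usual
# Kodi convention for attaching HTTP headers to a playable URL; the player
# splits on "|" and sends the key=value pairs as request headers.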
def play(item):
logger.info()
itemlist = []
try:
from core import filetools
ficherosubtitulo = filetools.join(config.get_data_path(), 'subtitulo_areadocu.srt')
if filetools.exists(ficherosubtitulo):
try:
filetools.remove(ficherosubtitulo)
except IOError:
logger.error("Error al eliminar el archivo " + ficherosubtitulo)
raise
data = httptools.downloadpage(item.subtitle, headers={'Referer': item.extra}).data
filetools.write(ficherosubtitulo, data)
subtitle = ficherosubtitulo
except:
subtitle = ""
logger.error("Error al descargar el subtítulo")
extension = item.url.rsplit("|", 1)[0][-4:]
itemlist.append(['%s %s [directo]' % (extension, item.calidad), item.url, 0, subtitle])
# itemlist.append(item.clone(subtitle=subtitle))
return itemlist

581
kodi/channels/autoplay.py Executable file
View File

@@ -0,0 +1,581 @@
# -*- coding: utf-8 -*-
import os
from core import channeltools
from core import config
from core import jsontools
from core import logger
from core.item import Item
from platformcode import platformtools
__channel__ = "autoplay"
autoplay_node = {}
def context():
'''
Adds the "Configurar AutoPlay" option to the context menu
:return:
'''
_context = ""
if config.is_xbmc():
_context = [{"title": "Configurar AutoPlay",
"action": "autoplay_config",
"channel": "autoplay"}]
return _context
context = context()
def show_option(channel, itemlist, text_color='yellow', thumbnail=None, fanart=None):
'''
Appends the "Configurar AutoPlay" option to the given list
:param channel: str
:param itemlist: list (list where the Configure AutoPlay option is added)
:param text_color: str (colour for the Configure AutoPlay option text)
:param thumbnail: str (address of the thumbnail for the Configure AutoPlay option)
:return:
'''
logger.info()
if thumbnail is None:
thumbnail = 'https://s7.postimg.org/65ooga04b/Auto_Play.png'
if fanart is None:
fanart = 'https://s7.postimg.org/65ooga04b/Auto_Play.png'
plot_autoplay = 'AutoPlay permite auto reproducir los enlaces directamente, basándose en la configuracion de tus ' \
'servidores y calidades preferidas. '
itemlist.append(
Item(channel=__channel__,
title="Configurar AutoPlay",
action="autoplay_config",
text_color=text_color,
thumbnail=thumbnail,
fanart=fanart,
plot=plot_autoplay,
from_channel=channel
))
return itemlist
def start(itemlist, item):
'''
Main method that auto-plays the links
- If the "customize" option is active, the user-defined settings are used.
- Otherwise it tries to play any link available in the preferred language.
:param itemlist: list (items ready to play, i.e. with action='play')
:param item: item (the channel's main item)
:return: tries to auto-play; if it fails, returns the itemlist it received
'''
logger.info()
global autoplay_node
if not config.is_xbmc():
platformtools.dialog_notification('AutoPlay ERROR', 'Sólo disponible para XBMC/Kodi')
return itemlist
else:
if not autoplay_node:
# Get the AUTOPLAY node from the json
autoplay_node = jsontools.get_node_from_file('autoplay', 'AUTOPLAY')
# Add servers and qualities that were not yet listed to autoplay_node
new_options = check_value(item.channel, itemlist)
# Get the channel node from autoplay_node
channel_node = autoplay_node.get(item.channel, {})
# Get the autoplay settings for this channel
settings_node = channel_node.get('settings', {})
if settings_node['active']:
url_list_valid = []
autoplay_list = []
favorite_servers = []
favorite_quality = []
# Save the current value of "Accion al seleccionar vídeo:" from preferences
user_config_setting = config.get_setting("default_action")
# Enable the "Ver en calidad alta" action (if the server returns more than one quality, e.g. gdrive)
if user_config_setting != 2:
config.set_setting("default_action", 2)
# Report that AutoPlay is active
platformtools.dialog_notification('AutoPlay Activo', '', sound=False)
# Priorities when sorting itemlist:
# 0: Servers and qualities
# 1: Qualities and servers
# 2: Servers only
# 3: Qualities only
# 4: Do not sort
if settings_node['custom_servers'] and settings_node['custom_quality']:
priority = settings_node['priority']  # 0: Servers and qualities, or 1: Qualities and servers
elif settings_node['custom_servers']:
priority = 2  # Servers only
elif settings_node['custom_quality']:
priority = 3  # Qualities only
else:
priority = 4  # Do not sort
# Get the lists of available servers and qualities from the AutoPlay json node
server_list = channel_node.get('servers', [])
quality_list = channel_node.get('quality', [])
# Store the text of each server and quality in lists, e.g. favorite_servers = ['openload',
# 'streamcloud']
for num in range(1, 4):
favorite_servers.append(channel_node['servers'][settings_node['server_%s' % num]])
favorite_quality.append(channel_node['quality'][settings_node['quality_%s' % num]])
# Filter the itemlist links that match the autoplay settings
for item in itemlist:
autoplay_elem = dict()
# Make sure this is a video item
if 'server' not in item:
continue
# Add the Configure AutoPlay option to the context menu
if 'context' not in item:
item.context = list()
if not filter(lambda x: x['action'] == 'autoplay_config', context):
item.context.append({"title": "Configurar AutoPlay",
"action": "autoplay_config",
"channel": "autoplay",
"from_channel": item.channel})
# If the item has no quality set, assign the 'default' quality
if item.quality == '':
item.quality = 'default'
# Build the list for the custom configuration
if priority < 2:  # 0: Servers and qualities, or 1: Qualities and servers
# if the server and quality are not in the favourites lists, or the url is duplicated,
# discard the item
if item.server not in favorite_servers or item.quality not in favorite_quality \
or item.url in url_list_valid:
continue
autoplay_elem["indice_server"] = favorite_servers.index(item.server)
autoplay_elem["indice_quality"] = favorite_quality.index(item.quality)
elif priority == 2:  # Servers only
# if the server is not in the favourites list, or the url is duplicated,
# discard the item
if item.server not in favorite_servers or item.url in url_list_valid:
continue
autoplay_elem["indice_server"] = favorite_servers.index(item.server)
elif priority == 3:  # Qualities only
# if the quality is not in the favourites list, or the url is duplicated,
# discard the item
if item.quality not in favorite_quality or item.url in url_list_valid:
continue
autoplay_elem["indice_quality"] = favorite_quality.index(item.quality)
else:  # Do not sort
# if the url is duplicated, discard the item
if item.url in url_list_valid:
continue
# If the item got this far, add it to the valid urls and to autoplay_list
url_list_valid.append(item.url)
autoplay_elem['videoitem'] = item
# autoplay_elem['server'] = item.server
# autoplay_elem['quality'] = item.quality
autoplay_list.append(autoplay_elem)
# Sort according to the chosen priority
if priority == 0:  # Servers and qualities
autoplay_list.sort(key=lambda orden: (orden['indice_server'], orden['indice_quality']))
elif priority == 1:  # Qualities and servers
autoplay_list.sort(key=lambda orden: (orden['indice_quality'], orden['indice_server']))
elif priority == 2:  # Servers only
autoplay_list.sort(key=lambda orden: orden['indice_server'])
elif priority == 3:  # Qualities only
autoplay_list.sort(key=lambda orden: orden['indice_quality'])
# If the autoplay list has elements, try to play each one until one works
# or all of them fail
if autoplay_list:
played = False
max_intentos = 5
max_intentos_servers = {}
# If something is already playing, stop it
if platformtools.is_playing():
platformtools.stop_video()
for autoplay_elem in autoplay_list:
if not platformtools.is_playing() and not played:
videoitem = autoplay_elem['videoitem']
if videoitem.server not in max_intentos_servers:
max_intentos_servers[videoitem.server] = max_intentos
# If this server reached the maximum number of attempts, skip to the next one
if max_intentos_servers[videoitem.server] == 0:
continue
lang = " "
if hasattr(videoitem, 'language') and videoitem.language != "":
lang = " '%s' " % videoitem.language
platformtools.dialog_notification("AutoPlay", "%s%s%s" % (
videoitem.server.upper(), lang, videoitem.quality.upper()), sound=False)
# TODO videoitem.server is the server id, but it might not be the name!!!
# Try to play the links
# If the channel has its own play method, use it
channel = __import__('channels.%s' % item.channel, None, None, ["channels.%s" % item.channel])
if hasattr(channel, 'play'):
resolved_item = getattr(channel, 'play')(videoitem)
if len(resolved_item) > 0:
if isinstance(resolved_item[0], list):
videoitem.video_urls = resolved_item
else:
videoitem = resolved_item[0]
# otherwise play it directly
platformtools.play_video(videoitem)
try:
if platformtools.is_playing():
played = True
break
except:  # TODO avoid the report that the connector failed or the video was not found
logger.debug(str(len(autoplay_list)))
# If we got this far, the item could not be played
max_intentos_servers[videoitem.server] -= 1
# If this server reached the maximum number of attempts,
# ask whether to keep trying or to ignore all of its links
if max_intentos_servers[videoitem.server] == 0:
text = "Parece que los enlaces de %s no estan funcionando." % videoitem.server.upper()
if not platformtools.dialog_yesno("AutoPlay", text,
"¿Desea ignorar todos los enlaces de este servidor?"):
max_intentos_servers[videoitem.server] = max_intentos
else:
platformtools.dialog_notification('AutoPlay No Fue Posible', 'No Hubo Coincidencias')
if new_options:
platformtools.dialog_notification("AutoPlay", "Nueva Calidad/Servidor disponible en la "
"configuracion", sound=False)
# Restore, if needed, the previous value of "Accion al seleccionar vídeo:" in preferences
if user_config_setting != 2:
config.set_setting("default_action", user_config_setting)
# return the list of links for manual selection
return itemlist
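# Illustrative sketch of the sorting above (made-up indices): with
# priority == 0, items are ordered by favourite server first, quality second.
#
#   autoplay_list = [{'indice_server': 1, 'indice_quality': 0},
#                    {'indice_server': 0, 'indice_quality': 1}]
#   autoplay_list.sort(key=lambda orden: (orden['indice_server'], orden['indice_quality']))
#   # -> the element with indice_server == 0 is tried first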
def init(channel, list_servers, list_quality):
'''
Checks whether the channel exists in the AutoPlay configuration file and adds it if not.
This function must be called when entering any channel that supports AutoPlay.
:param channel: (str) channel id
:param list_servers: (list) initial list of valid servers for the channel. It does not need to be complete,
since the list of valid servers is updated dynamically.
:param list_quality: (list) initial list of valid qualities for the channel. It does not need to be complete,
since the list of valid qualities is updated dynamically.
:return: (bool) True if the initialisation succeeded.
'''
logger.info()
change = False
result = True
if not config.is_xbmc():
platformtools.dialog_notification('AutoPlay ERROR', 'Sólo disponible para XBMC/Kodi')
result = False
else:
autoplay_path = os.path.join(config.get_data_path(), "settings_channels", 'autoplay_data.json')
if os.path.exists(autoplay_path):
autoplay_node = jsontools.get_node_from_file('autoplay', "AUTOPLAY")
else:
change = True
autoplay_node = {"AUTOPLAY": {}}
if channel not in autoplay_node:
change = True
# Make sure there are no duplicate qualities or servers
list_servers = list(set(list_servers))
list_quality = list(set(list_quality))
# Create the channel node and add it
channel_node = {"servers": list_servers,
"quality": list_quality,
"settings": {
"active": False,
"custom_servers": False,
"custom_quality": False,
"priority": 0}}
for n in range(1, 4):
s = c = 0
if len(list_servers) >= n:
s = n - 1
if len(list_quality) >= n:
c = n - 1
channel_node["settings"]["server_%s" % n] = s
channel_node["settings"]["quality_%s" % n] = c
autoplay_node[channel] = channel_node
if change:
result, json_data = jsontools.update_node(autoplay_node, 'autoplay', 'AUTOPLAY')
if result:
heading = "AutoPlay Disponible"
msj = "Seleccione '<Configurar AutoPlay>' para activarlo."
icon = 0
else:
heading = "Error al iniciar AutoPlay"
msj = "Consulte su log para obtener mas información."
icon = 1
platformtools.dialog_notification(heading, msj, icon, sound=False)
return result
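# Hypothetical usage from a channel that supports AutoPlay (channel id and
# seed lists are examples only); check_value() extends the lists later:
#
#   from channels import autoplay
#   autoplay.init(item.channel, ['openload', 'streamcloud'], ['720p', 'SD'])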
def check_value(channel, itemlist):
'''Checks whether the servers and qualities of the given items exist in the
channel's lists; anything missing is added to the list in the json
:param channel: str
:param itemlist: list (items whose servers and qualities are checked)
:return: bool (True if the json node changed)
'''
logger.info()
global autoplay_node
change = False
if not autoplay_node:
# Get the AUTOPLAY node from the json
autoplay_node = jsontools.get_node_from_file('autoplay', 'AUTOPLAY')
channel_node = autoplay_node.get(channel)
server_list = channel_node.get('servers')
if not server_list:
server_list = channel_node['servers'] = list()
quality_list = channel_node.get('quality')
if not quality_list:
quality_list = channel_node['quality'] = list()
for item in itemlist:
if item.server not in server_list:
server_list.append(item.server)
change = True
if item.quality not in quality_list:
quality_list.append(item.quality)
change = True
if change:
change, json_data = jsontools.update_node(autoplay_node, 'autoplay', 'AUTOPLAY')
return change
def autoplay_config(item):
logger.info()
global autoplay_node
dict_values = {}
list_controls = []
channel_parameters = channeltools.get_channel_parameters(item.from_channel)
channel_name = channel_parameters['title']
if not autoplay_node:
# Get the AUTOPLAY node from the json
autoplay_node = jsontools.get_node_from_file('autoplay', 'AUTOPLAY')
channel_node = autoplay_node.get(item.from_channel, {})
settings_node = channel_node.get('settings', {})
allow_option = True
active_settings = {"id": "active", "label": "AutoPlay (activar/desactivar la auto-reproduccion)",
"color": "0xffffff99", "type": "bool", "default": False, "enabled": allow_option,
"visible": allow_option}
list_controls.append(active_settings)
dict_values['active'] = settings_node.get('active', False)
# Language
status_language = config.get_setting("filter_languages", item.from_channel)
if not status_language:
status_language = 0
set_language = {"id": "language", "label": "Idioma para AutoPlay (Opcional)", "color": "0xffffff99",
"type": "list", "default": 0, "enabled": "eq(-1,true)", "visible": True,
"lvalues": get_languages(item.from_channel)}
list_controls.append(set_language)
dict_values['language'] = status_language
separador = {"id": "label", "label": " "
"_________________________________________________________________________________________",
"type": "label", "enabled": True, "visible": True}
list_controls.append(separador)
# Preferred servers section
server_list = channel_node.get("servers", [])
if not server_list:
enabled = False
server_list = ["No disponible"]
else:
enabled = "eq(-3,true)"
custom_servers_settings = {"id": "custom_servers", "label": " Servidores Preferidos", "color": "0xff66ffcc",
"type": "bool", "default": False, "enabled": enabled, "visible": True}
list_controls.append(custom_servers_settings)
if dict_values['active'] and enabled:
dict_values['custom_servers'] = settings_node.get('custom_servers', False)
else:
dict_values['custom_servers'] = False
for num in range(1, 4):
pos1 = num + 3
default = num - 1
if default > len(server_list) - 1:
default = 0
set_servers = {"id": "server_%s" % num, "label": u" \u2665 Servidor Favorito %s" % num,
"color": "0xfffcab14", "type": "list", "default": default,
"enabled": "eq(-%s,true)+eq(-%s,true)" % (pos1, num), "visible": True,
"lvalues": server_list}
list_controls.append(set_servers)
dict_values["server_%s" % num] = settings_node.get("server_%s" % num, 0)
if settings_node.get("server_%s" % num, 0) > len(server_list) - 1:
dict_values["server_%s" % num] = 0
# Preferred qualities section
quality_list = channel_node.get("quality", [])
if not quality_list:
enabled = False
quality_list = ["No disponible"]
else:
enabled = "eq(-7,true)"
custom_quality_settings = {"id": "custom_quality", "label": " Calidades Preferidas", "color": "0xff66ffcc",
"type": "bool", "default": False, "enabled": enabled, "visible": True}
list_controls.append(custom_quality_settings)
if dict_values['active'] and enabled:
dict_values['custom_quality'] = settings_node.get('custom_quality', False)
else:
dict_values['custom_quality'] = False
for num in range(1, 4):
pos1 = num + 7
default = num - 1
if default > len(quality_list) - 1:
default = 0
set_quality = {"id": "quality_%s" % num, "label": u" \u2665 Calidad Favorita %s" % num,
"color": "0xfff442d9", "type": "list", "default": default,
"enabled": "eq(-%s,true)+eq(-%s,true)" % (pos1, num), "visible": True,
"lvalues": quality_list}
list_controls.append(set_quality)
dict_values["quality_%s" % num] = settings_node.get("quality_%s" % num, 0)
if settings_node.get("quality_%s" % num, 0) > len(quality_list) - 1:
dict_values["quality_%s" % num] = 0
# Priorities section
priority_list = ["Servidor y Calidad", "Calidad y Servidor"]
set_priority = {"id": "priority", "label": " Prioridad (Indica el orden para Auto-Reproducir)",
"color": "0xffffff99", "type": "list", "default": 0,
"enabled": True, "visible": "eq(-4,true)+eq(-8,true)+eq(-11,true)", "lvalues": priority_list}
list_controls.append(set_priority)
dict_values["priority"] = settings_node.get("priority", 0)
# Open the settings dialog
platformtools.show_channel_settings(list_controls=list_controls, dict_values=dict_values, callback='save',
item=item, caption='%s - AutoPlay' % channel_name)
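# Note: the "eq(-N,true)" strings above appear to be relative setting
# conditions: a control is enabled when the control N positions above it is
# true, which is how each favourite slot is chained to its checkbox.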
def save(item, dict_data_saved):
'''
Saves the data from the configuration window
:param item: item
:param dict_data_saved: dict
:return:
'''
logger.info()
global autoplay_node
if not autoplay_node:
# Get the AUTOPLAY node from the json
autoplay_node = jsontools.get_node_from_file('autoplay', 'AUTOPLAY')
channel_node = autoplay_node.get(item.from_channel)
config.set_setting("filter_languages", dict_data_saved.pop("language"), item.from_channel)
channel_node['settings'] = dict_data_saved
result, json_data = jsontools.update_node(autoplay_node, 'autoplay', 'AUTOPLAY')
return result
def get_languages(channel):
'''
Gets the languages from the channel json
:param channel: str
:return: list
'''
logger.info()
list_language = ['No filtrar']
list_controls, dict_settings = channeltools.get_channel_controls_settings(channel)
for control in list_controls:
if control["id"] == 'filter_languages':
list_language = control["lvalues"]
return list_language
def is_active():
'''
Returns a boolean indicating whether autoplay is active for the calling channel
:return: True if autoplay is active for the channel this is called from, False otherwise.
'''
logger.info()
global autoplay_node
if not config.is_xbmc():
return False
if not autoplay_node:
# Get the AUTOPLAY node from the json
autoplay_node = jsontools.get_node_from_file('autoplay', 'AUTOPLAY')
# Get the channel the call is made from
import inspect
module = inspect.getmodule(inspect.currentframe().f_back)
canal = module.__name__.split('.')[1]
logger.debug(canal)
# Get the channel node from autoplay_node
channel_node = autoplay_node.get(canal, {})
# Get the autoplay settings for this channel
settings_node = channel_node.get('settings', {})
return settings_node.get('active', False)
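# Note on the inspect trick above: the caller's channel id is derived from
# its module name, e.g. a call made from channels/animeshd.py yields
# module.__name__ == "channels.animeshd", so canal == "animeshd".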

37
kodi/channels/bajui.json Executable file
View File

@@ -0,0 +1,37 @@
{
"id": "bajui",
"name": "Bajui",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "bajui.png",
"banner": "bajui.png",
"fanart": "bajui.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"movie",
"tvshow",
"documentary",
"vos"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
}
]
}

270
kodi/channels/bajui.py Executable file
View File

@@ -0,0 +1,270 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, title="Películas", action="menupeliculas",
url="http://www.bajui.com/descargas/categoria/2/peliculas",
fanart=item.fanart))
itemlist.append(Item(channel=item.channel, title="Series", action="menuseries",
fanart=item.fanart))
itemlist.append(Item(channel=item.channel, title="Documentales", action="menudocumentales",
fanart=item.fanart))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search",
fanart=item.fanart))
return itemlist
def menupeliculas(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, title="Películas - Novedades", action="peliculas", url=item.url,
fanart=item.fanart, viewmode="movie_with_plot"))
itemlist.append(
Item(channel=item.channel, title="Películas - A-Z", action="peliculas", url=item.url + "/orden:nombre",
fanart=item.fanart, viewmode="movie_with_plot"))
# <ul class="submenu2 subcategorias"><li ><a href="/descargas/subcategoria/4/br-scr-dvdscr">BR-Scr / DVDScr</a></li><li ><a href="/descargas/subcategoria/6/dvdr-full">DVDR - Full</a></li><li ><a href="/descargas/subcategoria/1/dvdrip-vhsrip">DVDRip / VHSRip</a></li><li ><a href="/descargas/subcategoria/3/hd">HD</a></li><li ><a href="/descargas/subcategoria/2/hdrip-bdrip">HDRip / BDRip</a></li><li ><a href="/descargas/subcategoria/35/latino">Latino</a></li><li ><a href="/descargas/subcategoria/5/ts-scr-cam">TS-Scr / CAM</a></li><li ><a href="/descargas/subcategoria/7/vos">VOS</a></li></ul>
data = scrapertools.cache_page(item.url)
data = scrapertools.get_match(data, '<ul class="submenu2 subcategorias">(.*?)</ul>')
patron = '<a href="([^"]+)">([^<]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title in matches:
scrapedurl = urlparse.urljoin(item.url, url)
itemlist.append(Item(channel=item.channel, title="Películas en " + title, action="peliculas", url=scrapedurl,
fanart=item.fanart, viewmode="movie_with_plot"))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url="", fanart=item.fanart))
return itemlist
def menuseries(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, title="Series - Novedades", action="peliculas",
url="http://www.bajui.com/descargas/categoria/3/series",
fanart=item.fanart, viewmode="movie_with_plot"))
itemlist.append(Item(channel=item.channel, title="Series - A-Z", action="peliculas",
url="http://www.bajui.com/descargas/categoria/3/series/orden:nombre",
fanart=item.fanart, viewmode="movie_with_plot"))
itemlist.append(Item(channel=item.channel, title="Series - HD", action="peliculas",
url="http://www.bajui.com/descargas/subcategoria/11/hd/orden:nombre",
fanart=item.fanart, viewmode="movie_with_plot"))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url="",
fanart=item.fanart))
return itemlist
def menudocumentales(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, title="Documentales - Novedades", action="peliculas",
url="http://www.bajui.com/descargas/categoria/7/docus-y-tv",
fanart=item.fanart, viewmode="movie_with_plot"))
itemlist.append(Item(channel=item.channel, title="Documentales - A-Z", action="peliculas",
url="http://www.bajui.com/descargas/categoria/7/docus-y-tv/orden:nombre",
fanart=item.fanart, viewmode="movie_with_plot"))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url="",
fanart=item.fanart))
return itemlist
# Al llamarse "search" la función, el launcher pide un texto a buscar y lo añade como parámetro
def search(item, texto, categoria=""):
logger.info(item.url + " search " + texto)
itemlist = []
url = item.url
texto = texto.replace(" ", "+")
logger.info("categoria: " + categoria + " url: " + url)
try:
item.url = "http://www.bajui.com/descargas/busqueda/%s"
item.url = item.url % texto
itemlist.extend(peliculas(item))
return itemlist
# The exception is caught so that a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def peliculas(item, paginacion=True):
logger.info()
url = item.url
# Download the page
data = scrapertools.cache_page(url)
patron = '<li id="ficha-\d+" class="ficha2[^<]+'
patron += '<div class="detalles-ficha"[^<]+'
patron += '<span class="nombre-det">Ficha\: ([^<]+)</span>[^<]+'
patron += '<span class="categoria-det">[^<]+</span>[^<]+'
patron += '<span class="descrip-det">(.*?)</span>[^<]+'
patron += '</div>.*?<a href="([^"]+)"[^<]+'
patron += '<img src="([^"]+)"'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for title, plot, url, thumbnail in matches:
scrapedtitle = title
scrapedplot = clean_plot(plot)
scrapedurl = urlparse.urljoin(item.url, url)
scrapedthumbnail = urlparse.urljoin("http://www.bajui.com/", thumbnail.replace("_m.jpg", "_g.jpg"))
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
# Add to the XBMC listing
itemlist.append(
Item(channel=item.channel, action="enlaces", title=scrapedtitle, fulltitle=title, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, extra=scrapedtitle, context="4|5",
fanart=item.fanart, viewmode="movie_with_plot"))
# Extract the paginator
patron = '<a href="([^"]+)" class="pagina pag_sig">Siguiente \&raquo\;</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
if len(matches) > 0:
scrapedurl = urlparse.urljoin("http://www.bajui.com/", matches[0])
pagitem = Item(channel=item.channel, action="peliculas", title=">> Página siguiente", url=scrapedurl,
fanart=item.fanart, viewmode="movie_with_plot")
if not paginacion:
itemlist.extend(peliculas(pagitem))
else:
itemlist.append(pagitem)
return itemlist
def clean_plot(scrapedplot):
scrapedplot = scrapedplot.replace("\n", "").replace("\r", "")
# Strip the technical-sheet labels (title, year, cast...) that precede the synopsis
labels = ["TÍTULO ORIGINAL", "AÑO", "Año", "DURACIÓN", "Duración", "PAIS", "PAÍS", "Pais", "País",
"DIRECTOR", "DIRECCIÓN", "Dirección", "REPARTO", "Reparto", "Interpretación", "GUIÓN", "Guión",
"MÚSICA", "Música", "FOTOGRAFÍA", "Fotografía", "PRODUCTORA", "Producción", "Montaje", "Vestuario",
"GÉNERO", "GENERO", "Genero", "Género", "PREMIOS"]
for label in labels:
scrapedplot = re.compile(label + "[^<]+<br />", re.DOTALL).sub("", scrapedplot)
# Drop the synopsis header itself and clean the remaining html
scrapedplot = scrapedplot.replace("SINOPSIS", "").replace("Sinopsis", "")
scrapedplot = scrapertools.htmlclean(scrapedplot)
return scrapedplot
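# Example with made-up input: the datasheet labels are stripped and only
# the synopsis text survives.
#
#   clean_plot("AÑO 2010<br />SINOPSIS Un documental sobre...")
#   # -> ' Un documental sobre...'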
def enlaces(item):
logger.info()
itemlist = []
data = scrapertools.cache_page(item.url)
try:
item.plot = scrapertools.get_match(data, '<span class="ficha-descrip">(.*?)</span>')
item.plot = clean_plot(item.plot)
except:
pass
try:
item.thumbnail = scrapertools.get_match(data, '<div class="ficha-imagen"[^<]+<img src="([^"]+)"')
item.thumbnail = urlparse.urljoin("http://www.bajui.com/", item.thumbnail)
except:
pass
'''
<div id="enlaces-34769"><img id="enlaces-cargando-34769" src="/images/cargando.gif" style="display:none;"/></div>
</li><li id="box-enlace-330690" class="box-enlace">
<div class="box-enlace-cabecera">
<div class="datos-usuario"><img class="avatar" src="images/avatars/116305_p.jpg" />Enlaces de:
<a class="nombre-usuario" href="/usuario/jerobien">jerobien</a> </div>
<div class="datos-act">Actualizado: Hace 8 minutos</div>
<div class="datos-boton-mostrar"><a id="boton-mostrar-330690" class="boton" href="javascript:mostrar_enlaces(330690,'b01de63028139fdd348d');">Mostrar enlaces</a></div>
<div class="datos-servidores"><div class="datos-servidores-cell"><img src="/images/servidores/ul.to.png" title="uploaded.com" border="0" alt="uploaded.com" /><img src="/images/servidores/bitshare.png" title="bitshare.com" border="0" alt="bitshare.com" /><img src="/images/servidores/freakshare.net.jpg" title="freakshare.com" border="0" alt="freakshare.com" /><img src="/images/servidores/letitbit.png" title="letitbit.net" border="0" alt="letitbit.net" /><img src="/images/servidores/turbobit.png" title="turbobit.net" border="0" alt="turbobit.net" /><img src="/images/servidores/rapidgator.png" title="rapidgator.net" border="0" alt="rapidgator.net" /><img src="/images/servidores/cloudzer.png" title="clz.to" border="0" alt="clz.to" /></div></div>
</div>
'''
patron = '<div class="box-enlace-cabecera"[^<]+'
patron += '<div class="datos-usuario"><img class="avatar" src="([^"]+)" />Enlaces[^<]+'
patron += '<a class="nombre-usuario" href="[^"]+">([^<]+)</a[^<]+</div>[^<]+'
patron += '<div class="datos-act">Actualizado. ([^<]+)</div>.*?'
patron += '<div class="datos-boton-mostrar"><a id="boton-mostrar-\d+" class="boton" href="javascript.mostrar_enlaces\((\d+)\,\'([^\']+)\'[^>]+>Mostrar enlaces</a></div>[^<]+'
patron += '<div class="datos-servidores"><div class="datos-servidores-cell">(.*?)</div></div>'
matches = re.compile(patron, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
logger.debug("matches=" + repr(matches))
for thumbnail, usuario, fecha, id, id2, servidores in matches:
# <img src="/images/servidores/bitshare.png" title="bitshare.com" border="0" alt="bitshare.com" /><img src="/images/servidores/freakshare.net.jpg" title="freakshare.com" border="0" alt="freakshare.com" /><img src="/images/servidores/rapidgator.png" title="rapidgator.net" border="0" alt="rapidgator.net" /><img src="/images/servidores/turbobit.png" title="turbobit.net" border="0" alt="turbobit.net" /><img src="/images/servidores/muchshare.png" title="muchshare.net" border="0" alt="muchshare.net" /><img src="/images/servidores/letitbit.png" title="letitbit.net" border="0" alt="letitbit.net" /><img src="/images/servidores/shareflare.png" title="shareflare.net" border="0" alt="shareflare.net" /><img src="/images/servidores/otros.gif" title="Otros servidores" border="0" alt="Otros" />
patronservidores = '<img src="[^"]+" title="([^"]+)"'
matches2 = re.compile(patronservidores, re.DOTALL).findall(servidores)
lista_servidores = ""
for servidor in matches2:
lista_servidores = lista_servidores + servidor + ", "
lista_servidores = lista_servidores[:-2]
scrapedthumbnail = item.thumbnail
# http://www.bajui.com/ajax/mostrar-enlaces.php?id=330582&code=124767d31bfbf14c3861
scrapedurl = "http://www.bajui.com/ajax/mostrar-enlaces.php?id=" + id + "&code=" + id2
scrapedplot = item.plot
scrapedtitle = "Enlaces de " + usuario + " (" + fecha + ") (" + lista_servidores + ")"
itemlist.append(
Item(channel=item.channel, action="findvideos", title=scrapedtitle, fulltitle=item.title, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, context="4|5",
fanart=item.fanart))
return itemlist
def findvideos(item):
logger.info()
data = scrapertools.cache_page(item.url)
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.channel = item.channel
videoitem.plot = item.plot
videoitem.thumbnail = item.thumbnail
videoitem.fulltitle = item.fulltitle
try:
parsed_url = urlparse.urlparse(videoitem.url)
fichero = parsed_url.path
partes = fichero.split("/")
titulo = partes[len(partes) - 1]
videoitem.title = titulo + " - [" + videoitem.server + "]"
except:
videoitem.title = item.title
return itemlist

37
kodi/channels/beeg.json Executable file
View File

@@ -0,0 +1,37 @@
{
"id": "beeg",
"name": "Beeg",
"active": true,
"adult": true,
"language": "es",
"thumbnail": "beeg.png",
"banner": "beeg.png",
"version": 1,
"changes": [
{
"date": "03/06/2017",
"description": "reliminado encoding y soporte multiples calidades"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"adult"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

140
kodi/channels/beeg.py Executable file
View File

@@ -0,0 +1,140 @@
# -*- coding: utf-8 -*-
import re
import urllib
from core import jsontools as json
from core import logger
from core import scrapertools
from core.item import Item
url_api = ""
beeg_salt = ""
def get_api_url():
global url_api
global beeg_salt
data = scrapertools.downloadpage("http://beeg.com")
version = re.compile('<script src="//static.beeg.com/cpl/([\d]+).js"').findall(data)[0]
js_url = "http:" + re.compile('<script src="(//static.beeg.com/cpl/[\d]+.js)"').findall(data)[0]
url_api = "https://api2.beeg.com/api/v6/" + version
data = scrapertools.downloadpage(js_url)
beeg_salt = re.compile('beeg_salt="([^"]+)"').findall(data)[0]
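# decode() reverses beeg's key obfuscation: every character of the
# URL-decoded key is shifted down by ord(salt_char) % 21, cycling through
# beeg_salt, and the result is then re-assembled from 3-character chunks
# taken from the end of the string towards the start.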
def decode(key):
a = beeg_salt
e = unicode(urllib.unquote(key), "utf8")
t = len(a)
o = ""
for n in range(len(e)):
r = ord(e[n:n + 1])
i = n % t
s = ord(a[i:i + 1]) % 21
o += chr(r - s)
n = []
for x in range(len(o), 0, -3):
if x >= 3:
n.append(o[(x - 3):x])
else:
n.append(o[0:x])
return "".join(n)
get_api_url()
def mainlist(item):
logger.info()
get_api_url()
itemlist = []
itemlist.append(Item(channel=item.channel, action="videos", title="Útimos videos", url=url_api + "/index/main/0/pc",
viewmode="movie"))
itemlist.append(Item(channel=item.channel, action="listcategorias", title="Listado categorias",
url=url_api + "/index/main/0/pc"))
itemlist.append(
Item(channel=item.channel, action="search", title="Buscar", url=url_api + "/index/search/0/pc?query=%s"))
return itemlist
def videos(item):
logger.info()
itemlist = []
data = scrapertools.cache_page(item.url)
JSONData = json.load(data)
for Video in JSONData["videos"]:
thumbnail = "http://img.beeg.com/236x177/" + Video["id"] + ".jpg"
url = url_api + "/video/" + Video["id"]
title = Video["title"]
itemlist.append(
Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, plot="", show="",
folder=True))
    # Pagination
Actual = int(scrapertools.get_match(item.url, url_api + '/index/[^/]+/([0-9]+)/pc'))
if JSONData["pages"] - 1 > Actual:
scrapedurl = item.url.replace("/" + str(Actual) + "/", "/" + str(Actual + 1) + "/")
itemlist.append(
Item(channel=item.channel, action="videos", title="Página Siguiente", url=scrapedurl, thumbnail="",
folder=True, viewmode="movie"))
return itemlist
def listcategorias(item):
logger.info()
itemlist = []
data = scrapertools.cache_page(item.url)
JSONData = json.load(data)
for Tag in JSONData["tags"]["popular"]:
url = url_api + "/index/tag/0/pc?tag=" + Tag
title = Tag
title = title[:1].upper() + title[1:]
itemlist.append(
Item(channel=item.channel, action="videos", title=title, url=url, folder=True, viewmode="movie"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = item.url % (texto)
try:
return videos(item)
    # Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def play(item):
logger.info()
itemlist = []
data = scrapertools.downloadpage(item.url)
JSONData = json.load(data)
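    # The video JSON keys include quality labels such as "720p"; each holds
    # a URL template with a {DATA_MARKERS} placeholder and an obfuscated
    # "key=..." parameter that must be run through decode() before playback.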
for key in JSONData:
videourl = re.compile("([0-9]+p)", re.DOTALL).findall(key)
if videourl:
videourl = videourl[0]
            if JSONData[videourl] is not None:
                url = JSONData[videourl]
                url = url.replace("{DATA_MARKERS}", "data=pc.ES")
                videokey = re.compile("key=(.*?)%2Cend=", re.DOTALL).findall(url)[0]
                url = url.replace(videokey, decode(videokey))
if not url.startswith("https:"): url = "https:" + url
title = videourl
itemlist.append(["%s %s [directo]" % (title, url[-4:]), url])
itemlist.sort(key=lambda item: item[0])
return itemlist

35
kodi/channels/bityouth.json Executable file
View File

@@ -0,0 +1,35 @@
{
"id": "bityouth",
"name": "Bityouth",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://s6.postimg.org/6ash180up/bityoulogo.png",
"banner": "bityouth.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"torrent",
"movie",
"tvshow"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
}
]
}

1762
kodi/channels/bityouth.py Executable file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,42 @@
{
"id": "borrachodetorrent",
"name": "BorrachodeTorrent",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://imgur.com/BePrYmy.png",
"version": 1,
"changes": [
{
"date": "26/04/2017",
"description": "Release"
},
{
"date": "28/06/2017",
"description": "Correciones código y mejoras"
}
],
"categories": [
"torrent",
"movie",
"tvshow"
],
"settings": [
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

1048
kodi/channels/borrachodetorrent.py Executable file

File diff suppressed because it is too large Load Diff

35
kodi/channels/bricocine.json Executable file
View File

@@ -0,0 +1,35 @@
{
"id": "bricocine",
"name": "Bricocine",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://s6.postimg.org/9u8m1ep8x/bricocine.jpg",
"banner": "bricocine.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"torrent",
"movie",
"tvshow"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

2308
kodi/channels/bricocine.py Executable file

File diff suppressed because it is too large Load Diff

23
kodi/channels/canalporno.json Executable file
View File

@@ -0,0 +1,23 @@
{
"id": "canalporno",
"name": "Canalporno",
"active": true,
"adult": true,
"language": "es",
"thumbnail": "http://i.imgur.com/gAbPcvT.png?1",
"banner": "canalporno.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "09/01/2017",
"description": "Primera version."
}
],
"categories": [
"adult"
]
}

86
kodi/channels/canalporno.py Executable file
View File

@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
from core import httptools
from core import logger
from core import scrapertools
host = "http://www.canalporno.com"
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="findvideos", title="Útimos videos", url=host))
itemlist.append(item.clone(action="categorias", title="Listado Categorias",
url=host + "/categorias"))
itemlist.append(item.clone(action="search", title="Buscar", url=host + "/search/?q=%s"))
return itemlist
def search(item, texto):
logger.info()
try:
item.url = item.url % texto
itemlist = findvideos(item)
return sorted(itemlist, key=lambda it: it.title)
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<img src="([^"]+)".*?alt="([^"]+)".*?<h2><a href="([^"]+)">.*?' \
'<div class="duracion"><span class="ico-duracion sprite"></span> ([^"]+) min</div>'
matches = scrapertools.find_multiple_matches(data, patron)
for thumbnail, title, url, time in matches:
scrapedtitle = time + " - " + title
scrapedurl = host + url
scrapedthumbnail = "http:" + thumbnail
itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail))
patron = '<div class="paginacion">.*?<span class="selected">.*?<a href="([^"]+)">([^"]+)</a>'
matches = scrapertools.find_multiple_matches(data, patron)
for url, title in matches:
url = host + url
title = "Página %s" % title
itemlist.append(item.clone(action="findvideos", title=title, url=url))
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
bloque = scrapertools.find_single_match(data, '<ul class="ordenar-por ordenar-por-categoria">'
'(.*?)<div class="publis-bottom">')
patron = '<div class="muestra-categorias">.*?<a class="thumb" href="([^"]+)".*?<img class="categorias" src="([^"]+)".*?<div class="nombre">([^"]+)</div>'
matches = scrapertools.find_multiple_matches(bloque, patron)
for url, thumbnail, title in matches:
url = host + url
thumbnail = "http:" + thumbnail
itemlist.append(item.clone(action="findvideos", title=title, url=url, thumbnail=thumbnail))
return itemlist
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
url = "http:" + scrapertools.find_single_match(data, '<source src="([^"]+)"')
itemlist.append(item.clone(url=url, server="directo"))
return itemlist

20
kodi/channels/cartoonlatino.json Executable file
View File

@@ -0,0 +1,20 @@
{
"id": "cartoonlatino",
"name": "Cartoon-Latino",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/wk6fRDZ.png",
"banner": "http://i.imgur.com/115c59F.png",
"version": 1,
"changes": [
{
"date": "07/06/2017",
"description": "Primera version del canal"
}
],
"categories": [
"tvshow",
"latino"
]
}

204
kodi/channels/cartoonlatino.py Executable file
View File

@@ -0,0 +1,204 @@
# -*- coding: utf-8 -*-
import re
from channels import renumbertools
from channelselector import get_thumb
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
host = "http://www.cartoon-latino.com/"
def mainlist(item):
logger.info()
thumb_series = get_thumb("thumb_channels_tvshow.png")
thumb_series_az = get_thumb("thumb_channels_tvshow_az.png")
itemlist = list()
itemlist.append(Item(channel=item.channel, action="lista", title="Series", url=host,
thumbnail=thumb_series))
itemlist = renumbertools.show_option(item.channel, itemlist)
return itemlist
"""
def search(item, texto):
logger.info()
texto = texto.replace(" ","+")
item.url = item.url+texto
if texto!='':
return lista(item)
"""
def lista_gen(item):
logger.info()
itemlist = []
data1 = httptools.downloadpage(item.url).data
data1 = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data1)
patron_sec = '<section class="content">.+?<\/section>'
data = scrapertools.find_single_match(data1, patron_sec)
patron = '<article id=.+? class=.+?><div.+?>'
patron += '<a href="([^"]+)" title="([^"]+)' # scrapedurl, # scrapedtitle
patron += ' Capítulos Completos ([^"]+)">' # scrapedlang
patron += '<img.+? data-src=.+? data-lazy-src="([^"]+)"' # scrapedthumbnail
matches = scrapertools.find_multiple_matches(data, patron)
i = 0
for scrapedurl, scrapedtitle, scrapedlang, scrapedthumbnail in matches:
i = i + 1
if 'HD' in scrapedlang:
scrapedlang = scrapedlang.replace('HD', '')
title = scrapedtitle + " [ " + scrapedlang + "]"
itemlist.append(
Item(channel=item.channel, title=title, url=scrapedurl, thumbnail=scrapedthumbnail, action="episodios",
show=scrapedtitle, context=renumbertools.context(item)))
tmdb.set_infoLabels(itemlist)
    # Pagination
patron_pag = '<a class="nextpostslink" rel="next" href="([^"]+)">'
next_page_url = scrapertools.find_single_match(data, patron_pag)
if next_page_url != "" and i != 1:
item.url = next_page_url
itemlist.append(Item(channel=item.channel, action="lista_gen", title=">> Página siguiente", url=next_page_url,
thumbnail='https://s32.postimg.org/4zppxf5j9/siguiente.png'))
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
data_lista = scrapertools.find_single_match(data, '<div class="su-list su-list-style-"><ul>(.+?)<\/ul><\/div>')
patron = "<a href='(.+?)'>(.+?)<\/a>"
matches = scrapertools.find_multiple_matches(data_lista, patron)
for link, name in matches:
title = name + " [Latino]"
url = link
itemlist.append(
item.clone(title=title, url=url, plot=title, action="episodios", show=title,
context=renumbertools.context(item)))
tmdb.set_infoLabels(itemlist)
return itemlist
def episodios(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
data_lista = scrapertools.find_single_match(data,
'<div class="su-list su-list-style-"><ulclass="lista-capitulos">.+?<\/div><\/p>')
if '&#215;' in data_lista:
data_lista = data_lista.replace('&#215;', 'x')
show = item.title
if "[Latino]" in show:
show = show.replace("[Latino]", "")
if "Ranma" in show:
patron_caps = '<\/i> <strong>.+?Capitulo ([^"]+)\: <a .+? href="([^"]+)">([^"]+)<\/a>'
else:
patron_caps = '<\/i> <strong>Capitulo ([^"]+)x.+?\: <a .+? href="([^"]+)">([^"]+)<\/a>'
matches = scrapertools.find_multiple_matches(data_lista, patron_caps)
scrapedplot = scrapertools.find_single_match(data, '<strong>Sinopsis<\/strong><strong>([^"]+)<\/strong><\/pre>')
number = 0
ncap = 0
A = 1
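    # temp is the season captured from the episode list and number counts the
    # episodes within it, resetting whenever the season changes. Ranma is a
    # special case: the captured value is an absolute episode number of a
    # single season, remapped through renumbertools.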
for temp, link, name in matches:
if A != temp:
number = 0
if "Ranma" in show:
number = int(temp)
temp = str(1)
else:
number = number + 1
if number < 10:
capi = "0" + str(number)
else:
capi = str(number)
if "Ranma" in show:
season = 1
episode = number
season, episode = renumbertools.numbered_for_tratk(
item.channel, item.show, season, episode)
date = name
if episode < 10:
capi = "0" + str(episode)
else:
capi = episode
title = str(season) + "x" + str(capi) + " - " + name # "{0}x{1} - ({2})".format(season, episode, date)
else:
title = str(temp) + "x" + capi + " - " + name
url = link
A = temp
itemlist.append(Item(channel=item.channel, action="findvideos", title=title, url=url, show=show))
if config.get_videolibrary_support() and len(itemlist) > 0:
itemlist.append(Item(channel=item.channel, title="Añadir " + show + " a la videoteca", url=item.url,
action="add_serie_to_library", extra="episodios", show=show))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
data_function = scrapertools.find_single_match(data, '<!\[CDATA\[function (.+?)\]\]')
data_id = scrapertools.find_single_match(data,
"<script>\(adsbygoogle = window\.adsbygoogle \|\| \[\]\)\.push\({}\);<\/script><\/div><br \/>(.+?)<\/ins>")
itemla = scrapertools.find_multiple_matches(data_function, "src='(.+?)'")
serverid = scrapertools.find_multiple_matches(data_id, '<script>([^"]+)\("([^"]+)"\)')
for server, id in serverid:
for link in itemla:
if server in link:
url = link.replace('" + ID' + server + ' + "', str(id))
if "drive" in server:
server1 = 'googlevideo'
else:
server1 = server
itemlist.append(item.clone(url=url, action="play", server=server1,
title="Enlace encontrado en %s " % (server1.capitalize())))
return itemlist
def play(item):
logger.info()
itemlist = []
    # Look for the video on the given server ...
devuelve = servertools.findvideosbyserver(item.url, item.server)
if not devuelve:
        # ...if it is not found there, try every available server
devuelve = servertools.findvideos(item.url, skip=True)
if devuelve:
# logger.debug(devuelve)
itemlist.append(Item(channel=item.channel, title=item.contentTitle, action="play", server=devuelve[0][2],
url=devuelve[0][1], thumbnail=item.thumbnail, folder=False))
return itemlist

37
kodi/channels/ciberdocumentales.json Executable file
View File

@@ -0,0 +1,37 @@
{
"id": "ciberdocumentales",
"name": "CiberDocumentales",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "https://s9.postimg.org/secdb5s8v/ciberdocumentales.png",
"banner": "https://s1.postimg.org/sa486z0of/ciberdocumentales_banner.png",
"version": 1,
"changes": [
{
"date": "18/06/2016",
"descripcion": "First release"
}
],
"categories": [
"documentary"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_documentales",
"type": "bool",
"label": "Incluir en Novedades - Documentales",
"default": true,
"enabled": true,
"visible": true
}
]
}

122
kodi/channels/ciberdocumentales.py Executable file
View File

@@ -0,0 +1,122 @@
# -*- coding: utf-8 -*-
import re
from core import httptools
from core import logger
from core import scrapertools
from core import tmdb
from core.item import Item
host = 'http://www.ciberdocumentales.com'
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Todas", action="lista", thumbnail='https://s18.postimg.org/fwvaeo6qh/todas.png',
fanart='https://s18.postimg.org/fwvaeo6qh/todas.png', url=host))
itemlist.append(Item(channel=item.channel, title="Generos", action="generos", url=host,
thumbnail='https://s3.postimg.org/5s9jg2wtf/generos.png',
fanart='https://s3.postimg.org/5s9jg2wtf/generos.png'))
itemlist.append(Item(channel=item.channel, title="Mas Vistas", action="lista", url=host,
thumbnail='https://s9.postimg.org/wmhzu9d7z/vistas.png',
fanart='https://s9.postimg.org/wmhzu9d7z/vistas.png', extra='masvistas'))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=host,
thumbnail='https://s30.postimg.org/pei7txpa9/buscar.png',
fanart='https://s30.postimg.org/pei7txpa9/buscar.png'))
return itemlist
def lista(item):
logger.info()
itemlist = []
if item.extra == 'buscar':
data = httptools.downloadpage(host + '/index.php?' + 'categoria=0&keysrc=' + item.text).data
else:
data = httptools.downloadpage(item.url).data
data = re.sub(r'"|\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)
if item.extra == 'masvistas':
patron = '<div class=bloquecenmarcado><a title=.*? target=_blank href=(.*?) class=game><img src=(.*?) alt=(.*?) title= class=bloquecenimg \/>.*?<strong>(.*?)<\/strong>'
else:
patron = '<div class=fotonoticia><a.*?target=_blank href=(.*?)><img src=(.*?) alt=(.*?) \/>.*?class=textonoticia>.*?\/><br \/>(.*?)<\/div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedplot in matches:
url = host + scrapedurl
thumbnail = host + scrapedthumbnail
plot = scrapertools.htmlclean(scrapedplot)
plot = plot.decode('iso8859-1').encode('utf-8')
contentTitle = scrapedtitle
title = contentTitle
title = title.decode('iso8859-1').encode('utf-8')
fanart = ''
itemlist.append(
Item(channel=item.channel, action='findvideos', title=title, url=url, thumbnail=thumbnail, plot=plot,
fanart=fanart, contentTitle=contentTitle))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
    # Pagination
if itemlist != []:
actual_page_url = item.url
next_page = scrapertools.find_single_match(data, 'class=current>.*?<\/span><a href=(.*?)>.*?<\/a>')
if next_page != '' and item.extra != 'masvistas':
itemlist.append(Item(channel=item.channel, action="lista", title='Siguiente >>>', url=host + next_page,
thumbnail='https://s16.postimg.org/9okdu7hhx/siguiente.png'))
return itemlist
def generos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r'"|\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)
patron = '<a style=text-transform:capitalize; href=(.*?)\/>(.*?)<\/a><\/span><\/li>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
thumbnail = ''
fanart = ''
title = scrapedtitle
url = host + scrapedurl
itemlist.append(
Item(channel=item.channel, action="lista", title=title, fulltitle=item.title, url=url, thumbnail=thumbnail,
fanart=fanart))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.text = texto
item.extra = 'buscar'
if texto != '':
return lista(item)
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == 'documentales':
item.url = host
itemlist = lista(item)
if itemlist[-1].title == 'Siguiente >>>':
itemlist.pop()
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist

60
kodi/channels/cineasiaenlinea.json Executable file
View File

@@ -0,0 +1,60 @@
{
"id": "cineasiaenlinea",
"name": "CineAsiaEnLinea",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/5KOU8uy.png?3",
"banner": "cineasiaenlinea.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "07/02/17",
"description": "Fix bug in newest"
},
{
"date": "09/01/2017",
"description": "Primera version"
}
],
"categories": [
"movie",
"vos"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en búsqueda global",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Películas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": [
"Sin color",
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
}
]
}

166
kodi/channels/cineasiaenlinea.py Executable file
View File

@@ -0,0 +1,166 @@
# -*- coding: utf-8 -*-
import re
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
host = "http://www.cineasiaenlinea.com/"
# Channel settings
__perfil__ = int(config.get_setting('perfil', 'cineasiaenlinea'))
# Set the colour profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']]
if __perfil__ - 1 >= 0:
color1, color2, color3 = perfil[__perfil__ - 1]
else:
color1 = color2 = color3 = ""
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="peliculas", title="Novedades", url=host + "archivos/peliculas",
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Directors%20Chair.png", text_color=color1))
itemlist.append(item.clone(action="peliculas", title="Estrenos", url=host + "archivos/estrenos",
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Directors%20Chair.png", text_color=color1))
itemlist.append(item.clone(action="indices", title="Por géneros", url=host,
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Genre.png", text_color=color1))
itemlist.append(item.clone(action="indices", title="Por país", url=host, text_color=color1))
itemlist.append(item.clone(action="indices", title="Por año", url=host, text_color=color1))
itemlist.append(item.clone(title="", action=""))
itemlist.append(item.clone(action="search", title="Buscar...", text_color=color3))
itemlist.append(item.clone(action="configuracion", title="Configurar canal...", text_color="gold", folder=False))
return itemlist
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def search(item, texto):
logger.info()
item.url = "%s?s=%s" % (host, texto.replace(" ", "+"))
try:
return peliculas(item)
    # Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = host + "archivos/peliculas"
item.action = "peliculas"
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()
    # Catch the exception so a failing channel does not break the "newest" section
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def peliculas(item):
logger.info()
itemlist = []
item.text_color = color2
    # Download the page
data = httptools.downloadpage(item.url).data
patron = '<h3><a href="([^"]+)">([^<]+)<.*?src="([^"]+)".*?<a rel="tag">([^<]+)<' \
'.*?<a rel="tag">([^<]+)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, year, calidad in matches:
title = re.sub(r' \((\d+)\)', '', scrapedtitle)
scrapedtitle += " [%s]" % calidad
infolab = {'year': year}
itemlist.append(item.clone(action="findvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, infoLabels=infolab,
contentTitle=title, contentType="movie"))
next_page = scrapertools.find_single_match(data, '<a class="nextpostslink" href="([^"]+)"')
if next_page:
itemlist.append(item.clone(title=">> Página Siguiente", url=next_page))
return itemlist
def indices(item):
logger.info()
itemlist = []
    # Download the page
data = httptools.downloadpage(item.url).data
logger.info(data)
if "géneros" in item.title:
bloque = scrapertools.find_single_match(data, '(?i)<h4>Peliculas por genero</h4>(.*?)</ul>')
matches = scrapertools.find_multiple_matches(bloque, '<a href="([^"]+)".*?>([^<]+)<')
elif "año" in item.title:
bloque = scrapertools.find_single_match(data, '(?i)<h4>Peliculas por Año</h4>(.*?)</select>')
matches = scrapertools.find_multiple_matches(bloque, '<option value="([^"]+)">([^<]+)<')
else:
bloque = scrapertools.find_single_match(data, '(?i)<h4>Peliculas por Pais</h4>(.*?)</ul>')
matches = scrapertools.find_multiple_matches(bloque, '<a href="([^"]+)".*?>([^<]+)<')
for scrapedurl, scrapedtitle in matches:
if "año" in item.title:
scrapedurl = "%sfecha-estreno/%s" % (host, scrapedurl)
itemlist.append(Item(channel=item.channel, action="peliculas", title=scrapedtitle, url=scrapedurl,
thumbnail=item.thumbnail, text_color=color3))
return itemlist
def findvideos(item):
logger.info()
data = httptools.downloadpage(item.url).data
item.infoLabels["plot"] = scrapertools.find_single_match(data, '(?i)<h2>SINOPSIS.*?<p>(.*?)</p>')
item.infoLabels["trailer"] = scrapertools.find_single_match(data, 'src="(http://www.youtube.com/embed/[^"]+)"')
itemlist = servertools.find_video_items(item=item, data=data)
for it in itemlist:
it.thumbnail = item.thumbnail
it.text_color = color2
itemlist.append(item.clone(action="add_pelicula_to_library", title="Añadir película a la videoteca"))
if item.infoLabels["trailer"]:
folder = True
if config.is_xbmc():
folder = False
itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Ver Trailer", folder=folder,
contextual=not folder))
return itemlist

75
kodi/channels/cinecalidad.json Executable file
View File

@@ -0,0 +1,75 @@
{
"id": "cinecalidad",
"name": "CineCalidad",
"compatible": {
"addon_version": "4.3"
},
"active": true,
"adult": false,
"language": "es",
"thumbnail": "https://s31.postimg.org/puxmvsi7v/cinecalidad.png",
"banner": "https://s32.postimg.org/kihkdpx1x/banner_cinecalidad.png",
"version": 1,
"changes": [
{
"date": "24/06/2017",
"description": "Cambios para autoplay"
},
{
"date": "06/06/2017",
"description": "Compatibilidad con AutoPlay"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "18/06/2016",
"description": "First release."
}
],
"categories": [
"latino",
"movie"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": false
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostrar enlaces en idioma...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": [
"No filtrar",
"Latino",
"Español",
"Portuges"
]
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Peliculas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_infantiles",
"type": "bool",
"label": "Incluir en Novedades - Infantiles",
"default": true,
"enabled": true,
"visible": true
}
]
}

488
kodi/channels/cinecalidad.py Executable file
View File

@@ -0,0 +1,488 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from channels import autoplay
from channels import filtertools
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
IDIOMAS = {'latino': 'Latino', 'castellano': 'Español', 'portugues': 'Portugues'}
list_language = IDIOMAS.values()
logger.debug('list_language: %s' % list_language)
list_quality = ['1080p', '720p', '480p', '360p', '240p', 'default']
list_servers = [
'yourupload',
'thevideos',
'filescdn',
'uptobox',
'okru',
'nowvideo',
'userscloud',
'pcloud',
'usersfiles',
'vidbull',
'openload',
'directo'
]
host = 'http://www.cinecalidad.to'
thumbmx = 'http://flags.fmcdn.net/data/flags/normal/mx.png'
thumbes = 'http://flags.fmcdn.net/data/flags/normal/es.png'
thumbbr = 'http://flags.fmcdn.net/data/flags/normal/br.png'
def mainlist(item):
idioma2 = "destacadas"
logger.info()
autoplay.init(item.channel, list_servers, list_quality)
itemlist = []
itemlist.append(
item.clone(title="CineCalidad Latino",
action="submenu",
host="http://cinecalidad.com/",
thumbnail=thumbmx,
extra="peliculas",
language='latino'
))
itemlist.append(item.clone(title="CineCalidad España",
action="submenu",
host="http://cinecalidad.com/espana/",
thumbnail=thumbes,
extra="peliculas",
language='castellano'
))
itemlist.append(
item.clone(title="CineCalidad Brasil",
action="submenu",
host="http://cinemaqualidade.com/",
thumbnail=thumbbr,
extra="filmes",
language='portugues'
))
autoplay.show_option(item.channel, itemlist)
return itemlist
def submenu(item):
idioma = 'peliculas'
idioma2 = "destacada"
host = item.host
if item.host == "http://cinemaqualidade.com/":
idioma = "filmes"
idioma2 = "destacado"
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel,
title=idioma.capitalize(),
action="peliculas",
url=host,
thumbnail='https://s8.postimg.org/6wqwy2c2t/peliculas.png',
fanart='https://s8.postimg.org/6wqwy2c2t/peliculas.png',
language=item.language
))
itemlist.append(Item(channel=item.channel,
title="Destacadas",
action="peliculas",
url=host + "/genero-" + idioma + "/" + idioma2 + "/",
thumbnail='https://s30.postimg.org/humqxklsx/destacadas.png',
fanart='https://s30.postimg.org/humqxklsx/destacadas.png',
language=item.language
))
itemlist.append(Item(channel=item.channel,
title="Generos",
action="generos",
url=host + "/genero-" + idioma,
thumbnail='https://s3.postimg.org/5s9jg2wtf/generos.png',
fanart='https://s3.postimg.org/5s9jg2wtf/generos.png',
language=item.language
))
itemlist.append(Item(channel=item.channel,
title="Por Año",
action="anyos",
url=host + "/" + idioma + "-por-ano",
thumbnail='https://s8.postimg.org/7eoedwfg5/pora_o.png',
fanart='https://s8.postimg.org/7eoedwfg5/pora_o.png',
language=item.language
))
itemlist.append(Item(channel=item.channel,
title="Buscar",
action="search",
thumbnail='https://s30.postimg.org/pei7txpa9/buscar.png',
url=host + '/apiseries/seriebyword/',
fanart='https://s30.postimg.org/pei7txpa9/buscar.png',
host=item.host,
language=item.language
))
return itemlist
def anyos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<a href="([^"]+)">([^<]+)</a> '
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
url = urlparse.urljoin(item.url, scrapedurl)
title = scrapedtitle
thumbnail = item.thumbnail
plot = item.plot
itemlist.append(
Item(channel=item.channel,
action="peliculas",
title=title,
url=url,
thumbnail=thumbnail,
plot=plot,
fanart=item.thumbnail,
language=item.language
))
return itemlist
def generos(item):
tgenero = {"Comedia": "https://s7.postimg.org/ne9g9zgwb/comedia.png",
"Suspenso": "https://s13.postimg.org/wmw6vl1cn/suspenso.png",
"Drama": "https://s16.postimg.org/94sia332d/drama.png",
"Acción": "https://s3.postimg.org/y6o9puflv/accion.png",
"Aventura": "https://s10.postimg.org/6su40czih/aventura.png",
"Romance": "https://s15.postimg.org/fb5j8cl63/romance.png",
"Fantas\xc3\xada": "https://s13.postimg.org/65ylohgvb/fantasia.png",
"Infantil": "https://s23.postimg.org/g5rmazozv/infantil.png",
"Ciencia ficción": "https://s9.postimg.org/diu70s7j3/cienciaficcion.png",
"Terror": "https://s7.postimg.org/yi0gij3gb/terror.png",
"Com\xc3\xa9dia": "https://s7.postimg.org/ne9g9zgwb/comedia.png",
"Suspense": "https://s13.postimg.org/wmw6vl1cn/suspenso.png",
"A\xc3\xa7\xc3\xa3o": "https://s3.postimg.org/y6o9puflv/accion.png",
"Fantasia": "https://s13.postimg.org/65ylohgvb/fantasia.png",
"Fic\xc3\xa7\xc3\xa3o cient\xc3\xadfica": "https://s9.postimg.org/diu70s7j3/cienciaficcion.png"}
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<li id="menu-item-.*?" class="menu-item menu-item-type-taxonomy menu-item-object-category ' \
'menu-item-.*?"><a href="([^"]+)">([^<]+)<\/a></li>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
url = urlparse.urljoin(item.url, scrapedurl)
title = scrapedtitle
thumbnail = tgenero[scrapedtitle]
plot = item.plot
itemlist.append(
Item(channel=item.channel,
action="peliculas",
title=title, url=url,
thumbnail=thumbnail,
plot=plot,
fanart=item.thumbnail,
language=item.language
))
return itemlist
def peliculas(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="home_post_cont.*? post_box">.*?<a href="([^"]+)".*?src="([^"]+)".*?title="(.*?) \((' \
'.*?)\)".*?p&gt;([^&]+)&lt;'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedyear, scrapedplot in matches:
url = urlparse.urljoin(item.url, scrapedurl)
contentTitle = scrapedtitle
title = scrapedtitle + ' (' + scrapedyear + ')'
thumbnail = scrapedthumbnail
plot = scrapedplot
year = scrapedyear
itemlist.append(
Item(channel=item.channel,
action="findvideos",
title=title,
url=url,
thumbnail=thumbnail,
plot=plot,
fanart='https://s31.postimg.org/puxmvsi7v/cinecalidad.png',
contentTitle=contentTitle,
infoLabels={'year': year},
language=item.language,
context=autoplay.context
))
try:
patron = "<link rel='next' href='([^']+)' />"
next_page = re.compile(patron, re.DOTALL).findall(data)
itemlist.append(Item(channel=item.channel,
action="peliculas",
title="Página siguiente >>",
url=next_page[0],
fanart='https://s31.postimg.org/puxmvsi7v/cinecalidad.png',
language=item.language
))
except:
pass
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
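# cinecalidad obfuscates each link as a space-separated list of integers,
# every one of them the real character code plus 7. dec() undoes the shift:
# e.g. dec("111 123 123 119") == "http".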
def dec(item):
link = []
val = item.split(' ')
link = map(int, val)
for i in range(len(link)):
link[i] = link[i] - 7
real = ''.join(map(chr, link))
return (real)
def findvideos(item):
servidor = {"http://uptobox.com/": "uptobox",
"http://userscloud.com/": "userscloud",
"https://my.pcloud.com/publink/show?code=": "pcloud",
"http://thevideos.tv/": "thevideos",
"http://ul.to/": "uploadedto",
"http://turbobit.net/": "turbobit",
"http://www.cinecalidad.com/protect/v.html?i=": "cinecalidad",
"http://www.mediafire.com/download/": "mediafire",
"https://www.youtube.com/watch?v=": "youtube",
"http://thevideos.tv/embed-": "thevideos",
"//www.youtube.com/embed/": "youtube",
"http://ok.ru/video/": "okru",
"http://ok.ru/videoembed/": "okru",
"http://www.cinemaqualidade.com/protect/v.html?i=": "cinemaqualidade.com",
"http://usersfiles.com/": "usersfiles",
"https://depositfiles.com/files/": "depositfiles",
"http://www.nowvideo.sx/video/": "nowvideo",
"http://vidbull.com/": "vidbull",
"http://filescdn.com/": "filescdn",
"https://www.yourupload.com/watch/": "yourupload",
"http://www.cinecalidad.to/protect/gdredirect.php?l=": "directo",
"https://openload.co/embed/": "openload"
}
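    # Each link on the page is split into two dec()-encoded halves: the base
    # URL, looked up in the servidor map above to name the server, and the
    # file id that gets appended to it.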
logger.info()
itemlist = []
    duplicados = []
    trailer_item = None
data = httptools.downloadpage(item.url).data
patron = 'dec\("([^"]+)"\)\+dec\("([^"]+)"\)'
matches = re.compile(patron, re.DOTALL).findall(data)
recomendados = ["uptobox", "thevideos", "nowvideo", "pcloud", "directo"]
for scrapedurl, scrapedtitle in matches:
if dec(scrapedurl) in servidor:
server = servidor[dec(scrapedurl)]
title = "Ver " + item.contentTitle + " en " + servidor[dec(scrapedurl)].upper()
if 'yourupload' in dec(scrapedurl):
url = dec(scrapedurl).replace('watch', 'embed') + dec(scrapedtitle)
elif 'gdredirect' in dec(scrapedurl):
url = ''
url_list = []
url_list += get_urls(item, dec(scrapedtitle))
for video in url_list:
new_title = title + ' (' + video['label'] + ')'
itemlist.append(
Item(channel=item.channel,
action="play",
title=new_title,
fulltitle=item.title,
url=video['file'],
language=IDIOMAS[item.language],
thumbnail=item.thumbnail,
plot=item.plot,
quality=video['label'],
server='directo'
))
duplicados.append(title)
else:
if 'youtube' in dec(scrapedurl):
title = '[COLOR orange]Trailer en Youtube[/COLOR]'
url = dec(scrapedurl) + dec(scrapedtitle)
if (servidor[dec(scrapedurl)]) in recomendados:
title = title + "[COLOR limegreen] [I] (Recomedado) [/I] [/COLOR]"
thumbnail = servertools.guess_server_thumbnail(servidor[dec(scrapedurl)])
plot = ""
if title not in duplicados and url != '':
new_item = Item(channel=item.channel,
action="play",
title=title,
fulltitle=item.title,
url=url,
thumbnail=thumbnail,
plot=item.plot,
extra=item.thumbnail,
language=IDIOMAS[item.language],
quality='default',
server=server.lower()
)
if 'Trailer' in new_item.title:
trailer_item = new_item
else:
itemlist.append(new_item)
duplicados.append(title)
    # Required by FilterTools
itemlist = filtertools.get_links(itemlist, item, list_language)
    # Required by AutoPlay
autoplay.start(itemlist, item)
    if trailer_item:
        itemlist.append(trailer_item)
if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos':
itemlist.append(
Item(channel=item.channel,
title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]',
url=item.url,
action="add_pelicula_to_library",
extra="findvideos",
contentTitle=item.contentTitle,
))
return itemlist
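# get_urls() asks the site's ccstream endpoint for the direct streams behind
# a protected link; judging by how the result is used above, it returns JSON
# whose 'link' entry is a list of dicts with 'file' (URL) and 'label'
# (quality) keys.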
def get_urls(item, link):
from core import jsontools
logger.info()
url = 'http://www.cinecalidad.to/ccstream/ccstream.php'
headers = dict()
headers["Referer"] = item.url
post = 'link=%s' % link
data = httptools.downloadpage(url, post=post, headers=headers).data
dict_data = jsontools.load(data)
logger.debug(dict_data['link'])
logger.debug(data)
return dict_data['link']
def play(item):
logger.info()
itemlist = []
logger.debug('item: %s' % item)
if 'juicyapi' not in item.url:
itemlist = servertools.find_video_items(data=item.url)
for videoitem in itemlist:
videoitem.title = item.fulltitle
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.extra
videoitem.channel = item.channel
else:
itemlist.append(item)
return itemlist
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = 'http://www.cinecalidad.to'
elif categoria == 'infantiles':
item.url = 'http://www.cinecalidad.to/genero-peliculas/infantil/'
itemlist = peliculas(item)
if itemlist[-1].title == 'Página siguiente >>':
itemlist.pop()
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def busqueda(item):
logger.info()
itemlist = []
    # Download the page
data = httptools.downloadpage(item.url).data
from core import jsontools
data = jsontools.load(data)
for entry in data["results"]:
title = entry["richSnippet"]["metatags"]["ogTitle"]
url = entry["url"]
plot = entry["content"]
plot = scrapertools.htmlclean(plot)
thumbnail = entry["richSnippet"]["metatags"]["ogImage"]
        year = scrapertools.find_single_match(title, '\((\d{4})\)')
        title = scrapertools.find_single_match(title, '(.*?) \(.*?\)')
fulltitle = title
new_item = item.clone(action="findvideos",
title=title,
fulltitle=fulltitle,
url=url,
thumbnail=thumbnail,
contentTitle=title,
contentType="movie",
plot=plot,
infoLabels={'year': year, 'sinopsis': plot}
)
itemlist.append(new_item)
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
actualpage = int(scrapertools.find_single_match(item.url, 'start=(\d+)'))
totalresults = int(data["cursor"]["resultCount"])
if actualpage + 20 <= totalresults:
url_next = item.url.replace("start=" + str(actualpage), "start=" + str(actualpage + 20))
itemlist.append(
Item(channel=item.channel,
action="busqueda",
title=">> Página Siguiente",
url=url_next
))
return itemlist
def search(item, texto):
logger.info()
data = httptools.downloadpage(host).data
cx = scrapertools.find_single_match(data, 'name="cx" value="(.*?)"')
texto = texto.replace(" ", "%20")
item.url = "https://www.googleapis.com/customsearch/v1element?key=AIzaSyCVAXiUzRYsML1Pv6RwSG1gunmMikTzQqY&rsz" \
"=filtered_cse&num=20&hl=es&sig=0c3990ce7a056ed50667fe0c3873c9b6&cx=%s&q=%s&sort=&googlehost=www" \
".google.com&start=0" % (cx, texto)
try:
return busqueda(item)
    # Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []

157
kodi/channels/cinefox.json Executable file
View File

@@ -0,0 +1,157 @@
{
"id": "cinefox",
"name": "Cinefox",
"active": true,
"adult": false,
"language": "es",
"version": 1,
"thumbnail": "cinefox.png",
"banner": "cinefox.png",
"changes": [
{
"date": "05/04/2017",
"description": "Cambio en los servidores"
},
{
"date": "21/03/2017",
"description": "Adaptado a httptools y corregido episodios para videoteca"
},
{
"date": "18/07/2016",
"description": "Primera version"
}
],
"categories": [
"movie",
"tvshow",
"latino",
"vos"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "save_last_search",
"type": "bool",
"label": "Guardar última búsqueda",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Películas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_series",
"type": "bool",
"label": "Incluir en Novedades - Series",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 2,
"enabled": true,
"visible": true,
"lvalues": [
"Perfil 3",
"Perfil 2",
"Perfil 1",
"Ninguno"
]
},
{
"id": "menu_info",
"type": "bool",
"label": "Mostrar menú intermedio película/episodio",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "last_page",
"type": "bool",
"label": "Ocultar opción elegir página en películas (Kodi)",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "filtro_defecto_peliculas",
"type": "label",
"enabled": true,
"visible": false
},
{
"id": "pers_peliculas1",
"type": "label",
"enabled": true,
"visible": false
},
{
"id": "pers_peliculas2",
"type": "label",
"enabled": true,
"visible": false
},
    {
      "id": "pers_peliculas3",
      "type": "label",
      "enabled": true,
      "visible": false
    },
{
"id": "filtro_defecto_series",
"type": "label",
"enabled": true,
"visible": false
},
{
"id": "pers_series1",
"type": "label",
"enabled": true,
"visible": false
},
{
"id": "pers_series2",
"type": "label",
"enabled": true,
"visible": false
},
{
"id": "pers_series3",
"type": "label",
"enabled": true,
"visible": false
},
{
"id": "last_search",
"type": "text",
"enabled": true,
"visible": false
}
]
}

745
kodi/channels/cinefox.py Executable file
View File

@@ -0,0 +1,745 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import config
from core import httptools
from core import jsontools
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
__modo_grafico__ = config.get_setting('modo_grafico', 'cinefox')
__perfil__ = int(config.get_setting('perfil', "cinefox"))
__menu_info__ = config.get_setting('menu_info', 'cinefox')
# Set the colour profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00', '0xFFFE2E2E', '0xFF088A08'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E', '0xFFFE2E2E', '0xFF088A08'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE', '0xFFFE2E2E', '0xFF088A08']]
if __perfil__ < 3:
color1, color2, color3, color4, color5 = perfil[__perfil__]
else:
color1 = color2 = color3 = color4 = color5 = ""
host = "http://www.cinefox.tv"
def mainlist(item):
logger.info()
item.text_color = color1
itemlist = []
itemlist.append(item.clone(action="seccion_peliculas", title="Películas", fanart="http://i.imgur.com/PjJaW8o.png",
url="http://www.cinefox.tv/catalogue?type=peliculas"))
    # Series section
itemlist.append(item.clone(action="seccion_series", title="Series",
url="http://www.cinefox.tv/ultimos-capitulos", fanart="http://i.imgur.com/9loVksV.png"))
itemlist.append(item.clone(action="peliculas", title="Documentales", fanart="http://i.imgur.com/Q7fsFI6.png",
url="http://www.cinefox.tv/catalogue?type=peliculas&genre=documental"))
if config.get_setting("adult_mode") != 0:
itemlist.append(item.clone(action="peliculas", title="Sección Adultos +18",
url="http://www.cinefox.tv/catalogue?type=adultos",
fanart="http://i.imgur.com/kIvE1Zh.png"))
itemlist.append(item.clone(title="Buscar...", action="local_search"))
itemlist.append(item.clone(title="Configurar canal...", text_color="gold", action="configuracion", folder=False))
return itemlist
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = "http://www.cinefox.tv/search?q=%s" % texto
try:
return busqueda(item)
    # Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def local_search(item):
logger.info()
text = ""
if config.get_setting("save_last_search", item.channel):
text = config.get_setting("last_search", item.channel)
from platformcode import platformtools
texto = platformtools.dialog_input(default=text, heading="Buscar en Cinefox")
if texto is None:
return
if config.get_setting("save_last_search", item.channel):
config.set_setting("last_search", texto, item.channel)
return search(item, texto)
def busqueda(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="poster-media-card">(.*?)(?:<li class="search-results-item media-item">|<footer>)'
bloque = scrapertools.find_multiple_matches(data, patron)
for match in bloque:
patron = 'href="([^"]+)" title="([^"]+)".*?src="([^"]+)".*?' \
'<p class="search-results-main-info">.*?del año (\d+).*?' \
'p class.*?>(.*?)<'
matches = scrapertools.find_multiple_matches(match, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, year, plot in matches:
scrapedtitle = scrapedtitle.capitalize()
item.infoLabels["year"] = year
plot = scrapertools.htmlclean(plot)
if "/serie/" in scrapedurl:
action = "episodios"
show = scrapedtitle
scrapedurl += "/episodios"
title = " [Serie]"
contentType = "tvshow"
elif "/pelicula/" in scrapedurl:
action = "menu_info"
show = ""
title = " [Película]"
contentType = "movie"
else:
continue
title = scrapedtitle + title + " (" + year + ")"
itemlist.append(item.clone(action=action, title=title, url=scrapedurl, thumbnail=scrapedthumbnail,
contentTitle=scrapedtitle, fulltitle=scrapedtitle,
plot=plot, show=show, text_color=color2, contentType=contentType))
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
except:
pass
next_page = scrapertools.find_single_match(data, 'href="([^"]+)"[^>]+>Más resultados')
if next_page != "":
next_page = urlparse.urljoin(host, next_page)
itemlist.append(Item(channel=item.channel, action="busqueda", title=">> Siguiente", url=next_page,
thumbnail=item.thumbnail, text_color=color3))
return itemlist
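# filtro() builds a settings dialog on the fly: it scrapes the catalogue's
# dropdown filters (genre, year, quality, language, order), turns them into
# platformtools controls, and hands the user's choices to filtrado(), which
# assembles the final catalogue URL.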
def filtro(item):
logger.info()
list_controls = []
valores = {}
strings = {}
# Se utilizan los valores por defecto/guardados o los del filtro personalizado
if not item.values:
valores_guardados = config.get_setting("filtro_defecto_" + item.extra, item.channel)
else:
valores_guardados = item.values
item.values = ""
if valores_guardados:
dict_values = valores_guardados
else:
dict_values = None
if dict_values:
dict_values["filtro_per"] = 0
excluidos = ['País', 'Películas', 'Series', 'Destacar']
data = httptools.downloadpage(item.url).data
matches = scrapertools.find_multiple_matches(data, '<div class="dropdown-sub[^>]+>(\S+)(.*?)</ul>')
i = 0
for filtro_title, values in matches:
if filtro_title in excluidos: continue
filtro_title = filtro_title.replace("Tendencia", "Ordenar por")
id = filtro_title.replace("Género", "genero").replace("Año", "year").replace(" ", "_").lower()
list_controls.append({'id': id, 'label': filtro_title, 'enabled': True,
'type': 'list', 'default': 0, 'visible': True})
valores[id] = []
valores[id].append('')
strings[filtro_title] = []
list_controls[i]['lvalues'] = []
if filtro_title == "Ordenar por":
list_controls[i]['lvalues'].append('Más recientes')
strings[filtro_title].append('Más recientes')
else:
list_controls[i]['lvalues'].append('Cualquiera')
strings[filtro_title].append('Cualquiera')
patron = '<li>.*?(?:genre|release|quality|language|order)=([^"]+)">([^<]+)<'
matches_v = scrapertools.find_multiple_matches(values, patron)
for value, key in matches_v:
if value == "action-adventure": continue
list_controls[i]['lvalues'].append(key)
valores[id].append(value)
strings[filtro_title].append(key)
i += 1
item.valores = valores
item.strings = strings
if "Filtro Personalizado" in item.title:
return filtrado(item, valores_guardados)
list_controls.append({'id': 'espacio', 'label': '', 'enabled': False,
'type': 'label', 'default': '', 'visible': True})
list_controls.append({'id': 'save', 'label': 'Establecer como filtro por defecto', 'enabled': True,
'type': 'bool', 'default': False, 'visible': True})
list_controls.append({'id': 'filtro_per', 'label': 'Guardar filtro en acceso directo...', 'enabled': True,
'type': 'list', 'default': 0, 'visible': True, 'lvalues': ['No guardar', 'Filtro 1',
'Filtro 2', 'Filtro 3']})
list_controls.append({'id': 'remove', 'label': 'Eliminar filtro personalizado...', 'enabled': True,
'type': 'list', 'default': 0, 'visible': True, 'lvalues': ['No eliminar', 'Filtro 1',
'Filtro 2', 'Filtro 3']})
from platformcode import platformtools
return platformtools.show_channel_settings(list_controls=list_controls, dict_values=dict_values,
caption="Filtra los resultados", item=item, callback='filtrado')
def filtrado(item, values):
values_copy = values.copy()
    # Save the filter so it becomes the one loaded by default
if "save" in values and values["save"]:
values_copy.pop("remove")
values_copy.pop("filtro_per")
values_copy.pop("save")
config.set_setting("filtro_defecto_" + item.extra, values_copy, item.channel)
    # Delete the selected custom filter
if "remove" in values and values["remove"] != 0:
config.set_setting("pers_" + item.extra + str(values["remove"]), "", item.channel)
values_copy = values.copy()
    # Save the filter as a custom shortcut
if "filtro_per" in values and values["filtro_per"] != 0:
index = item.extra + str(values["filtro_per"])
values_copy.pop("filtro_per")
values_copy.pop("save")
values_copy.pop("remove")
config.set_setting("pers_" + index, values_copy, item.channel)
genero = item.valores["genero"][values["genero"]]
year = item.valores["year"][values["year"]]
calidad = item.valores["calidad"][values["calidad"]]
idioma = item.valores["idioma"][values["idioma"]]
order = item.valores["ordenar_por"][values["ordenar_por"]]
strings = []
for key, value in dict(item.strings).items():
key2 = key.replace("Género", "genero").replace("Año", "year").replace(" ", "_").lower()
strings.append(key + ": " + value[values[key2]])
item.valores = "Filtro: " + ", ".join(sorted(strings))
item.strings = ""
item.url = "http://www.cinefox.tv/catalogue?type=%s&genre=%s&release=%s&quality=%s&language=%s&order=%s" % \
(item.extra, genero, year, calidad, idioma, order)
return globals()[item.extra](item)
def seccion_peliculas(item):
logger.info()
itemlist = []
    # Movies section
itemlist.append(item.clone(action="peliculas", title="Novedades", fanart="http://i.imgur.com/PjJaW8o.png",
url="http://www.cinefox.tv/catalogue?type=peliculas"))
itemlist.append(item.clone(action="peliculas", title="Estrenos",
url="http://www.cinefox.tv/estrenos-de-cine", fanart="http://i.imgur.com/PjJaW8o.png"))
itemlist.append(item.clone(action="filtro", title="Filtrar películas", extra="peliculas",
url="http://www.cinefox.tv/catalogue?type=peliculas",
fanart="http://i.imgur.com/PjJaW8o.png"))
    # Custom filters for movies
for i in range(1, 4):
filtros = config.get_setting("pers_peliculas" + str(i), item.channel)
if filtros:
title = "Filtro Personalizado " + str(i)
new_item = item.clone()
new_item.values = filtros
itemlist.append(new_item.clone(action="filtro", title=title, fanart="http://i.imgur.com/PjJaW8o.png",
url="http://www.cinefox.tv/catalogue?type=peliculas", extra="peliculas"))
itemlist.append(item.clone(action="mapa", title="Mapa de películas", extra="peliculas",
url="http://www.cinefox.tv/mapa-de-peliculas",
fanart="http://i.imgur.com/PjJaW8o.png"))
return itemlist
def seccion_series(item):
logger.info()
itemlist = []
    # Series section
itemlist.append(item.clone(action="ultimos", title="Últimos capítulos",
url="http://www.cinefox.tv/ultimos-capitulos", fanart="http://i.imgur.com/9loVksV.png"))
itemlist.append(item.clone(action="series", title="Series recientes",
url="http://www.cinefox.tv/catalogue?type=series",
fanart="http://i.imgur.com/9loVksV.png"))
itemlist.append(item.clone(action="filtro", title="Filtrar series", extra="series",
url="http://www.cinefox.tv/catalogue?type=series",
fanart="http://i.imgur.com/9loVksV.png"))
    # Custom filters for series
for i in range(1, 4):
filtros = config.get_setting("pers_series" + str(i), item.channel)
if filtros:
title = " Filtro Personalizado " + str(i)
new_item = item.clone()
new_item.values = filtros
itemlist.append(new_item.clone(action="filtro", title=title, fanart="http://i.imgur.com/9loVksV.png",
url="http://www.cinefox.tv/catalogue?type=series", extra="series"))
itemlist.append(item.clone(action="mapa", title="Mapa de series", extra="series",
url="http://www.cinefox.tv/mapa-de-series",
fanart="http://i.imgur.com/9loVksV.png"))
return itemlist
def mapa(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<li class="sitemap-initial"> <a class="initial-link" href="([^"]+)">(.*?)</a>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedtitle in matches:
scrapedurl = host + scrapedurl
scrapedtitle = scrapedtitle.capitalize()
itemlist.append(item.clone(title=scrapedtitle, url=scrapedurl, action=item.extra, extra="mapa"))
return itemlist
def peliculas(item):
logger.info()
itemlist = []
if "valores" in item and item.valores:
itemlist.append(item.clone(action="", title=item.valores, text_color=color4))
if __menu_info__:
action = "menu_info"
else:
action = "findvideos"
data = httptools.downloadpage(item.url).data
bloque = scrapertools.find_multiple_matches(data,
'<div class="media-card "(.*?)<div class="hidden-info">')
for match in bloque:
if item.extra == "mapa":
patron = '.*?src="([^"]+)".*?href="([^"]+)">([^<]+)</a>'
matches = scrapertools.find_multiple_matches(match, patron)
for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
url = urlparse.urljoin(host, scrapedurl)
itemlist.append(Item(channel=item.channel, action=action, title=scrapedtitle, url=url, extra="media",
thumbnail=scrapedthumbnail, contentTitle=scrapedtitle, fulltitle=scrapedtitle,
text_color=color2, contentType="movie"))
else:
patron = '<div class="audio-info">(.*?)</div>(.*?)' \
'src="([^"]+)".*?href="([^"]+)">([^<]+)</a>'
matches = scrapertools.find_multiple_matches(match, patron)
for idiomas, calidad, scrapedthumbnail, scrapedurl, scrapedtitle in matches:
calidad = scrapertools.find_single_match(calidad, '<div class="quality-info".*?>([^<]+)</div>')
if calidad:
calidad = calidad.capitalize().replace("Hd", "HD")
audios = []
if "medium-es" in idiomas: audios.append('CAST')
if "medium-vs" in idiomas: audios.append('VOSE')
if "medium-la" in idiomas: audios.append('LAT')
if "medium-en" in idiomas or 'medium-"' in idiomas:
audios.append('V.O')
title = "%s [%s]" % (scrapedtitle, "/".join(audios))
if calidad:
title += " (%s)" % calidad
url = urlparse.urljoin(host, scrapedurl)
itemlist.append(Item(channel=item.channel, action=action, title=title, url=url, extra="media",
thumbnail=scrapedthumbnail, contentTitle=scrapedtitle, fulltitle=scrapedtitle,
text_color=color2, contentType="movie"))
next_page = scrapertools.find_single_match(data, 'href="([^"]+)"[^>]+>Siguiente')
if next_page != "" and item.title != "":
itemlist.append(Item(channel=item.channel, action="peliculas", title=">> Siguiente", url=next_page,
thumbnail=item.thumbnail, extra=item.extra, text_color=color3))
if not config.get_setting("last_page", item.channel) and config.is_xbmc():
itemlist.append(Item(channel=item.channel, action="select_page", title="Ir a página...", url=next_page,
thumbnail=item.thumbnail, text_color=color5))
return itemlist
def ultimos(item):
logger.info()
item.text_color = color2
if __menu_info__:
action = "menu_info_episode"
else:
action = "episodios"
itemlist = []
data = httptools.downloadpage(item.url).data
bloque = scrapertools.find_multiple_matches(data, '<div class="media-card "(.*?)<div class="info-availability '
'one-line">')
for match in bloque:
patron = '<div class="audio-info">(.*?)<img class.*?src="([^"]+)".*?href="([^"]+)">([^<]+)</a>'
matches = scrapertools.find_multiple_matches(match, patron)
for idiomas, scrapedthumbnail, scrapedurl, scrapedtitle in matches:
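# Strip the "1x02"-style episode token from the title to recover the bare show name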
show = re.sub(r'(\s*[\d]+x[\d]+\s*)', '', scrapedtitle)
audios = []
if "medium-es" in idiomas: audios.append('CAST')
if "medium-vs" in idiomas: audios.append('VOSE')
if "medium-la" in idiomas: audios.append('LAT')
if "medium-en" in idiomas or 'medium-"' in idiomas:
audios.append('V.O')
title = "%s - %s" % (show, re.sub(show, '', scrapedtitle))
if audios:
title += " [%s]" % "/".join(audios)
url = urlparse.urljoin(host, scrapedurl)
itemlist.append(item.clone(action=action, title=title, url=url, thumbnail=scrapedthumbnail,
contentTitle=show, fulltitle=show, show=show,
text_color=color2, extra="ultimos", contentType="tvshow"))
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
except:
pass
next_page = scrapertools.find_single_match(data, 'href="([^"]+)"[^>]+>Siguiente')
if next_page != "":
itemlist.append(item.clone(action="ultimos", title=">> Siguiente", url=next_page, text_color=color3))
return itemlist
def series(item):
logger.info()
itemlist = []
if "valores" in item:
itemlist.append(item.clone(action="", title=item.valores, text_color=color4))
data = httptools.downloadpage(item.url).data
bloque = scrapertools.find_multiple_matches(data, '<div class="media-card "(.*?)<div class="hidden-info">')
for match in bloque:
patron = '<img class.*?src="([^"]+)".*?href="([^"]+)">([^<]+)</a>'
matches = scrapertools.find_multiple_matches(match, patron)
for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
url = urlparse.urljoin(host, scrapedurl + "/episodios")
itemlist.append(Item(channel=item.channel, action="episodios", title=scrapedtitle, url=url,
thumbnail=scrapedthumbnail, contentTitle=scrapedtitle, fulltitle=scrapedtitle,
show=scrapedtitle, text_color=color2, contentType="tvshow"))
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
except:
pass
next_page = scrapertools.find_single_match(data, 'href="([^"]+)"[^>]+>Siguiente')
if next_page != "":
title = ">> Siguiente - Página " + scrapertools.find_single_match(next_page, 'page=(\d+)')
itemlist.append(Item(channel=item.channel, action="series", title=title, url=next_page,
thumbnail=item.thumbnail, extra=item.extra, text_color=color3))
return itemlist
def menu_info(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
year = scrapertools.find_single_match(data, '<div class="media-summary">.*?release.*?>(\d+)<')
if year != "" and not "tmdb_id" in item.infoLabels:
try:
from core import tmdb
item.infoLabels["year"] = year
tmdb.set_infoLabels_item(item, __modo_grafico__)
except:
pass
if item.infoLabels["plot"] == "":
sinopsis = scrapertools.find_single_match(data, '<p id="media-plot".*?>.*?\.\.\.(.*?)Si te parece')
item.infoLabels["plot"] = scrapertools.htmlclean(sinopsis)
id = scrapertools.find_single_match(item.url, '/(\d+)/')
data_trailer = httptools.downloadpage("http://www.cinefox.tv/media/trailer?idm=%s&mediaType=1" % id).data
trailer_url = jsontools.load(data_trailer)["video"]["url"]
if trailer_url != "":
item.infoLabels["trailer"] = trailer_url
title = "Ver enlaces %s - [" + item.contentTitle + "]"
itemlist.append(item.clone(action="findvideos", title=title % "Online", extra="media", type="streaming"))
itemlist.append(item.clone(action="findvideos", title=title % "de Descarga", extra="media", type="download"))
itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Buscar Tráiler",
text_color="magenta", context=""))
if config.get_videolibrary_support():
itemlist.append(Item(channel=item.channel, action="add_pelicula_to_library", text_color=color5,
title="Añadir película a la videoteca", url=item.url, thumbnail=item.thumbnail,
fanart=item.fanart, fulltitle=item.fulltitle,
extra="media|"))
return itemlist
def episodios(item):
logger.info()
itemlist = []
if item.extra == "ultimos":
data = httptools.downloadpage(item.url).data
item.url = scrapertools.find_single_match(data, '<a href="([^"]+)" class="h1-like media-title"')
item.url += "/episodios"
data = httptools.downloadpage(item.url).data
data_season = data[:]
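# The first season's episode list is already on the page; each remaining season is fetched with its own request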
if "episodios" in item.extra or not __menu_info__ or item.path:
action = "findvideos"
else:
action = "menu_info_episode"
seasons = scrapertools.find_multiple_matches(data, '<a href="([^"]+)"[^>]+><span class="season-toggle')
for i, url in enumerate(seasons):
if i != 0:
data_season = httptools.downloadpage(url, add_referer=True).data
patron = '<div class="ep-list-number">.*?href="([^"]+)">([^<]+)</a>.*?<span class="name">([^<]+)</span>'
matches = scrapertools.find_multiple_matches(data_season, patron)
for scrapedurl, episode, scrapedtitle in matches:
new_item = item.clone(action=action, url=scrapedurl, text_color=color2, contentType="episode")
new_item.contentSeason = episode.split("x")[0]
new_item.contentEpisodeNumber = episode.split("x")[1]
new_item.title = episode + " - " + scrapedtitle
new_item.extra = "episode"
if "episodios" in item.extra or item.path:
new_item.extra = "episode|"
itemlist.append(new_item)
if "episodios" not in item.extra and not item.path:
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__)
except:
pass
itemlist.reverse()
if "episodios" not in item.extra and not item.path:
id = scrapertools.find_single_match(item.url, '/(\d+)/')
data_trailer = httptools.downloadpage("http://www.cinefox.tv/media/trailer?idm=%s&mediaType=1" % id).data
item.infoLabels["trailer"] = jsontools.load(data_trailer)["video"]["url"]
itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Buscar Tráiler",
text_color="magenta"))
if config.get_videolibrary_support():
itemlist.append(Item(channel=item.channel, action="add_serie_to_library", text_color=color5,
title="Añadir serie a la videoteca", show=item.show, thumbnail=item.thumbnail,
url=item.url, fulltitle=item.fulltitle, fanart=item.fanart,
extra="episodios###episodios",
contentTitle=item.fulltitle))
return itemlist
def menu_info_episode(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
if item.show == "":
item.show = scrapertools.find_single_match(data, 'class="h1-like media-title".*?>([^<]+)</a>')
episode = scrapertools.find_single_match(data, '<span class="indicator">([^<]+)</span>')
item.infoLabels["season"] = episode.split("x")[0]
item.infoLabels["episode"] = episode.split("x")[1]
try:
from core import tmdb
tmdb.set_infoLabels_item(item, __modo_grafico__)
except:
pass
if item.infoLabels["plot"] == "":
sinopsis = scrapertools.find_single_match(data, 'id="episode-plot">(.*?)</p>')
if not "No hay sinopsis" in sinopsis:
item.infoLabels["plot"] = scrapertools.htmlclean(sinopsis)
title = "Ver enlaces %s - [" + item.show + " " + episode + "]"
itemlist.append(item.clone(action="findvideos", title=title % "Online", extra="episode", type="streaming"))
itemlist.append(item.clone(action="findvideos", title=title % "de Descarga", extra="episode", type="download"))
siguiente = scrapertools.find_single_match(data, '<a class="episode-nav-arrow next" href="([^"]+)" title="([^"]+)"')
if siguiente:
titulo = ">> Siguiente Episodio - [" + siguiente[1] + "]"
itemlist.append(item.clone(action="menu_info_episode", title=titulo, url=siguiente[0], extra="",
text_color=color1))
patron = '<a class="episode-nav-arrow previous" href="([^"]+)" title="([^"]+)"'
anterior = scrapertools.find_single_match(data, patron)
if anterior:
titulo = "<< Episodio Anterior - [" + anterior[1] + "]"
itemlist.append(item.clone(action="menu_info_episode", title=titulo, url=anterior[0], extra="",
text_color=color3))
url_serie = scrapertools.find_single_match(data, '<a href="([^"]+)" class="h1-like media-title"')
url_serie += "/episodios"
itemlist.append(item.clone(title="Ir a la lista de capítulos", action="episodios", url=url_serie, extra="",
text_color=color4))
itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Buscar Tráiler",
text_color="magenta", context=""))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
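# By convention item.extra ends in "|" when the item comes from the videolibrary; in that case streaming and download links are listed together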
if not "|" in item.extra and not __menu_info__:
data = httptools.downloadpage(item.url, add_referer=True).data
year = scrapertools.find_single_match(data, '<div class="media-summary">.*?release.*?>(\d+)<')
if year != "" and not "tmdb_id" in item.infoLabels:
try:
from core import tmdb
item.infoLabels["year"] = year
tmdb.set_infoLabels_item(item, __modo_grafico__)
except:
pass
if item.infoLabels["plot"] == "":
sinopsis = scrapertools.find_single_match(data, '<p id="media-plot".*?>.*?\.\.\.(.*?)Si te parece')
item.infoLabels["plot"] = scrapertools.htmlclean(sinopsis)
id = scrapertools.find_single_match(item.url, '/(\d+)/')
if "|" in item.extra or not __menu_info__:
extra = item.extra
if "|" in item.extra:
extra = item.extra[:-1]
url = "http://www.cinefox.tv/sources/list?id=%s&type=%s&order=%s" % (id, extra, "streaming")
itemlist.extend(get_enlaces(item, url, "Online"))
url = "http://www.cinefox.tv/sources/list?id=%s&type=%s&order=%s" % (id, extra, "download")
itemlist.extend(get_enlaces(item, url, "de Descarga"))
if extra == "media":
data_trailer = httptools.downloadpage("http://www.cinefox.tv/media/trailer?idm=%s&mediaType=1" % id).data
trailer_url = jsontools.load(data_trailer)["video"]["url"]
if trailer_url != "":
item.infoLabels["trailer"] = trailer_url
title = "Ver enlaces %s - [" + item.contentTitle + "]"
itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Buscar Tráiler",
text_color="magenta", context=""))
if config.get_videolibrary_support() and not "|" in item.extra:
itemlist.append(Item(channel=item.channel, action="add_pelicula_to_library", text_color=color5,
title="Añadir película a la videoteca", url=item.url, thumbnail=item.thumbnail,
fanart=item.fanart, fulltitle=item.fulltitle,
extra="media|"))
else:
url = "http://www.cinefox.tv/sources/list?id=%s&type=%s&order=%s" % (id, item.extra, item.type)
type = item.type.replace("streaming", "Online").replace("download", "de Descarga")
itemlist.extend(get_enlaces(item, url, type))
return itemlist
def get_enlaces(item, url, type):
itemlist = []
itemlist.append(item.clone(action="", title="Enlaces %s" % type, text_color=color1))
data = httptools.downloadpage(url, add_referer=True).data
patron = '<div class="available-source".*?data-url="([^"]+)".*?class="language.*?title="([^"]+)"' \
'.*?class="source-name.*?>\s*([^<]+)<.*?<span class="quality-text">([^<]+)<'
matches = scrapertools.find_multiple_matches(data, patron)
if matches:
for scrapedurl, idioma, server, calidad in matches:
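# Map the aliases used by the site onto the server names known by servertools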
if server == "streamin": server = "streaminto"
if server == "waaw" or server == "miracine": server = "netutv"
if server == "ul": server = "uploadedto"
if server == "player": server = "vimpleru"
if servertools.is_server_enabled(server):
scrapedtitle = " Ver en " + server.capitalize() + " [" + idioma + "/" + calidad + "]"
itemlist.append(item.clone(action="play", url=scrapedurl, title=scrapedtitle, text_color=color2,
extra="", server=server))
if len(itemlist) == 1:
itemlist.append(item.clone(title=" No hay enlaces disponibles", action="", text_color=color2))
return itemlist
def play(item):
logger.info()
itemlist = []
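# Links that carry an id in item.extra are first resolved through the site's /goto/ redirector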
if item.extra != "":
post = "id=%s" % item.extra
data = httptools.downloadpage("http://www.cinefox.tv/goto/", post=post, add_referer=True).data
item.url = scrapertools.find_single_match(data, 'document.location\s*=\s*"([^"]+)"')
url = item.url.replace("http://miracine.tv/n/?etu=", "http://hqq.tv/player/embed_player.php?vid=")
url = url.replace("streamcloud.eu/embed-", "streamcloud.eu/")
if item.server:
enlaces = servertools.findvideosbyserver(url, item.server)[0]
else:
enlaces = servertools.findvideos(url)[0]
itemlist.append(item.clone(url=enlaces[1], server=enlaces[2]))
return itemlist
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == "peliculas":
item.url = "http://www.cinefox.tv/catalogue?type=peliculas"
item.action = "peliculas"
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()
if categoria == "series":
item.url = "http://www.cinefox.tv/ultimos-capitulos"
item.action = "ultimos"
itemlist = ultimos(item)
if itemlist[-1].action == "ultimos":
itemlist.pop()
# Catch the exception so a single failing channel does not interrupt the "newest" channel
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def select_page(item):
import xbmcgui
dialog = xbmcgui.Dialog()
number = dialog.numeric(0, "Introduce el número de página")
if number != "":
item.url = re.sub(r'page=(\d+)', "page=" + number, item.url)
return peliculas(item)

kodi/channels/cinefoxtv.json Executable file

@@ -0,0 +1,54 @@
{
"id": "cinefoxtv",
"name": "CineFoxTV",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "https://s28.postimg.org/lytn2q1tp/cinefoxtv.png",
"banner": "cinefoxtv.png",
"version": 1,
"changes": [
{
"date": "22/05/2017",
"description": "fix por cambio en la estructura"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "09/02/2017",
"description": "First release."
}
],
"categories": [
"latino",
"movie"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": false
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Peliculas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_infantiles",
"type": "bool",
"label": "Incluir en Novedades - Infantiles",
"default": true,
"enabled": true,
"visible": true
}
]
}

kodi/channels/cinefoxtv.py Executable file

@@ -0,0 +1,209 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
host = 'http://cinefoxtv.net/'
headers = [['User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]
canal = 'cinefoxtv'
tgenero = {"Comedia": "https://s7.postimg.org/ne9g9zgwb/comedia.png",
"Suspenso": "https://s13.postimg.org/wmw6vl1cn/suspenso.png",
"Drama": "https://s16.postimg.org/94sia332d/drama.png",
"Acción": "https://s3.postimg.org/y6o9puflv/accion.png",
"Aventuras": "https://s10.postimg.org/6su40czih/aventura.png",
"Animacion": "https://s13.postimg.org/5on877l87/animacion.png",
"Ciencia Ficcion": "https://s9.postimg.org/diu70s7j3/cienciaficcion.png",
"Terror": "https://s7.postimg.org/yi0gij3gb/terror.png",
"Documentales": "https://s16.postimg.org/7xjj4bmol/documental.png",
"Musical": "https://s29.postimg.org/bbxmdh9c7/musical.png",
"Western": "https://s23.postimg.org/lzyfbjzhn/western.png",
"Belico": "https://s23.postimg.org/71itp9hcr/belica.png",
"Crimen": "https://s4.postimg.org/6z27zhirx/crimen.png",
"Biográfica": "https://s15.postimg.org/5lrpbx323/biografia.png",
"Deporte": "https://s13.postimg.org/xuxf5h06v/deporte.png",
"Fantástico": "https://s10.postimg.org/pbkbs6j55/fantastico.png",
"Estrenos": "https://s21.postimg.org/fy69wzm93/estrenos.png",
"Película 18+": "https://s15.postimg.org/exz7kysjf/erotica.png",
"Thriller": "https://s22.postimg.org/5y9g0jsu9/thriller.png",
"Familiar": "https://s7.postimg.org/6s7vdhqrf/familiar.png",
"Romanticas": "https://s21.postimg.org/xfsj7ua0n/romantica.png",
"Intriga": "https://s27.postimg.org/v9og43u2b/intriga.png",
"Infantil": "https://s23.postimg.org/g5rmazozv/infantil.png"}
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Todas", action="lista", thumbnail='https://s18.postimg.org/fwvaeo6qh/todas.png',
fanart='https://s18.postimg.org/fwvaeo6qh/todas.png', extra='peliculas/',
url=host + 'page/1.html'))
itemlist.append(
itemlist[-1].clone(title="Generos", action="generos", thumbnail='https://s3.postimg.org/5s9jg2wtf/generos.png',
fanart='https://s3.postimg.org/5s9jg2wtf/generos.png', url=host))
itemlist.append(
itemlist[-1].clone(title="Mas Vistas", action="lista", thumbnail='https://s9.postimg.org/wmhzu9d7z/vistas.png',
fanart='https://s9.postimg.org/wmhzu9d7z/vistas.png',
url=host + 'top-peliculas-online/1.html'))
itemlist.append(itemlist[-1].clone(title="Buscar", action="search",
thumbnail='https://s30.postimg.org/pei7txpa9/buscar.png',
fanart='https://s30.postimg.org/pei7txpa9/buscar.png', url=host + 'search/'))
return itemlist
def lista(item):
logger.info()
itemlist = []
duplicado = []
max_items = 24
next_page_url = ''
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|&nbsp;|<br>", "", data)
data = scrapertools.decodeHtmlentities(data)
patron = '"box_image_b.*?"><a href="([^"]+)" title=".*?><img src="([^"]+)" alt="(.*?)(\d{4}).*?"'
matches = re.compile(patron, re.DOTALL).findall(data)
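# Each page holds two blocks of max_items entries: the first pass keeps the first block and flags 'b'; the 'b' pass shows the rest and looks up the real next-page URL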
if item.next_page != 'b':
if len(matches) > max_items:
next_page_url = item.url
matches = matches[:max_items]
next_page = 'b'
else:
matches = matches[max_items:]
next_page = 'a'
patron_next_page = '<a class="page dark gradient" href="([^"]+)">PROXIMO'
matches_next_page = re.compile(patron_next_page, re.DOTALL).findall(data)
if len(matches_next_page) > 0:
next_page_url = urlparse.urljoin(item.url, matches_next_page[0])
for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedyear in matches:
url = scrapedurl
thumbnail = scrapedthumbnail
contentTitle = re.sub(r"\(.*?\)|\/.*?|\(|\)|.*?\/|&excl;", "", scrapedtitle)
title = scrapertools.decodeHtmlentities(contentTitle) + '(' + scrapedyear + ')'
fanart = ''
plot = ''
if url not in duplicado:
itemlist.append(
Item(channel=item.channel, action='findvideos', title=title, url=url, thumbnail=thumbnail, plot=plot,
fanart=fanart, contentTitle=contentTitle, infoLabels={'year': scrapedyear}))
duplicado.append(url)
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
if next_page_url != '':
itemlist.append(Item(channel=item.channel, action="lista", title='Siguiente >>>', url=next_page_url,
thumbnail='https://s16.postimg.org/9okdu7hhx/siguiente.png', extra=item.extra,
next_page=next_page))
return itemlist
def generos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<li><a href="([^"]+)"><i class="fa fa-caret-right"><\/i> <strong>Películas de (.*?)<\/strong><\/a><\/li>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
url = scrapedurl
if scrapedtitle in tgenero:
thumbnail = tgenero[scrapedtitle]
else:
thumbnail = ''
title = scrapedtitle
fanart = ''
plot = ''
if title != 'Series':
itemlist.append(
Item(channel=item.channel, action='lista', title=title, url=url, thumbnail=thumbnail, plot=plot,
fanart=fanart))
return itemlist
def getinfo(page_url):
logger.info()
data = httptools.downloadpage(page_url).data
plot = scrapertools.find_single_match(data, '<\/em>\.(?:\s*|.)(.*?)\s*<\/p>')
info = plot
return info
def findvideos(item):
logger.info()
itemlist = []
info = getinfo(item.url)
data = httptools.downloadpage(item.url, headers=headers).data
patron = 'src="(.*?)" style="border:none;'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl in matches:
itemlist.extend(servertools.find_video_items(data=scrapedurl))
for videoitem in itemlist:
videoitem.title = item.contentTitle + ' (' + videoitem.server + ')'
videoitem.channel = item.channel
videoitem.plot = info
videoitem.action = "play"
videoitem.folder = False
if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos':
itemlist.append(
Item(channel=item.channel, title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]', url=item.url,
action="add_pelicula_to_library", extra="findvideos", contentTitle=item.contentTitle))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "-")
item.url = item.url + texto
if texto != '':
return lista(item)
def newest(categoria):
logger.info()
itemlist = []
item = Item()
# categoria='peliculas'
try:
if categoria == 'peliculas':
item.url = host + 'page/1.html'
elif categoria == 'infantiles':
item.url = host + 'peliculas-de-genero/infantil/1.html'
itemlist = lista(item)
if itemlist[-1].title == 'Siguiente >>>':
itemlist.pop()
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist

kodi/channels/cinehindi.json Executable file

@@ -0,0 +1,19 @@
{
"id": "cinehindi",
"name": "CineHindi",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "cinehindi.png",
"banner": "http://i.imgur.com/cau9TVe.png",
"version": 1,
"changes": [
{
"date": "25/05/2017",
"description": "Primera versión completa del canal"
}
],
"categories": [
"movie"
]
}

kodi/channels/cinehindi.py Executable file

@@ -0,0 +1,148 @@
# -*- coding: UTF-8 -*-
import re
import urlparse
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
host = "http://www.cinehindi.com/"
def mainlist(item):
logger.info()
itemlist = list()
itemlist.append(Item(channel=item.channel, action="genero", title="Generos", url=host))
itemlist.append(Item(channel=item.channel, action="lista", title="Novedades", url=host))
itemlist.append(Item(channel=item.channel, action="proximas", title="Próximas Películas",
url=urlparse.urljoin(host, "proximamente")))
itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=urlparse.urljoin(host, "?s=")))
return itemlist
def genero(item):
logger.info()
itemlist = list()
data = httptools.downloadpage(host).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)
patron_generos = '<ul id="menu-submenu" class=""><li id="menu-item-.+?"(.+)<\/li><\/ul>'
data_generos = scrapertools.find_single_match(data, patron_generos)
patron = 'class="menu-item menu-item-type-taxonomy menu-item-object-category menu-item-.*?"><a href="(.*?)">(.*?)<\/a><\/li>'
matches = scrapertools.find_multiple_matches(data_generos, patron)
for scrapedurl, scrapedtitle in matches:
if scrapedtitle != 'Próximas Películas':
itemlist.append(item.clone(action='lista', title=scrapedtitle, url=scrapedurl))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = item.url + texto
if texto != '':
return lista(item)
def proximas(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)  # Strip tabs, double spaces, line breaks, etc.
patron = 'class="item">.*?'  # Every movie item on this site starts with this
patron += '<a href="([^"]+).*?' # scrapedurl
patron += '<img src="([^"]+).*?' # scrapedthumbnail
patron += 'alt="([^"]+).*?' # scrapedtitle
patron += '<span class="player">.+?<span class="year">([^"]+)<\/span>' # scrapedyear
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedyear in matches:
if "ver" in scrapedurl:
scrapedtitle = scrapedtitle + " [" + scrapedyear + "]"
else:
scrapedtitle = scrapedtitle + " [" + scrapedyear + "]" + '(Proximamente)'
itemlist.append(item.clone(title=scrapedtitle, url=scrapedurl, action="findvideos", extra=scrapedtitle,
show=scrapedtitle, thumbnail=scrapedthumbnail, contentType="movie",
context=["buscar_trailer"]))
# Pagination
patron_pag = '<a rel=.+?nofollow.+? class=.+?page larger.+? href=.+?(.+?)proximamente.+?>([^"]+)<\/a>'
pagina = scrapertools.find_multiple_matches(data, patron_pag)
for next_page_url, i in pagina:
if int(i) == 2:
item.url = next_page_url + 'proximamente/page/' + str(i) + '/'
itemlist.append(Item(channel=item.channel, action="proximas", title=">> Página siguiente", url=item.url,
thumbnail='https://s32.postimg.org/4zppxf5j9/siguiente.png'))
return itemlist
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;", "", data)  # Strip tabs, double spaces, line breaks, etc.
patron = 'class="item">.*?'  # Every movie item on this site starts with this
patron += '<a href="([^"]+).*?' # scrapedurl
patron += '<img src="([^"]+).*?' # scrapedthumbnail
patron += 'alt="([^"]+).*?' # scrapedtitle
patron += '<span class="ttx">([^<]+).*?' # scrapedplot
patron += '<div class="fixyear">(.*?)</span></div></div>' # scrapedfixyear
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedplot, scrapedfixyear in matches:
patron = '<span class="year">([^<]+)' # scrapedyear
scrapedyear = scrapertools.find_single_match(scrapedfixyear, patron)
if scrapedyear:
scrapedtitle += ' (%s)' % (scrapedyear)
patron = '<span class="calidad2">([^<]+).*?' # scrapedquality
scrapedquality = scrapertools.find_single_match(scrapedfixyear, patron)
if scrapedquality:
scrapedtitle += ' [%s]' % (scrapedquality)
itemlist.append(
item.clone(title=scrapedtitle, url=scrapedurl, plot=scrapedplot, action="findvideos", extra=scrapedtitle,
show=scrapedtitle, thumbnail=scrapedthumbnail, contentType="movie", context=["buscar_trailer"]))
# Pagination
patron_genero = '<h1>([^"]+)<\/h1>'
genero = scrapertools.find_single_match(data, patron_genero)
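# Romance and Drama listings use a different "next page" markup on this site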
if genero == "Romance" or genero == "Drama":
patron = "<a rel='nofollow' class=previouspostslink' href='([^']+)'>Siguiente "
else:
patron = "<span class='current'>.+?href='(.+?)'>"
next_page_url = scrapertools.find_single_match(data, patron)
if next_page_url != "":
item.url = next_page_url
itemlist.append(Item(channel=item.channel, action="lista", title=">> Página siguiente", url=next_page_url,
thumbnail='https://s32.postimg.org/4zppxf5j9/siguiente.png'))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
itemlist.extend(servertools.find_video_items(data=data))
patron_show = '<div class="data"><h1 itemprop="name">([^<]+)<\/h1>'
show = scrapertools.find_single_match(data, patron_show)
for videoitem in itemlist:
videoitem.channel = item.channel
if config.get_videolibrary_support() and len(itemlist) > 0:
itemlist.append(
Item(channel=item.channel, title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]', url=item.url,
action="add_pelicula_to_library", extra="findvideos", contentTitle=show))
return itemlist

kodi/channels/cinetemagay.json Executable file

@@ -0,0 +1,23 @@
{
"id": "cinetemagay",
"name": "Cinetemagay",
"active": true,
"adult": true,
"language": "es",
"thumbnail": "cinetemagay.png",
"banner": "cinetemagay.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "05/08/2016",
"description": "Eliminado de sección películas"
}
],
"categories": [
"adult"
]
}

kodi/channels/cinetemagay.py Executable file

@@ -0,0 +1,128 @@
# -*- coding: utf-8 -*-
import os
import re
from core import config
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
IMAGES_PATH = os.path.join(config.get_runtime_path(), 'resources', 'images', 'cinetemagay')
def strip_tags(value):
return re.sub(r'<[^>]*?>', '', value)
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, action="lista", title="Cine gay latinoamericano",
url="http://cinegaylatinoamericano.blogspot.com.es/feeds/posts/default/?max-results=100&start-index=1",
thumbnail="http://www.americaeconomia.com/sites/default/files/imagecache/foto_nota/homosexual1.jpg"))
itemlist.append(Item(channel=item.channel, action="lista", title="Cine y cortos gay",
url="http://cineycortosgay.blogspot.com.es/feeds/posts/default/?max-results=100&start-index=1",
thumbnail="http://www.elmolar.org/wp-content/uploads/2015/05/cortometraje.jpg"))
itemlist.append(Item(channel=item.channel, action="lista", title="Cine gay online (México)",
url="http://cinegayonlinemexico.blogspot.com.es/feeds/posts/default/?max-results=100&start-index=1",
thumbnail="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTmmqL6tS2Ced1VoxlGQT0q-ibPEz1DCV3E1waHFDI5KT0pg1lJ"))
itemlist.append(Item(channel=item.channel, action="lista", title="Sentido gay",
url="http://www.sentidogay.blogspot.com.es//feeds/posts/default/?max-results=100&start-index=1",
thumbnail="http://1.bp.blogspot.com/-epOPgDD_MQw/VPGZGQOou1I/AAAAAAAAAkI/lC25GrukDuo/s1048/SentidoGay.jpg"))
itemlist.append(Item(channel=item.channel, action="lista", title="PGPA",
url="http://pgpa.blogspot.com.es/feeds/posts/default/?max-results=100&start-index=1",
thumbnail="http://themes.googleusercontent.com/image?id=0BwVBOzw_-hbMNTRlZjk2YWMtYTVlMC00ZjZjLWI3OWEtMWEzZDEzYWVjZmQ4"))
return itemlist
def lista(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cache_page(item.url)
# Extract the entries (folders)
patronvideos = '&lt;img .*?src=&quot;(.*?)&quot;'
patronvideos += "(.*?)<link rel='alternate' type='text/html' href='([^']+)' title='([^']+)'.*?>"
matches = re.compile(patronvideos, re.DOTALL).findall(data)
for match in matches:
scrapedtitle = match[3]
scrapedtitle = scrapedtitle.replace("&apos;", "'")
scrapedtitle = scrapedtitle.replace("&quot;", "'")
scrapedtitle = scrapedtitle.replace("&amp;amp;", "'")
scrapedtitle = scrapedtitle.replace("&amp;#39;", "'")
scrapedurl = match[2]
scrapedthumbnail = match[0]
imagen = ""
scrapedplot = match[1]
tipo = match[1]
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
scrapedplot = "<" + scrapedplot
scrapedplot = scrapedplot.replace("&gt;", ">")
scrapedplot = scrapedplot.replace("&lt;", "<")
scrapedplot = scrapedplot.replace("</div>", "\n")
scrapedplot = scrapedplot.replace("<br />", "\n")
scrapedplot = scrapedplot.replace("&amp;", "")
scrapedplot = scrapedplot.replace("nbsp;", "")
scrapedplot = strip_tags(scrapedplot)
itemlist.append(
Item(channel=item.channel, action="detail", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
plot=scrapedurl + scrapedplot, folder=True))
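# Blogger feeds are paged with start-index; advance it by the 100-entry page size to build the next page URL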
variable = item.url.split("index=")[1]
variable = int(variable)
variable += 100
variable = str(variable)
variable_url = item.url.split("index=")[0]
url_nueva = variable_url + "index=" + variable
itemlist.append(
Item(channel=item.channel, action="lista", title="Ir a la página siguiente (desde " + variable + ")",
url=url_nueva, thumbnail="", plot="Pasar a la página siguiente (en grupos de 100)\n\n" + url_nueva))
return itemlist
def detail(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cachePage(item.url)
data = data.replace("%3A", ":")
data = data.replace("%2F", "/")
data = data.replace("%3D", "=")
data = data.replace("%3", "?")
data = data.replace("%26", "&")
descripcion = ""
plot = ""
patrondescrip = 'SINOPSIS:(.*?)'
matches = re.compile(patrondescrip, re.DOTALL).findall(data)
if len(matches) > 0:
descripcion = matches[0]
descripcion = descripcion.replace("&nbsp;", "")
descripcion = descripcion.replace("<br/>", "")
descripcion = descripcion.replace("\r", "")
descripcion = descripcion.replace("\n", " ")
descripcion = descripcion.replace("\t", " ")
descripcion = re.sub("<[^>]+>", " ", descripcion)
try:
plot = unicode(descripcion, "utf-8").encode("iso-8859-1")
except:
plot = descripcion
# Look for links to videos hosted on known servers
video_itemlist = servertools.find_video_items(data=data)
for video_item in video_itemlist:
itemlist.append(Item(channel=item.channel, action="play", server=video_item.server,
title=item.title + " " + video_item.title, url=video_item.url, thumbnail=item.thumbnail,
plot=video_item.url, folder=False))
return itemlist

kodi/channels/cinetux.json Executable file

@@ -0,0 +1,124 @@
{
"id": "cinetux",
"name": "Cinetux",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "cinetux.png",
"banner": "cinetux.png",
"fanart": "cinetux.jpg",
"version": 1,
"changes": [
{
"date": "12/05/2017",
"description": "Arreglada paginación y enlaces directos"
},
{
"date": "16/02/2017",
"description": "Adaptado a httptools y añadidos enlaces directos"
},
{
"date": "08/07/2016",
"description": "Correciones y adaptaciones a la nueva version"
}
],
"categories": [
"latino",
"movie"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Películas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_documentales",
"type": "bool",
"label": "Incluir en Novedades - Documentales",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_infantiles",
"type": "bool",
"label": "Incluir en Novedades - Infantiles",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": [
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
},
{
"id": "filterlanguages",
"type": "list",
"label": "Mostrar enlaces en idioma...",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": [
"VOSE",
"Latino",
"Español",
"No filtrar"
]
},
{
"id": "filterlinks",
"type": "list",
"label": "Mostrar enlaces de tipo...",
"default": 2,
"enabled": true,
"visible": true,
"lvalues": [
"Solo Descarga",
"Solo Online",
"No filtrar"
]
},
{
"id": "viewmode",
"type": "list",
"label": "Elegir vista por defecto (Confluence)...",
"default": 2,
"enabled": true,
"visible": true,
"lvalues": [
"Sinopsis",
"Miniatura",
"Lista"
]
}
]
}

kodi/channels/cinetux.py Executable file

@@ -0,0 +1,354 @@
# -*- coding: utf-8 -*-
import urlparse
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
CHANNEL_HOST = "http://www.cinetux.net/"
# Channel settings
__modo_grafico__ = config.get_setting('modo_grafico', 'cinetux')
__perfil__ = config.get_setting('perfil', 'cinetux')
# Set the color profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']]
color1, color2, color3 = perfil[__perfil__]
viewmode_options = {0: 'movie_with_plot', 1: 'movie', 2: 'list'}
viewmode = viewmode_options[config.get_setting('viewmode', 'cinetux')]
def mainlist(item):
logger.info()
itemlist = []
item.viewmode = viewmode
itemlist.append(item.clone(title="Películas", text_color=color2, action="", text_bold=True))
itemlist.append(item.clone(action="peliculas", title=" Novedades", url=CHANNEL_HOST,
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Directors%20Chair.png",
text_color=color1))
itemlist.append(item.clone(action="vistas", title=" Más vistas", url="http://www.cinetux.net/mas-vistos/",
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Favorites.png",
text_color=color1))
itemlist.append(item.clone(action="idioma", title=" Por idioma", text_color=color1))
itemlist.append(item.clone(action="generos", title=" Por géneros", url=CHANNEL_HOST,
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Genre.png",
text_color=color1))
url = urlparse.urljoin(CHANNEL_HOST, "genero/documental/")
itemlist.append(item.clone(title="Documentales", text_bold=True, text_color=color2, action=""))
itemlist.append(item.clone(action="peliculas", title=" Novedades", url=url, text_color=color1,
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Documentaries.png"))
url = urlparse.urljoin(CHANNEL_HOST, "genero/documental/?orderby=title&order=asc&gdsr_order=asc")
itemlist.append(item.clone(action="peliculas", title=" Por orden alfabético", text_color=color1, url=url,
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/A-Z.png"))
itemlist.append(item.clone(title="", action=""))
itemlist.append(item.clone(action="search", title="Buscar...", text_color=color3))
itemlist.append(item.clone(action="configuracion", title="Configurar canal...", text_color="gold", folder=False))
return itemlist
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def search(item, texto):
logger.info()
item.url = "http://www.cinetux.net/?s="
texto = texto.replace(" ", "+")
item.url = item.url + texto
try:
return peliculas(item)
# Catch the exception so a single failing channel does not interrupt the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = CHANNEL_HOST
item.action = "peliculas"
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()
elif categoria == 'documentales':
item.url = urlparse.urljoin(CHANNEL_HOST, "genero/documental/")
item.action = "peliculas"
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()
elif categoria == 'infantiles':
item.url = urlparse.urljoin(CHANNEL_HOST, "genero/infantil/")
item.action = "peliculas"
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()
# Catch the exception so a single failing channel does not interrupt the "newest" channel
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def peliculas(item):
logger.info()
itemlist = []
item.text_color = color2
# Download the page
data = httptools.downloadpage(item.url).data
# Extract the entries (folders)
patron = '<div class="item">.*?<div class="audio">\s*([^<]*)<.*?href="([^"]+)".*?src="([^"]+)"' \
'.*?<h3 class="name"><a.*?>([^<]+)</a>'
matches = scrapertools.find_multiple_matches(data, patron)
for calidad, scrapedurl, scrapedthumbnail, scrapedtitle in matches:
try:
fulltitle, year = scrapedtitle.rsplit("(", 1)
year = scrapertools.get_match(year, '(\d{4})')
if "/" in fulltitle:
fulltitle = fulltitle.split(" /", 1)[0]
scrapedtitle = "%s (%s)" % (fulltitle, year)
except:
fulltitle = scrapedtitle
year = ""
if calidad:
scrapedtitle += " [%s]" % calidad
new_item = item.clone(action="findvideos", title=scrapedtitle, fulltitle=fulltitle,
url=scrapedurl, thumbnail=scrapedthumbnail,
contentTitle=fulltitle, contentType="movie")
if year:
new_item.infoLabels['year'] = int(year)
itemlist.append(new_item)
try:
tmdb.set_infoLabels(itemlist, __modo_grafico__)
except:
pass
# Extract the pager
next_page_link = scrapertools.find_single_match(data, '<a href="([^"]+)"\s*><span [^>]+>&raquo;</span>')
if next_page_link:
itemlist.append(item.clone(action="peliculas", title=">> Página siguiente", url=next_page_link,
text_color=color3))
return itemlist
def vistas(item):
logger.info()
itemlist = []
item.text_color = color2
# Download the page
data = httptools.downloadpage(item.url).data
# Extract the entries (folders)
patron = '<li class="item">.*?href="([^"]+)".*?src="([^"]+)"' \
'.*?<h3 class="name"><a.*?>([^<]+)</a>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
new_item = item.clone(action="findvideos", title=scrapedtitle, fulltitle=scrapedtitle,
url=scrapedurl, thumbnail=scrapedthumbnail,
contentTitle=scrapedtitle, contentType="movie")
itemlist.append(new_item)
# Extract the pager
next_page_link = scrapertools.find_single_match(data, '<a href="([^"]+)"\s+><span [^>]+>&raquo;</span>')
if next_page_link:
itemlist.append(item.clone(action="vistas", title=">> Página siguiente", url=next_page_link, text_color=color3))
return itemlist
def generos(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url).data
bloque = scrapertools.find_single_match(data, '<div class="sub_title">Géneros</div>(.*?)</ul>')
# Extract the entries
patron = '<li><a href="([^"]+)">(.*?)</li>'
matches = scrapertools.find_multiple_matches(bloque, patron)
for scrapedurl, scrapedtitle in matches:
scrapedtitle = scrapertools.htmlclean(scrapedtitle).strip()
scrapedtitle = unicode(scrapedtitle, "utf8").capitalize().encode("utf8")
if scrapedtitle == "Erotico" and config.get_setting("adult_mode") == '0':
continue
itemlist.append(item.clone(action="peliculas", title=scrapedtitle, url=scrapedurl))
return itemlist
def idioma(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="peliculas", title="Español", url="http://www.cinetux.net/idioma/espanol/"))
itemlist.append(item.clone(action="peliculas", title="Latino", url="http://www.cinetux.net/idioma/latino/"))
itemlist.append(item.clone(action="peliculas", title="VOSE", url="http://www.cinetux.net/idioma/subtitulado/"))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
try:
filtro_idioma = config.get_setting("filterlanguages", item.channel)
filtro_enlaces = config.get_setting("filterlinks", item.channel)
except:
filtro_idioma = 3
filtro_enlaces = 2
dict_idiomas = {'Español': 2, 'Latino': 1, 'Subtitulado': 0}
# Look up the synopsis
data = httptools.downloadpage(item.url).data
year = scrapertools.find_single_match(data, '<h1><span>.*?rel="tag">([^<]+)</a>')
if year and item.extra != "library":
item.infoLabels['year'] = int(year)
# Enrich the info with TMDB data
if not item.infoLabels['plot']:
try:
tmdb.set_infoLabels(item, __modo_grafico__)
except:
pass
if not item.infoLabels.get('plot'):
plot = scrapertools.find_single_match(data, '<div class="sinopsis"><p>(.*?)</p>')
item.infoLabels['plot'] = plot
if filtro_enlaces != 0:
list_enlaces = bloque_enlaces(data, filtro_idioma, dict_idiomas, "online", item)
if list_enlaces:
itemlist.append(item.clone(action="", title="Enlaces Online", text_color=color1,
text_bold=True))
itemlist.extend(list_enlaces)
if filtro_enlaces != 1:
list_enlaces = bloque_enlaces(data, filtro_idioma, dict_idiomas, "descarga", item)
if list_enlaces:
itemlist.append(item.clone(action="", title="Enlaces Descarga", text_color=color1,
text_bold=True))
itemlist.extend(list_enlaces)
if itemlist:
itemlist.append(item.clone(channel="trailertools", title="Buscar Tráiler", action="buscartrailer", context="",
text_color="magenta"))
# "Add this movie to the XBMC video library" option
if item.extra != "library":
if config.get_videolibrary_support():
itemlist.append(Item(channel=item.channel, title="Añadir a la videoteca", text_color="green",
filtro=True, action="add_pelicula_to_library", url=item.url,
infoLabels={'title': item.fulltitle}, fulltitle=item.fulltitle,
extra="library"))
else:
itemlist.append(item.clone(title="No hay enlaces disponibles", action="", text_color=color3))
return itemlist
def bloque_enlaces(data, filtro_idioma, dict_idiomas, type, item):
logger.info()
lista_enlaces = []
matches = []
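# "Online" links sit in language tabs whose iframes are looked up by tab id; download links come from the table block below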
if type == "online":
patron = '<a href="#([^"]+)" data-toggle="tab">([^<]+)</a>'
bloques = scrapertools.find_multiple_matches(data, patron)
for id, language in bloques:
patron = 'id="' + id + '">.*?<iframe src="([^"]+)"'
url = scrapertools.find_single_match(data, patron)
matches.append([url, "", language])
bloque2 = scrapertools.find_single_match(data, '<div class="table-link" id="%s">(.*?)</table>' % type)
patron = 'tr>[^<]+<td>.*?href="([^"]+)".*?src.*?title="([^"]+)"' \
'.*?src.*?title="([^"]+)".*?src.*?title="(.*?)"'
matches.extend(scrapertools.find_multiple_matches(bloque2, patron))
filtrados = []
for match in matches:
scrapedurl = match[0]
language = match[2].strip()
title = " Mirror en %s (" + language + ")"
if len(match) == 4:
title += " (Calidad " + match[3].strip() + ")"
if filtro_idioma == 3 or item.filtro:
lista_enlaces.append(item.clone(title=title, action="play", text_color=color2,
url=scrapedurl, idioma=language, extra=item.url))
else:
idioma = dict_idiomas[language]
if idioma == filtro_idioma:
lista_enlaces.append(item.clone(title=title, text_color=color2, action="play", url=scrapedurl,
extra=item.url))
else:
if language not in filtrados:
filtrados.append(language)
lista_enlaces = servertools.get_servers_itemlist(lista_enlaces, lambda i: i.title % i.server)
if filtro_idioma != 3:
if len(filtrados) > 0:
title = "Mostrar enlaces filtrados en %s" % ", ".join(filtrados)
lista_enlaces.append(item.clone(title=title, action="findvideos", url=item.url, text_color=color3,
filtro=True))
return lista_enlaces
def play(item):
logger.info()
itemlist = []
if "api.cinetux" in item.url:
data = httptools.downloadpage(item.url, headers={'Referer': item.extra}).data.replace("\\", "")
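# Direct links expose a jwplayer-style "sources" array; build one entry per quality, sorted by label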
bloque = scrapertools.find_single_match(data, 'sources:\s*(\[.*?\])')
if bloque:
bloque = eval(bloque)
video_urls = []
for b in bloque:
ext = b["type"].replace("video/", "")
video_urls.append([".%s %sp [directo]" % (ext, b["label"]), b["file"], b["label"]])
video_urls.sort(key=lambda vdu: vdu[2])
for v in video_urls:
itemlist.append([v[0], v[1]])
else:
return [item]
return itemlist

kodi/channels/clasicofilm.json Executable file

@@ -0,0 +1,63 @@
{
"id": "clasicofilm",
"name": "ClasicoFilm",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/F7sevVu.jpg?1",
"banner": "clasicofilm.png",
"version": 1,
"changes": [
{
"date": "28/05/2017",
"description": "Corregido findvideos"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "07/02/17",
"description": "Fix bug in newest"
},
{
"date": "09/01/2017",
"description": "Primera version"
}
],
"categories": [
"movie"
],
"settings": [
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Películas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": [
"Sin color",
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
}
]
}

kodi/channels/clasicofilm.py Executable file

@@ -0,0 +1,256 @@
# -*- coding: utf-8 -*-
import re
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import tmdb
from core.item import Item
host = "http://www.clasicofilm.com/"
# Channel settings
__modo_grafico__ = config.get_setting('modo_grafico', 'clasicofilm')
__perfil__ = config.get_setting('perfil', 'clasicofilm')
# Set the color profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']]
if __perfil__ - 1 >= 0:
color1, color2, color3 = perfil[__perfil__ - 1]
else:
color1 = color2 = color3 = ""
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Películas", text_color=color2, action="", text_bold=True))
itemlist.append(item.clone(action="peliculas", title=" Novedades",
url="http://www.clasicofilm.com/feeds/posts/summary?start-index=1&max-results=20&alt=json-in-script&callback=finddatepost",
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Directors%20Chair.png",
text_color=color1))
itemlist.append(item.clone(action="generos", title=" Por géneros", url=host,
thumbnail="https://raw.githubusercontent.com/master-1970/resources/master/images/genres"
"/0/Genre.png",
text_color=color1))
itemlist.append(item.clone(title="", action=""))
itemlist.append(item.clone(action="search", title="Buscar...", text_color=color3))
itemlist.append(item.clone(action="configuracion", title="Configurar canal...", text_color="gold", folder=False))
return itemlist
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def search(item, texto):
logger.info()
data = httptools.downloadpage(host).data
cx = scrapertools.find_single_match(data, "var cx = '([^']+)'")
texto = texto.replace(" ", "%20")
item.url = "https://www.googleapis.com/customsearch/v1element?key=AIzaSyCVAXiUzRYsML1Pv6RwSG1gunmMikTzQqY&rsz=filtered_cse&num=20&hl=es&sig=0c3990ce7a056ed50667fe0c3873c9b6&cx=%s&q=%s&sort=&googlehost=www.google.com&start=0" % (
cx, texto)
try:
return busqueda(item)
# Catch the exception so a single failing channel does not interrupt the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def newest(categoria):
logger.info()
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = "http://www.clasicofilm.com/feeds/posts/summary?start-index=1&max-results=20&alt=json-in-script&callback=finddatepost"
item.action = "peliculas"
itemlist = peliculas(item)
if itemlist[-1].action == "peliculas":
itemlist.pop()
# Catch the exception so a single failing channel does not interrupt the "newest" channel
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def peliculas(item):
logger.info()
itemlist = []
item.text_color = color2
# Download the page
data = httptools.downloadpage(item.url).data
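# The Blogger feed is wrapped in a finddatepost(...) JSONP callback; extract the bare JSON before parsing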
data = scrapertools.find_single_match(data, 'finddatepost\((\{.*?\]\}\})\);')
from core import jsontools
data = jsontools.load(data)["feed"]
for entry in data["entry"]:
for link in entry["link"]:
if link["rel"] == "alternate":
title = link["title"]
url = link["href"]
break
thumbnail = entry["media$thumbnail"]["url"].replace("s72-c/", "")
try:
title_split = re.split(r"\s*\((\d)", title, 1)
year = title_split[1] + scrapertools.find_single_match(title_split[2], '(\d{3})\)')
fulltitle = title_split[0]
except:
fulltitle = title
year = ""
if not "DVD" in title and not "HDTV" in title and not "HD-" in title:
continue
infolabels = {'year': year}
new_item = item.clone(action="findvideos", title=title, fulltitle=fulltitle,
url=url, thumbnail=thumbnail, infoLabels=infolabels,
contentTitle=fulltitle, contentType="movie")
itemlist.append(new_item)
try:
tmdb.set_infoLabels(itemlist, __modo_grafico__)
except:
pass
actualpage = int(scrapertools.find_single_match(item.url, 'start-index=(\d+)'))
totalresults = int(data["openSearch$totalResults"]["$t"])
if actualpage + 20 < totalresults:
url_next = item.url.replace("start-index=" + str(actualpage), "start-index=" + str(actualpage + 20))
itemlist.append(Item(channel=item.channel, action=item.action, title=">> Página Siguiente", url=url_next))
return itemlist
def busqueda(item):
logger.info()
itemlist = []
item.text_color = color2
# Download the page
data = httptools.downloadpage(item.url).data
from core import jsontools
data = jsontools.load(data)
for entry in data["results"]:
try:
title = entry["richSnippet"]["metatags"]["ogTitle"]
url = entry["richSnippet"]["metatags"]["ogUrl"]
thumbnail = entry["richSnippet"]["metatags"]["ogImage"]
except:
continue
try:
title_split = re.split(r"\s*\((\d)", title, 1)
year = title_split[1] + scrapertools.find_single_match(title_split[2], '(\d{3})\)')
fulltitle = title_split[0]
except:
fulltitle = title
year = ""
if not "DVD" in title and not "HDTV" in title and not "HD-" in title:
continue
infolabels = {'year': year}
new_item = item.clone(action="findvideos", title=title, fulltitle=fulltitle,
url=url, thumbnail=thumbnail, infoLabels=infolabels,
contentTitle=fulltitle, contentType="movie")
itemlist.append(new_item)
try:
tmdb.set_infoLabels(itemlist, __modo_grafico__)
except:
pass
actualpage = int(scrapertools.find_single_match(item.url, 'start=(\d+)'))
totalresults = int(data["cursor"]["resultCount"])
if actualpage + 20 <= totalresults:
url_next = item.url.replace("start=" + str(actualpage), "start=" + str(actualpage + 20))
itemlist.append(Item(channel=item.channel, action="busqueda", title=">> Página Siguiente", url=url_next))
return itemlist
def generos(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url).data
patron = '<b>([^<]+)</b><br/>\s*<script src="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedtitle, scrapedurl in matches:
scrapedurl = scrapedurl.replace("max-results=500", "start-index=1&max-results=20") \
.replace("recentpostslist", "finddatepost")
itemlist.append(Item(channel=item.channel, action="peliculas", title=scrapedtitle, url=scrapedurl,
thumbnail=item.thumbnail, text_color=color3))
itemlist.sort(key=lambda x: x.title)
return itemlist
def findvideos(item):
from core import servertools
if item.infoLabels["tmdb_id"]:
tmdb.set_infoLabels_item(item, __modo_grafico__)
data = httptools.downloadpage(item.url).data
iframe = scrapertools.find_single_match(data, '<iframe src="([^"]+)"')
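# goo.gl short links only reveal the real host in the redirect's Location header, fetched without following it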
if "goo.gl/" in iframe:
data += httptools.downloadpage(iframe, follow_redirects=False, only_headers=True).headers.get("location", "")
itemlist = servertools.find_video_items(item, data)
library_path = config.get_videolibrary_path()
if config.get_videolibrary_support():
title = "Añadir película a la videoteca"
if item.infoLabels["imdb_id"] and not library_path.lower().startswith("smb://"):
try:
from core import filetools
movie_path = filetools.join(config.get_videolibrary_path(), 'CINE')
files = filetools.walk(movie_path)
for dirpath, dirname, filename in files:
for f in filename:
if item.infoLabels["imdb_id"] in f and f.endswith(".nfo"):
from core import videolibrarytools
head_nfo, it = videolibrarytools.read_nfo(filetools.join(dirpath, dirname, f))
canales = it.library_urls.keys()
canales.sort()
if "clasicofilm" in canales:
canales.pop(canales.index("clasicofilm"))
canales.insert(0, "[COLOR red]clasicofilm[/COLOR]")
title = "Película ya en tu videoteca. [%s] ¿Añadir?" % ",".join(canales)
break
except:
import traceback
logger.error(traceback.format_exc())
itemlist.append(item.clone(action="add_pelicula_to_library", title=title))
token_auth = config.get_setting("token_trakt", "tvmoviedb")
if token_auth and item.infoLabels["tmdb_id"]:
itemlist.append(item.clone(channel="tvmoviedb", title="[Trakt] Gestionar con tu cuenta", action="menu_trakt",
extra="movie"))
return itemlist

kodi/channels/copiapop.json Executable file

@@ -0,0 +1,85 @@
{
"id": "copiapop",
"name": "Copiapop/Diskokosmiko",
"language": "es",
"active": true,
"adult": false,
"version": 1,
"changes": [
{
"date": "15/03/2017",
"autor": "SeiTaN",
"description": "limpieza código"
},
{
"date": "16/02/2017",
"autor": "Cmos",
"description": "Primera versión"
}
],
"thumbnail": "http://i.imgur.com/EjbfM7p.png?1",
"banner": "copiapop.png",
"categories": [
"movie",
"tvshow"
],
"settings": [
{
"id": "copiapopuser",
"type": "text",
"color": "0xFF25AA48",
"label": "Usuario Copiapop",
"enabled": true,
"visible": true
},
{
"id": "copiapoppassword",
"type": "text",
"color": "0xFF25AA48",
"hidden": true,
"label": "Password Copiapop",
"enabled": "!eq(-1,'')",
"visible": true
},
{
"id": "diskokosmikouser",
"type": "text",
"color": "0xFFC52020",
"label": "Usuario Diskokosmiko",
"enabled": true,
"visible": true
},
{
"id": "diskokosmikopassword",
"type": "text",
"color": "0xFFC52020",
"hidden": true,
"label": "Password Diskokosmiko",
"enabled": "!eq(-1,'')",
"visible": true
},
{
"id": "adult_content",
"type": "bool",
"color": "0xFFd50b0b",
"label": "Mostrar contenido adulto en las búsquedas",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": [
"Sin color",
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
}
]
}
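A note on the "settings" block above (the other channel .json files in this commit follow the same schema): each entry becomes a per-channel option that the channel module reads back with config.get_setting(id, channel_id), as copiapop.py below does, and conditional strings such as "!eq(-1,'')" appear to enable a control based on a neighbouring control's value (here, only while the field directly above is non-empty). A minimal sketch of the read side, using the ids defined above:

from core import config

# per-channel options defined in copiapop.json ("copiapop" is the channel id)
user = config.get_setting("copiapopuser", "copiapop")
password = config.get_setting("copiapoppassword", "copiapop")
show_adult = config.get_setting("adult_content", "copiapop")  # bool
colour_profile = config.get_setting("perfil", "copiapop")  # index into "lvalues"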

427
kodi/channels/copiapop.py Executable file
View File

@@ -0,0 +1,427 @@
# -*- coding: utf-8 -*-
import re
import threading
from core import config
from core import filetools
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
__perfil__ = config.get_setting('perfil', "copiapop")
# Set the colour profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00', '0xFFFE2E2E', '0xFF088A08'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E', '0xFFFE2E2E', '0xFF088A08'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE', '0xFFFE2E2E', '0xFF088A08']]
if __perfil__ - 1 >= 0:
color1, color2, color3, color4, color5 = perfil[__perfil__ - 1]
else:
color1 = color2 = color3 = color4 = color5 = ""
adult_content = config.get_setting("adult_content", "copiapop")
def login(pagina):
logger.info()
try:
user = config.get_setting("%suser" % pagina.split(".")[0], "copiapop")
password = config.get_setting("%spassword" % pagina.split(".")[0], "copiapop")
if pagina == "copiapop.com":
if user == "" and password == "":
return False, "Para ver los enlaces de copiapop es necesario registrarse en copiapop.com"
elif user == "" or password == "":
return False, "Copiapop: Usuario o contraseña en blanco. Revisa tus credenciales"
else:
if user == "" or password == "":
return False, "DiskoKosmiko: Usuario o contraseña en blanco. Revisa tus credenciales"
data = httptools.downloadpage("http://%s" % pagina).data
if re.search(r'(?i)%s' % re.escape(user), data):
return True, ""
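# the login form embeds an anti-forgery token that must be echoed back in the POST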
token = scrapertools.find_single_match(data, 'name="__RequestVerificationToken".*?value="([^"]+)"')
post = "__RequestVerificationToken=%s&UserName=%s&Password=%s" % (token, user, password)
headers = {'X-Requested-With': 'XMLHttpRequest'}
url_log = "http://%s/action/Account/Login" % pagina
data = httptools.downloadpage(url_log, post, headers).data
if "redirectUrl" in data:
logger.info("Login correcto")
return True, ""
else:
logger.error("Error en el login")
return False, "Nombre de usuario no válido. Comprueba tus credenciales"
except:
import traceback
logger.error(traceback.format_exc())
return False, "Error durante el login. Comprueba tus credenciales"
def mainlist(item):
logger.info()
itemlist = []
item.text_color = color1
logueado, error_message = login("copiapop.com")
if not logueado:
itemlist.append(item.clone(title=error_message, action="configuracion", folder=False))
else:
item.extra = "http://copiapop.com"
itemlist.append(item.clone(title="Copiapop", action="", text_color=color2))
itemlist.append(
item.clone(title=" Búsqueda", action="search", url="http://copiapop.com/action/SearchFiles"))
itemlist.append(item.clone(title=" Colecciones", action="colecciones",
url="http://copiapop.com/action/home/MoreNewestCollections?pageNumber=1"))
itemlist.append(item.clone(title=" Búsqueda personalizada", action="filtro",
url="http://copiapop.com/action/SearchFiles"))
itemlist.append(item.clone(title=" Mi cuenta", action="cuenta"))
item.extra = "http://diskokosmiko.mx/"
itemlist.append(item.clone(title="DiskoKosmiko", action="", text_color=color2))
itemlist.append(item.clone(title=" Búsqueda", action="search", url="http://diskokosmiko.mx/action/SearchFiles"))
itemlist.append(item.clone(title=" Colecciones", action="colecciones",
url="http://diskokosmiko.mx/action/home/MoreNewestCollections?pageNumber=1"))
itemlist.append(item.clone(title=" Búsqueda personalizada", action="filtro",
url="http://diskokosmiko.mx/action/SearchFiles"))
itemlist.append(item.clone(title=" Mi cuenta", action="cuenta"))
itemlist.append(item.clone(action="", title=""))
folder_thumb = filetools.join(config.get_data_path(), 'thumbs_copiapop')
files = filetools.listdir(folder_thumb)
if files:
itemlist.append(
item.clone(title="Eliminar caché de imágenes (%s)" % len(files), action="delete_cache", text_color="red"))
itemlist.append(item.clone(title="Configuración del canal", action="configuracion", text_color="gold"))
return itemlist
def search(item, texto):
logger.info()
item.post = "Mode=List&Type=Video&Phrase=%s&SizeFrom=0&SizeTo=0&Extension=&ref=pager&pageNumber=1" % texto.replace(
" ", "+")
try:
return listado(item)
except:
import sys, traceback
for line in sys.exc_info():
logger.error("%s" % line)
logger.error(traceback.format_exc())
return []
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def listado(item):
logger.info()
itemlist = []
data_thumb = ""
if item.post:
# the gallery view of the same listing exposes the full-size thumbnail urls
data_thumb = httptools.downloadpage(item.url, item.post.replace("Mode=List", "Mode=Gallery")).data
else:
item.url = item.url.replace("/gallery,", "/list,")
data = httptools.downloadpage(item.url, item.post).data
data = re.sub(r"\n|\r|\t|\s{2}|&nbsp;|<br>", "", data)
folder = filetools.join(config.get_data_path(), 'thumbs_copiapop')
patron = '<div class="size">(.*?)</div></div></div>'
bloques = scrapertools.find_multiple_matches(data, patron)
for block in bloques:
if "adult_info" in block and not adult_content:
continue
size = scrapertools.find_single_match(block, '<p>([^<]+)</p>')
scrapedurl, scrapedtitle = scrapertools.find_single_match(block,
'<div class="name"><a href="([^"]+)".*?>([^<]+)<')
scrapedthumbnail = scrapertools.find_single_match(block, "background-image:url\('([^']+)'")
if scrapedthumbnail:
try:
thumb = scrapedthumbnail.split("-", 1)[0].replace("?", "\?")
if data_thumb:
url_thumb = scrapertools.find_single_match(data_thumb, "(%s[^']+)'" % thumb)
else:
url_thumb = scrapedthumbnail
scrapedthumbnail = filetools.join(folder, "%s.jpg" % url_thumb.split("e=", 1)[1][-20:])
except:
scrapedthumbnail = ""
if scrapedthumbnail:
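# fetch the thumbnail in a background thread so building the listing is not blocked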
t = threading.Thread(target=download_thumb, args=[scrapedthumbnail, url_thumb])
t.setDaemon(True)
t.start()
else:
scrapedthumbnail = item.extra + "/img/file_types/gallery/movie.png"
scrapedurl = item.extra + scrapedurl
title = "%s (%s)" % (scrapedtitle, size)
if "adult_info" in block:
title += " [COLOR %s][+18][/COLOR]" % color4
plot = scrapertools.find_single_match(block, '<div class="desc">(.*?)</div>')
if plot:
plot = scrapertools.decodeHtmlentities(plot)
new_item = Item(channel=item.channel, action="findvideos", title=title, url=scrapedurl,
thumbnail=scrapedthumbnail, contentTitle=scrapedtitle, text_color=color2,
extra=item.extra, infoLabels={'plot': plot}, post=item.post)
if item.post:
try:
new_item.folderurl, new_item.foldername = scrapertools.find_single_match(block,
'<p class="folder"><a href="([^"]+)".*?>([^<]+)<')
except:
pass
else:
new_item.folderurl = item.url.rsplit("/", 1)[0]
new_item.foldername = item.foldername
new_item.fanart = item.thumbnail
itemlist.append(new_item)
next_page = scrapertools.find_single_match(data, '<div class="pageSplitterBorder" data-nextpage-number="([^"]+)"')
if next_page:
if item.post:
post = re.sub(r'pageNumber=(\d+)', "pageNumber=" + next_page, item.post)
url = item.url
else:
url = re.sub(r',\d+\?ref=pager', ",%s?ref=pager" % next_page, item.url)
post = ""
itemlist.append(Item(channel=item.channel, action="listado", title=">> Página Siguiente (%s)" % next_page,
url=url, post=post, extra=item.extra))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="play", title="Reproducir/Descargar", server="copiapop"))
usuario = scrapertools.find_single_match(item.url, '%s/([^/]+)/' % item.extra)
url_usuario = item.extra + "/" + usuario
if item.folderurl and not item.folderurl.startswith(item.extra):
item.folderurl = item.extra + item.folderurl
if item.post:
itemlist.append(item.clone(action="listado", title="Ver colección: %s" % item.foldername,
url=item.folderurl + "/gallery,1,1?ref=pager", post=""))
data = httptools.downloadpage(item.folderurl).data
token = scrapertools.find_single_match(data,
'data-action="followChanged.*?name="__RequestVerificationToken".*?value="([^"]+)"')
collection_id = item.folderurl.rsplit("-", 1)[1]
post = "__RequestVerificationToken=%s&collectionId=%s" % (token, collection_id)
url = "%s/action/Follow/Follow" % item.extra
title = "Seguir Colección: %s" % item.foldername
if "dejar de seguir" in data:
title = "Dejar de seguir la colección: %s" % item.foldername
url = "%s/action/Follow/UnFollow" % item.extra
itemlist.append(item.clone(action="seguir", title=title, url=url, post=post, text_color=color5, folder=False))
itemlist.append(
item.clone(action="colecciones", title="Ver colecciones del usuario: %s" % usuario, url=url_usuario))
return itemlist
def colecciones(item):
logger.info()
from core import jsontools
itemlist = []
usuario = False
data = httptools.downloadpage(item.url).data
if "Ver colecciones del usuario" not in item.title and not item.index:
data = jsontools.load(data)["Data"]
content = data["Content"]
content = re.sub(r"\n|\r|\t|\s{2}|&nbsp;|<br>", "", content)
else:
usuario = True
if item.follow:
content = scrapertools.find_single_match(data,
'id="followed_collections"(.*?)<div id="recommended_collections"')
else:
content = scrapertools.find_single_match(data,
'<div id="collections".*?<div class="collections_list(.*?)<div class="collections_list')
content = re.sub(r"\n|\r|\t|\s{2}|&nbsp;|<br>", "", content)
patron = '<a class="name" href="([^"]+)".*?>([^<]+)<.*?src="([^"]+)".*?<p class="info">(.*?)</p>'
matches = scrapertools.find_multiple_matches(content, patron)
index = ""
if item.index and item.index != "0":
matches = matches[item.index:item.index + 20]
if len(matches) > item.index + 20:
index = item.index + 20
elif len(matches) > 20:
matches = matches[:20]
index = 20
folder = filetools.join(config.get_data_path(), 'thumbs_copiapop')
for url, scrapedtitle, thumb, info in matches:
url = item.extra + url + "/gallery,1,1?ref=pager"
title = "%s (%s)" % (scrapedtitle, scrapertools.htmlclean(info))
try:
scrapedthumbnail = filetools.join(folder, "%s.jpg" % thumb.split("e=", 1)[1][-20:])
except:
try:
scrapedthumbnail = filetools.join(folder, "%s.jpg" % thumb.split("/thumbnail/", 1)[1][-20:])
thumb = thumb.replace("/thumbnail/", "/")
except:
scrapedthumbnail = ""
if scrapedthumbnail:
t = threading.Thread(target=download_thumb, args=[scrapedthumbnail, thumb])
t.setDaemon(True)
t.start()
else:
scrapedthumbnail = thumb
itemlist.append(Item(channel=item.channel, action="listado", title=title, url=url,
thumbnail=scrapedthumbnail, text_color=color2, extra=item.extra,
foldername=scrapedtitle))
if not usuario and data.get("NextPageUrl"):
url = item.extra + data["NextPageUrl"]
itemlist.append(item.clone(title=">> Página Siguiente", url=url, text_color=""))
elif index:
itemlist.append(item.clone(title=">> Página Siguiente", url=item.url, index=index, text_color=""))
return itemlist
def seguir(item):
logger.info()
data = httptools.downloadpage(item.url, item.post)
message = "Colección seguida"
if "Dejar" in item.title:
message = "La colección ya no se sigue"
if data.sucess and config.get_platform() != "plex":
from platformcode import platformtools
platformtools.dialog_notification("Acción correcta", message)
def cuenta(item):
logger.info()
import urllib
itemlist = []
web = "copiapop"
if "diskokosmiko" in item.extra:
web = "diskokosmiko"
logueado, error_message = login("diskokosmiko.mx")
if not logueado:
itemlist.append(item.clone(title=error_message, action="configuracion", folder=False))
return itemlist
user = config.get_setting("%suser" % web, "copiapop")
user = unicode(user, "utf8").lower().encode("utf8")
url = item.extra + "/" + urllib.quote(user)
data = httptools.downloadpage(url).data
num_col = scrapertools.find_single_match(data, 'name="Has_collections" value="([^"]+)"')
if num_col != "0":
itemlist.append(item.clone(action="colecciones", url=url, index="0", title="Ver mis colecciones",
text_color=color5))
else:
itemlist.append(item.clone(action="", title="No tienes ninguna colección", text_color=color4))
num_follow = scrapertools.find_single_match(data, 'name="Follows_collections" value="([^"]+)"')
if num_follow != "0":
itemlist.append(item.clone(action="colecciones", url=url, index="0", title="Colecciones que sigo",
text_color=color5, follow=True))
else:
itemlist.append(item.clone(action="", title="No sigues ninguna colección", text_color=color4))
return itemlist
def filtro(item):
logger.info()
list_controls = []
valores = {}
dict_values = None
list_controls.append({'id': 'search', 'label': 'Texto a buscar', 'enabled': True, 'color': '0xFFC52020',
'type': 'text', 'default': '', 'visible': True})
list_controls.append({'id': 'tipo', 'label': 'Tipo de búsqueda', 'enabled': True, 'color': '0xFFFF8000',
'type': 'list', 'default': -1, 'visible': True})
list_controls[1]['lvalues'] = ['Aplicación', 'Archivo', 'Documento', 'Imagen', 'Música', 'Vídeo', 'Todos']
valores['tipo'] = ['Application', 'Archive', 'Document', 'Image', 'Music', 'Video', '']
list_controls.append({'id': 'ext', 'label': 'Extensión', 'enabled': True, 'color': '0xFFF4FA58',
'type': 'text', 'default': '', 'visible': True})
list_controls.append({'id': 'tmin', 'label': 'Tamaño mínimo (MB)', 'enabled': True, 'color': '0xFFCC2EFA',
'type': 'text', 'default': '0', 'visible': True})
list_controls.append({'id': 'tmax', 'label': 'Tamaño máximo (MB)', 'enabled': True, 'color': '0xFF2ECCFA',
'type': 'text', 'default': '0', 'visible': True})
# Use the default/previously saved values
web = "copiapop"
if "diskokosmiko" in item.extra:
web = "diskokosmiko"
valores_guardados = config.get_setting("filtro_defecto_" + web, item.channel)
if valores_guardados:
dict_values = valores_guardados
item.valores = valores
from platformcode import platformtools
return platformtools.show_channel_settings(list_controls=list_controls, dict_values=dict_values,
caption="Filtra la búsqueda", item=item, callback='filtrado')
def filtrado(item, values):
values_copy = values.copy()
web = "copiapop"
if "diskokosmiko" in item.extra:
web = "diskokosmiko"
# Save the filter so it is the one loaded by default next time
config.set_setting("filtro_defecto_" + web, values_copy, item.channel)
tipo = item.valores["tipo"][values["tipo"]]
search = values["search"]
ext = values["ext"]
tmin = values["tmin"]
tmax = values["tmax"]
if not tmin.isdigit():
tmin = "0"
if not tmax.isdigit():
tmax = "0"
item.valores = ""
item.post = "Mode=List&Type=%s&Phrase=%s&SizeFrom=%s&SizeTo=%s&Extension=%s&ref=pager&pageNumber=1" \
% (tipo, search, tmin, tmax, ext)
item.action = "listado"
return listado(item)
THUMBS_LOCK = threading.Lock()
def download_thumb(filename, url):
from core import downloadtools
folder = filetools.join(config.get_data_path(), 'thumbs_copiapop')
# module-level lock: a Lock created inside the call would be private to each
# invocation and could not serialise folder creation across downloader threads
with THUMBS_LOCK:
if not filetools.exists(folder):
filetools.mkdir(folder)
if not filetools.exists(filename):
downloadtools.downloadfile(url, filename, silent=True)
return filename
def delete_cache(item):
folder = filetools.join(config.get_data_path(), 'thumbs_copiapop')
filetools.rmdirtree(folder)
if config.is_xbmc():
import xbmc
xbmc.executebuiltin("Container.Refresh")

37
kodi/channels/crimenes.json Executable file

@@ -0,0 +1,37 @@
{
"id": "crimenes",
"name": "Crimenes Imperfectos",
"active": true,
"adult": false,
"language": "es",
"banner": "crimenes.png",
"thumbnail": "crimenes.png",
"version": 1,
"changes": [
{
"date": "19/06/2017",
"description": "correcion xml"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"movie"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": false
}
]
}

167
kodi/channels/crimenes.py Executable file

@@ -0,0 +1,167 @@
# -*- coding: utf-8 -*-
import re
import urlparse
import xbmc
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
# Listing functions; the main menu in mainlist() below is built manually
def listav(item):
itemlist = []
data = scrapertools.cache_page(item.url)
patronbloque = '<li><div class="yt-lockup.*?<img.*?src="([^"]+)".*?'
patronbloque += '<h3 class="yt-lockup-title "><a href="([^"]+)".*?title="([^"]+)".*?'
patronbloque += '</a><span class=.*?">(.*?)</span></h3>'
matchesbloque = re.compile(patronbloque, re.DOTALL).findall(data)
scrapertools.printMatches(matchesbloque)
scrapedduration = ''
for scrapedthumbnail, scrapedurl, scrapedtitle, scrapedduration in matchesbloque:
scrapedtitle = '[COLOR white]' + scrapedtitle + '[/COLOR] [COLOR red]' + scrapedduration + '[/COLOR]'
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = urlparse.urljoin(item.thumbnail, scrapedthumbnail)
xbmc.log("$ " + scrapedurl + " " + scrapedtitle + " " + scrapedthumbnail)
itemlist.append(Item(channel=item.channel, action="play", title=scrapedtitle, fulltitle=scrapedtitle, url=url,
thumbnail=thumbnail, fanart=thumbnail))
# Pagination
patronbloque = '<div class="branded-page-box .*? spf-link ">(.*?)</div>'
matches = re.compile(patronbloque, re.DOTALL).findall(data)
for bloque in matches:
patronvideo = '<a href="([^"]+)"'
matchesx = re.compile(patronvideo, re.DOTALL).findall(bloque)
for scrapedurl in matchesx:
url = urlparse.urljoin(item.url, 'https://www.youtube.com' + scrapedurl)
# keep only the last pagination link
itemlist.append(
Item(channel=item.channel, action="listav", title="Siguiente pag >>", fulltitle="Siguiente Pag >>", url=url,
thumbnail=item.thumbnail, fanart=item.fanart))
return itemlist
def busqueda(item):
itemlist = []
keyboard = xbmc.Keyboard("", "Busqueda")
keyboard.doModal()
if (keyboard.isConfirmed()):
myurl = keyboard.getText().replace(" ", "+")
data = scrapertools.cache_page('https://www.youtube.com/results?q=' + myurl)
data = data.replace("\n", "").replace("\t", "")
data = scrapertools.decodeHtmlentities(data)
patronbloque = '<li><div class="yt-lockup.*?<img.*?src="([^"]+)".*?'
patronbloque += '<h3 class="yt-lockup-title "><a href="([^"]+)".*?title="([^"]+)".*?'
patronbloque += '</a><span class=.*?">(.*?)</span></h3>'
matchesbloque = re.compile(patronbloque, re.DOTALL).findall(data)
scrapertools.printMatches(matchesbloque)
for scrapedthumbnail, scrapedurl, scrapedtitle, scrapedduracion in matchesbloque:
scrapedtitle = scrapedtitle + ' ' + scrapedduracion
url = scrapedurl
thumbnail = scrapedthumbnail
xbmc.log("$ " + scrapedurl + " " + scrapedtitle + " " + scrapedthumbnail)
itemlist.append(
Item(channel=item.channel, action="play", title=scrapedtitle, fulltitle=scrapedtitle, url=url,
thumbnail=thumbnail, fanart=thumbnail))
# Pagination
patronbloque = '<div class="branded-page-box .*? spf-link ">(.*?)</div>'
matches = re.compile(patronbloque, re.DOTALL).findall(data)
for bloque in matches:
patronvideo = '<a href="([^"]+)"'
matchesx = re.compile(patronvideo, re.DOTALL).findall(bloque)
for scrapedurl in matchesx:
url = 'https://www.youtube.com' + scrapedurl
# keep only the last pagination link
itemlist.append(
Item(channel=item.channel, action="listav", title="Siguiente pag >>", fulltitle="Siguiente Pag >>",
url=url))
return itemlist
else:
# nothing to search for: dismiss the dialog and return an empty listing
xbmc.executebuiltin("Action(enter)")
return []
def mainlist(item):
logger.info()
itemlist = []
item.url = 'https://www.youtube.com/results?q=crimenes+imperfectos&sp=CAI%253D'
scrapedtitle = "[COLOR white]Crimenes [COLOR red]Imperfectos[/COLOR]"
item.thumbnail = urlparse.urljoin(item.thumbnail,
"https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQ2PcyvcYIg6acvdUZrHGFFk_E3mXK9QSh-5TypP8Rk6zQ6S1yb2g")
item.fanart = urlparse.urljoin(item.fanart,
"https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQ2PcyvcYIg6acvdUZrHGFFk_E3mXK9QSh-5TypP8Rk6zQ6S1yb2g")
itemlist.append(
Item(channel=item.channel, action="listav", title=scrapedtitle, fulltitle=scrapedtitle, url=item.url,
thumbnail=item.thumbnail, fanart=item.fanart))
item.url = 'https://www.youtube.com/results?search_query=russian+dash+cam&sp=CAI%253D'
scrapedtitle = "[COLOR blue]Russian[/COLOR] [COLOR White]Dash[/COLOR] [COLOR red]Cam[/COLOR]"
item.thumbnail = urlparse.urljoin(item.thumbnail, "https://i.ytimg.com/vi/-C6Ftromtig/maxresdefault.jpg")
item.fanart = urlparse.urljoin(item.fanart,
"https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcRQLO-n-kO1ByY8lLhKxz0-cejJD1J7rLge_j0E0Gh9LJ2WtTbSnA")
itemlist.append(
Item(channel=item.channel, action="listav", title=scrapedtitle, fulltitle=scrapedtitle, url=item.url,
thumbnail=item.thumbnail, fanart=item.fanart))
item.url = 'https://www.youtube.com/results?search_query=cuarto+milenio+programa+completo&sp=CAI%253D'
scrapedtitle = "[COLOR green]Cuarto[/COLOR] [COLOR White]Milenio[/COLOR]"
item.thumbnail = urlparse.urljoin(item.thumbnail,
"http://cuatrostatic-a.akamaihd.net/cuarto-milenio/Cuarto-Milenio-analiza-fantasma-Granada_MDSVID20100924_0063_3.jpg")
item.fanart = urlparse.urljoin(item.fanart,
"http://cuatrostatic-a.akamaihd.net/cuarto-milenio/programas/temporada-07/t07xp32/fantasma-universidad_MDSVID20120420_0001_3.jpg")
itemlist.append(
Item(channel=item.channel, action="listav", title=scrapedtitle, fulltitle=scrapedtitle, url=item.url,
thumbnail=item.thumbnail, fanart=item.fanart))
item.url = 'https://www.youtube.com/results?q=milenio+3&sp=CAI%253D'
scrapedtitle = "[COLOR green]Milenio[/COLOR] [COLOR White]3- Podcasts[/COLOR]"
item.thumbnail = urlparse.urljoin(item.thumbnail,
"http://cuatrostatic-a.akamaihd.net/cuarto-milenio/Cuarto-Milenio-analiza-fantasma-Granada_MDSVID20100924_0063_3.jpg")
item.fanart = urlparse.urljoin(item.fanart,
"http://cuatrostatic-a.akamaihd.net/cuarto-milenio/programas/temporada-07/t07xp32/fantasma-universidad_MDSVID20120420_0001_3.jpg")
itemlist.append(
Item(channel=item.channel, action="listav", title=scrapedtitle, fulltitle=scrapedtitle, url=item.url,
thumbnail=item.thumbnail, fanart=item.fanart))
scrapedtitle = "[COLOR red]buscar ...[/COLOR]"
item.thumbnail = urlparse.urljoin(item.thumbnail,
"http://cuatrostatic-a.akamaihd.net/cuarto-milenio/Cuarto-Milenio-analiza-fantasma-Granada_MDSVID20100924_0063_3.jpg")
item.fanart = urlparse.urljoin(item.fanart,
"http://cuatrostatic-a.akamaihd.net/cuarto-milenio/programas/temporada-07/t07xp32/fantasma-universidad_MDSVID20120420_0001_3.jpg")
itemlist.append(Item(channel=item.channel, action="busqueda", title=scrapedtitle, fulltitle=scrapedtitle,
thumbnail=item.thumbnail, fanart=item.fanart))
return itemlist
def play(item):
logger.info("url=" + item.url)
itemlist = servertools.find_video_items(data=item.url)
return itemlist

102
kodi/channels/crunchyroll.json Executable file

@@ -0,0 +1,102 @@
{
"id": "crunchyroll",
"name": "Crunchyroll",
"language": "es",
"active": true,
"adult": false,
"version": 1,
"changes": [
{
"date": "16/05/2017",
"description": "Primera versión"
}
],
"thumbnail": "http://i.imgur.com/O49fDS1.png",
"categories": [
"anime",
"tvshow"
],
"settings": [
{
"id": "crunchyrolluser",
"type": "text",
"color": "0xFF25AA48",
"label": "@30014",
"enabled": true,
"visible": true
},
{
"id": "crunchyrollpassword",
"type": "text",
"color": "0xFF25AA48",
"hidden": true,
"label": "@30015",
"enabled": "!eq(-1,'')",
"visible": true
},
{
"id": "crunchyrollidioma",
"type": "list",
"label": "Idioma de los textos de la web",
"default": 6,
"enabled": true,
"visible": true,
"lvalues": [
"Alemán",
"Portugués",
"Francés",
"Italiano",
"Inglés",
"Español Latino",
"Español España"
]
},
{
"id": "crunchyrollsub",
"type": "list",
"label": "Idioma de subtítulos preferido en Crunchyroll",
"default": 6,
"enabled": true,
"visible": true,
"lvalues": [
"Alemán",
"Portugués",
"Francés",
"Italiano",
"Inglés",
"Español Latino",
"Español España"
]
},
{
"id": "proxy_usa",
"type": "bool",
"label": "Usar proxy para ver el catálogo de USA",
"default": false,
"visible": true,
"enabled": "!eq(+1,true)"
},
{
"id": "proxy_spain",
"type": "bool",
"label": "Usar proxy para ver el catálogo de España",
"default": false,
"visible": true,
"enabled": "!eq(-1,true)"
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 3,
"enabled": true,
"visible": true,
"lvalues": [
"Sin color",
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
}
]
}

360
kodi/channels/crunchyroll.py Executable file

@@ -0,0 +1,360 @@
# -*- coding: utf-8 -*-
import re
import urllib
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
__perfil__ = config.get_setting('perfil', "crunchyroll")
# Set the colour profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00', '0xFFFE2E2E', '0xFF088A08'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E', '0xFFFE2E2E', '0xFF088A08'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE', '0xFFFE2E2E', '0xFF088A08']]
if __perfil__ - 1 >= 0:
color1, color2, color3, color4, color5 = perfil[__perfil__ - 1]
else:
color1 = color2 = color3 = color4 = color5 = ""
host = "http://www.crunchyroll.com"
proxy_u = "http://anonymouse.org/cgi-bin/anon-www.cgi/"
proxy_e = "http://proxyanonimo.es/browse.php?u="
def login():
logger.info()
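# map the addon's language setting onto Crunchyroll's locale codes and apply it
# through the site's ajax RPC before logging in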
langs = ['deDE', 'ptPT', 'frFR', 'itIT', 'enUS', 'esLA', 'esES']
lang = langs[config.get_setting("crunchyrollidioma", "crunchyroll")]
httptools.downloadpage("http://www.crunchyroll.com/ajax/", "req=RpcApiTranslation_SetLang&locale=%s" % lang)
login_page = "https://www.crunchyroll.com/login"
user = config.get_setting("crunchyrolluser", "crunchyroll")
password = config.get_setting("crunchyrollpassword", "crunchyroll")
if not user or not password:
return False, "", ""
data = httptools.downloadpage(login_page).data
if not "<title>Redirecting" in data:
token = scrapertools.find_single_match(data, 'name="login_form\[_token\]" value="([^"]+)"')
redirect_url = scrapertools.find_single_match(data, 'name="login_form\[redirect_url\]" value="([^"]+)"')
post = "login_form%5Bname%5D=" + user + "&login_form%5Bpassword%5D=" + password + \
"&login_form%5Bredirect_url%5D=" + redirect_url + "&login_form%5B_token%5D=" + token
data = httptools.downloadpage(login_page, post).data
if not "<title>Redirecting" in data:
if "Usuario %s no disponible" % user in data:
return False, "El usuario de crunchyroll no existe.", ""
elif '<li class="error">Captcha' in data:
return False, "Es necesario resolver un captcha. Loguéate desde un navegador y vuelve a intentarlo", ""
else:
return False, "No se ha podido realizar el login.", ""
data = httptools.downloadpage(host).data
premium = scrapertools.find_single_match(data, ',"premium_status":"([^"]+)"')
premium = premium.replace("_", " ").replace("free trial", "Prueba Gratuita").capitalize()
locate = scrapertools.find_single_match(data, 'title="Your detected location is (.*?)."')
if locate:
premium += " - %s" % locate
return True, "", premium
def mainlist(item):
logger.info()
itemlist = []
item.text_color = color1
proxy_usa = config.get_setting("proxy_usa", "crunchyroll")
proxy_spain = config.get_setting("proxy_spain", "crunchyroll")
item.login = False
error_message = ""
global host
if not proxy_usa and not proxy_spain:
item.login, error_message, premium = login()
elif proxy_usa:
item.proxy = "usa"
host = proxy_u + host
elif proxy_spain:
httptools.downloadpage("http://proxyanonimo.es/")
item.proxy = "spain"
host = proxy_e + host
if not item.login and error_message:
itemlist.append(item.clone(title=error_message, action="configuracion", folder=False, text_color=color4))
elif item.login:
itemlist.append(item.clone(title="Tipo de cuenta: %s" % premium, action="", text_color=color4))
elif item.proxy:
itemlist.append(item.clone(title="Usando proxy: %s" % item.proxy.capitalize(), action="", text_color=color4))
itemlist.append(item.clone(title="Anime", action="", text_color=color2))
item.contentType = "tvshow"
itemlist.append(
item.clone(title=" Novedades", action="lista", url=host + "/videos/anime/updated/ajax_page?pg=0", page=0))
itemlist.append(
item.clone(title=" Popular", action="lista", url=host + "/videos/anime/popular/ajax_page?pg=0", page=0))
itemlist.append(item.clone(title=" Emisiones Simultáneas", action="lista",
url=host + "/videos/anime/simulcasts/ajax_page?pg=0", page=0))
itemlist.append(item.clone(title=" Índices", action="indices"))
itemlist.append(item.clone(title="Drama", action="", text_color=color2))
itemlist.append(
item.clone(title=" Popular", action="lista", url=host + "/videos/drama/popular/ajax_page?pg=0", page=0))
itemlist.append(item.clone(title=" Índice Alfabético", action="indices",
url="http://www.crunchyroll.com/videos/drama/alpha"))
if item.proxy != "usa":
itemlist.append(item.clone(action="calendario", title="Calendario de Estrenos Anime", text_color=color4,
url=host + "/simulcastcalendar"))
itemlist.append(item.clone(title="Configuración del canal", action="configuracion", text_color="gold"))
return itemlist
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
user = config.get_setting("crunchyrolluser", "crunchyroll")
password = config.get_setting("crunchyrollpassword", "crunchyroll")
sub = config.get_setting("crunchyrollsub", "crunchyroll")
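# mirror the credentials into the global settings (the crunchyroll server
# connector appears to read them from there rather than from the channel scope)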
config.set_setting("crunchyrolluser", user)
config.set_setting("crunchyrollpassword", password)
values = [6, 5, 4, 3, 2, 1, 0]
config.set_setting("crunchyrollsub", str(values[sub]))
platformtools.itemlist_refresh()
return ret
def lista(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
next = item.url.replace("?pg=%s" % item.page, "?pg=%s" % str(item.page + 1))
data_next = httptools.downloadpage(next).data
patron = '<li id="media_group_(\d+)".*?title="([^"]+)".*?href="([^"]+)".*?src="([^"]+)"' \
'.*?<span class="series-data.*?>\s*([^<]+)</span>'
matches = scrapertools.find_multiple_matches(data, patron)
for id, title, url, thumb, videos in matches:
if item.proxy == "spain":
url = "http://proxyanonimo.es" + url.replace("&amp;b=12", "")
elif not item.proxy:
url = host + url
thumb = urllib.unquote(thumb.replace("/browse.php?u=", "").replace("_thumb", "_full").replace("&amp;b=12", ""))
scrapedtitle = "%s (%s)" % (title, videos.strip())
plot = scrapertools.find_single_match(data, '%s"\).data.*?description":"([^"]+)"' % id)
plot = unicode(plot, 'unicode-escape', "ignore")
itemlist.append(item.clone(action="episodios", url=url, title=scrapedtitle, thumbnail=thumb,
contentTitle=title, contentSerieName=title, infoLabels={'plot': plot},
text_color=color2))
if '<li id="media_group' in data_next:
itemlist.append(item.clone(action="lista", url=next, title=">> Página Siguiente", page=item.page + 1,
text_color=""))
return itemlist
def episodios(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r'\n|\t|\s{2,}', '', data)
patron = '<li id="showview_videos.*?href="([^"]+)".*?(?:src|data-thumbnailUrl)="([^"]+)".*?media_id="([^"]+)" ' \
'style="width: (.*?)%.*?<span class="series-title.*?>\s*(.*?)</span>.*?<p class="short-desc".*?>' \
'\s*(.*?)</p>.*?description":"([^"]+)"'
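# shows with several seasons render one "season-dropdown" block per season; parse each block separately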
if data.count('class="season-dropdown') > 1:
bloques = scrapertools.find_multiple_matches(data, 'class="season-dropdown[^"]+" title="([^"]+)"(.*?)</ul>')
for season, b in bloques:
matches = scrapertools.find_multiple_matches(b, patron)
if matches:
itemlist.append(item.clone(action="", title=season, text_color=color3))
for url, thumb, media_id, visto, title, subt, plot in matches:
if item.proxy == "spain":
url = urllib.unquote(url.replace("/browse.php?u=", "").replace("&amp;b=12", ""))
elif not item.proxy:
url = host + url
url = url.replace(proxy_u, "")
thumb = urllib.unquote(
thumb.replace("/browse.php?u=", "").replace("_wide.", "_full.").replace("&amp;b=12", ""))
title = " %s - %s" % (title, subt)
if visto != "0":
title += " [COLOR %s][V][/COLOR]" % color5
itemlist.append(
Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumb, media_id=media_id,
server="crunchyroll", text_color=item.text_color, contentTitle=item.contentTitle,
contentSerieName=item.contentSerieName, contentType="tvshow"))
else:
matches = scrapertools.find_multiple_matches(data, patron)
for url, thumb, media_id, visto, title, subt, plot in matches:
if item.proxy == "spain":
url = urllib.unquote(url.replace("/browse.php?u=", "").replace("&amp;b=12", ""))
elif not item.proxy:
url = host + url
url = url.replace(proxy_u, "")
thumb = urllib.unquote(
thumb.replace("/browse.php?u=", "").replace("_wide.", "_full.").replace("&amp;b=12", ""))
title = "%s - %s" % (title, subt)
if visto != "0":
title += " [COLOR %s][V][/COLOR]" % color5
itemlist.append(
Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumb, media_id=media_id,
server="crunchyroll", text_color=item.text_color, contentTitle=item.contentTitle,
contentSerieName=item.contentSerieName, contentType="tvshow"))
return itemlist
def indices(item):
logger.info()
itemlist = []
if not item.url:
itemlist.append(item.clone(title="Alfabético", url="http://www.crunchyroll.com/videos/anime/alpha"))
itemlist.append(item.clone(title="Géneros", url="http://www.crunchyroll.com/videos/anime"))
itemlist.append(item.clone(title="Temporadas", url="http://www.crunchyroll.com/videos/anime"))
else:
data = httptools.downloadpage(item.url).data
if "Alfabético" in item.title:
bloque = scrapertools.find_single_match(data, '<div class="content-menu cf ">(.*?)</div>')
matches = scrapertools.find_multiple_matches(bloque, '<a href="([^"]+)".*?>([^<]+)<')
for url, title in matches:
if "todo" in title:
continue
if item.proxy == "spain":
url = proxy_e + host + url
elif item.proxy == "usa":
url = proxy_u + host + url
else:
url = host + url
itemlist.append(item.clone(action="alpha", title=title, url=url, page=0))
elif "Temporadas" in item.title:
bloque = scrapertools.find_single_match(data,
'<div class="season-selectors cf selectors">(.*?)<div id="container"')
matches = scrapertools.find_multiple_matches(bloque, 'href="#([^"]+)".*?title="([^"]+)"')
for url, title in matches:
url += "/ajax_page?pg=0"
if item.proxy == "spain":
url = proxy_e + host + url
elif item.proxy == "usa":
url = proxy_u + host + url
else:
url = host + url
itemlist.append(item.clone(action="lista", title=title, url=url, page=0))
else:
bloque = scrapertools.find_single_match(data, '<div class="genre-selectors selectors">(.*?)</div>')
matches = scrapertools.find_multiple_matches(bloque, '<input id="([^"]+)".*?title="([^"]+)"')
for url, title in matches:
url = "%s/genres/ajax_page?pg=0&tagged=%s" % (item.url, url)
if item.proxy == "spain":
url = proxy_e + url.replace("&", "%26")
elif item.proxy == "usa":
url = proxy_u + url
itemlist.append(item.clone(action="lista", title=title, url=url, page=0))
return itemlist
def alpha(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="wrapper hover-toggle-queue.*?title="([^"]+)".*?href="([^"]+)".*?src="([^"]+)"' \
'.*?<span class="series-data.*?>\s*([^<]+)</span>.*?<p.*?>(.*?)</p>'
matches = scrapertools.find_multiple_matches(data, patron)
for title, url, thumb, videos, plot in matches:
if item.proxy == "spain":
url = "http://proxyanonimo.es" + url.replace("&amp;b=12", "")
elif not item.proxy:
url = host + url
thumb = urllib.unquote(thumb.replace("/browse.php?u=", "").replace("_small", "_full").replace("&amp;b=12", ""))
scrapedtitle = "%s (%s)" % (title, videos.strip())
itemlist.append(item.clone(action="episodios", url=url, title=scrapedtitle, thumbnail=thumb,
contentTitle=title, contentSerieName=title, infoLabels={'plot': plot},
text_color=color2))
return itemlist
def calendario(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="specific-date">.*?datetime="\d+-(\d+)-(\d+).*?class="day-name">.*?>\s*([^<]+)</time>(.*?)</section>'
bloques = scrapertools.find_multiple_matches(data, patron)
for mes, dia, title, b in bloques:
patron = 'class="available-time">([^<]+)<.*?<cite itemprop="name">(.*?)</cite>.*?href="([^"]+)"' \
'.*?>\s*(.*?)\s*</a>(.*?)</article>'
matches = scrapertools.find_multiple_matches(b, patron)
if matches:
title = "%s/%s - %s" % (dia, mes, title.strip())
itemlist.append(item.clone(action="", title=title))
for hora, title, url, subt, datos in matches:
subt = subt.replace("Available", "Disponible").replace("Episode", "Episodio").replace("in ", "en ")
subt = re.sub(r"\s{2,}", " ", subt)
if "<time" in subt:
subt = re.sub(r"<time.*?>", "", subt).replace("</time>", "")
scrapedtitle = " [%s] %s - %s" % (hora, scrapertools.htmlclean(title), subt)
scrapedtitle = re.sub(r"\[email&#160;protected\]|\[email\xc2\xa0protected\]", "Idolm@ster", scrapedtitle)
if "Disponible" in scrapedtitle:
if item.proxy == "spain":
url = urllib.unquote(url.replace("/browse.php?u=", "").replace("&amp;b=12", ""))
action = "play"
server = "crunchyroll"
else:
action = ""
server = ""
thumb = scrapertools.find_single_match(datos, '<img class="thumbnail" src="([^"]+)"')
if not thumb:
thumb = scrapertools.find_single_match(datos, 'src="([^"]+)"')
if thumb:
thumb = urllib.unquote(thumb.replace("/browse.php?u=", "").replace("_thumb", "_full") \
.replace("&amp;b=12", "").replace("_large", "_full"))
itemlist.append(item.clone(action=action, url=url, title=scrapedtitle, contentTitle=title, thumbnail=thumb,
text_color=color2, contentSerieName=title, server=server))
next = scrapertools.find_single_match(data, 'js-pagination-next"\s*href="([^"]+)"')
if next:
if item.proxy == "spain":
next = "http://proxyanonimo.es" + url.replace("&amp;b=12", "")
else:
next = host + next
itemlist.append(item.clone(action="calendario", url=next, title=">> Siguiente Semana"))
prev = scrapertools.find_single_match(data, 'js-pagination-last"\s*href="([^"]+)"')
if prev:
if item.proxy == "spain":
prev = "http://proxyanonimo.es" + url.replace("&amp;b=12", "")
else:
prev = host + prev
itemlist.append(item.clone(action="calendario", url=prev, title="<< Semana Anterior"))
return itemlist
def play(item):
logger.info()
if item.login and not "[V]" in item.title:
post = "cbelapsed=60&h=&media_id=%s" % item.media_id + "&req=RpcApiVideo_VideoView&cbcallcount=1&ht=0" \
"&media_type=1&video_encode_id=0&playhead=10000"
httptools.downloadpage("http://www.crunchyroll.com/ajax/", post)
return [item]

41
kodi/channels/cuelgame.json Executable file

@@ -0,0 +1,41 @@
{
"id": "cuelgame",
"name": "Cuelgame",
"active": true,
"adult": false,
"language": "es",
"version": 1,
"changes": [
{
"date": "10/12/2016",
"description": "Reparado fanart y thumbs y correción código.Adaptado a Infoplus"
},
{
"date": "04/04/2017",
"description": "Migración a Httptools"
},
{
"date": "28/06/2017",
"description": "Correciones código.Algunas mejoras"
}
],
"thumbnail": "cuelgame.png",
"banner": "cuelgame.png",
"categories": [
"torrent",
"movie",
"tvshow",
"documentary",
"vos"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

1225
kodi/channels/cuelgame.py Executable file

File diff suppressed because it is too large

23
kodi/channels/cumlouder.json Executable file

@@ -0,0 +1,23 @@
{
"id": "cumlouder",
"name": "Cumlouder",
"active": true,
"adult": true,
"language": "es",
"thumbnail": "cumlouder.png",
"banner": "cumlouder.png",
"version": 1,
"changes": [
{
"date": "04/05/17",
"description": "Corregido, usa proxy en caso de error con https"
},
{
"date": "13/01/17",
"description": "First version"
}
],
"categories": [
"adult"
]
}

191
kodi/channels/cumlouder.py Executable file

@@ -0,0 +1,191 @@
# -*- coding: utf-8 -*-
import re
import urllib
import urlparse
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
def mainlist(item):
logger.info()
itemlist = []
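# reset the https-error flag so get_data() retries a direct connection before
# falling back to a proxy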
config.set_setting("url_error", False, "cumlouder")
itemlist.append(item.clone(title="Últimos videos", action="videos", url="https://www.cumlouder.com/"))
itemlist.append(item.clone(title="Categorias", action="categorias", url="https://www.cumlouder.com/categories/"))
itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url="https://www.cumlouder.com/girls/"))
itemlist.append(item.clone(title="Buscar", action="search", url="https://www.cumlouder.com/search?q=%s"))
return itemlist
def search(item, texto):
logger.info()
item.url = item.url % texto
item.action = "videos"
try:
return videos(item)
except:
import traceback
logger.error(traceback.format_exc())
return []
def pornstars_list(item):
logger.info()
itemlist = []
for letra in "abcdefghijklmnopqrstuvwxyz":
itemlist.append(item.clone(title=letra.upper(), url=urlparse.urljoin(item.url, letra), action="pornstars"))
return itemlist
def pornstars(item):
logger.info()
itemlist = []
data = get_data(item.url)
patron = '<a girl-url="[^"]+" class="[^"]+" href="([^"]+)" title="([^"]+)">[^<]+'
patron += '<img class="thumb" src="([^"]+)" [^<]+<h2[^<]+<span[^<]+</span[^<]+</h2[^<]+'
patron += '<span[^<]+<span[^<]+<span[^<]+</span>([^<]+)</span>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, count in matches:
if "go.php?" in url:
url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
thumbnail = urllib.unquote(thumbnail.split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, url)
if not thumbnail.startswith("https"):
thumbnail = "https:%s" % thumbnail
itemlist.append(item.clone(title="%s (%s)" % (title, count), url=url, action="videos", thumbnail=thumbnail))
# Pagination
matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
if matches:
if "go.php?" in matches[0]:
url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, matches[0])
itemlist.append(item.clone(title="Pagina Siguiente", url=url))
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = get_data(item.url)
# logger.info("channels.cumlouder data="+data)
patron = '<a tag-url="[^"]+" class="[^"]+" href="([^"]+)" title="([^"]+)">[^<]+'
patron += '<img class="thumb" src="([^"]+)".*?<span class="cantidad">([^"]+)</span>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, count in matches:
if "go.php?" in url:
url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
thumbnail = urllib.unquote(thumbnail.split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, url)
if not thumbnail.startswith("https"):
thumbnail = "https:%s" % thumbnail
itemlist.append(
item.clone(title="%s (%s videos)" % (title, count), url=url, action="videos", thumbnail=thumbnail))
# Pagination
matches = re.compile('<li[^<]+<a href="([^"]+)" rel="nofollow">Next[^<]+</a[^<]+</li>', re.DOTALL).findall(data)
if matches:
if "go.php?" in matches[0]:
url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, matches[0])
itemlist.append(item.clone(title="Pagina Siguiente", url=url))
return itemlist
def videos(item):
logger.info()
itemlist = []
data = get_data(item.url)
patron = '<a class="muestra-escena" href="([^"]+)" title="([^"]+)"[^<]+<img class="thumb" src="([^"]+)".*?<span class="minutos"> <span class="ico-minutos sprite"></span> ([^<]+)</span>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, duration in matches:
if "go.php?" in url:
url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
thumbnail = urllib.unquote(thumbnail.split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin("https://www.cumlouder.com", url)
if not thumbnail.startswith("https"):
thumbnail = "https:%s" % thumbnail
itemlist.append(item.clone(title="%s (%s)" % (title, duration), url=urlparse.urljoin(item.url, url),
action="play", thumbnail=thumbnail, contentThumbnail=thumbnail,
contentType="movie", contentTitle=title))
# Pagination
nextpage = scrapertools.find_single_match(data, '<ul class="paginador"(.*?)</ul>')
matches = re.compile('<a href="([^"]+)" rel="nofollow">Next »</a>', re.DOTALL).findall(nextpage)
if not matches:
matches = re.compile('<li[^<]+<a href="([^"]+)">Next »</a[^<]+</li>', re.DOTALL).findall(nextpage)
if matches:
if "go.php?" in matches[0]:
url = urllib.unquote(matches[0].split("/go.php?u=")[1].split("&")[0])
else:
url = urlparse.urljoin(item.url, matches[0])
itemlist.append(item.clone(title="Pagina Siguiente", url=url))
return itemlist
def play(item):
logger.info()
itemlist = []
data = get_data(item.url)
patron = '<source src="([^"]+)" type=\'video/([^\']+)\' label=\'[^\']+\' res=\'([^\']+)\' />'
url, vtype, res = re.compile(patron, re.DOTALL).findall(data)[0]
if "go.php?" in url:
url = urllib.unquote(url.split("/go.php?u=")[1].split("&")[0])
elif not url.startswith("http"):
url = "http:" + url.replace("&amp;", "&")
itemlist.append(
Item(channel='cumlouder', action="play", title='Video ' + res, fulltitle=vtype.upper() + ' ' + res, url=url,
server="directo", folder=False))
return itemlist
def get_data(url_orig):
try:
if config.get_setting("url_error", "cumlouder"):
raise Exception
response = httptools.downloadpage(url_orig)
if not response.data or "urlopen error [Errno 1]" in str(response.code):
raise Exception
except:
config.set_setting("url_error", True, "cumlouder")
import random
server_random = ['nl', 'de', 'us']
server = server_random[random.randint(0, 2)]
url = "https://%s.hideproxy.me/includes/process.php?action=update" % server
post = "u=%s&proxy_formdata_server=%s&allowCookies=1&encodeURL=0&encodePage=0&stripObjects=0&stripJS=0&go=" \
% (urllib.quote(url_orig), server)
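# follow the proxy's redirect chain manually until the final page is returned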
while True:
response = httptools.downloadpage(url, post, follow_redirects=False)
if response.headers.get("location"):
url = response.headers["location"]
post = ""
else:
break
return response.data

23
kodi/channels/datoporn.json Executable file

@@ -0,0 +1,23 @@
{
"id": "datoporn",
"name": "DatoPorn",
"language": "es",
"active": true,
"adult": true,
"changes": [
{
"date": "28/05/2017",
"description": "Reparado por cambios en la página"
},
{
"date": "21/02/2017",
"description": "Primera versión"
}
],
"version": 1,
"thumbnail": "http://i.imgur.com/tBSWudd.png?1",
"banner": "datoporn.png",
"categories": [
"adult"
]
}

66
kodi/channels/datoporn.py Executable file
View File

@@ -0,0 +1,66 @@
# -*- coding: utf-8 -*-
from core import httptools
from core import logger
from core import scrapertools
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="categorias", title="Categorías", url="http://dato.porn/categories_all"))
itemlist.append(item.clone(title="Buscar...", action="search"))
return itemlist
def search(item, texto):
logger.info()
item.url = "http://dato.porn/?k=%s&op=search" % texto.replace(" ", "+")
return lista(item)
def lista(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url).data
# Extract the entries
patron = '<div class="vid_block">\s*<a href="([^"]+)".*?url\(\'([^\']+)\'.*?<span>(.*?)</span>.*?<b>(.*?)</b>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, duration, scrapedtitle in matches:
if "/embed-" not in scrapedurl:
scrapedurl = scrapedurl.replace("datoporn.com/", "datoporn.com/embed-") + ".html"
if duration:
scrapedtitle = "%s - %s" % (duration, scrapedtitle)
itemlist.append(item.clone(action="play", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
server="datoporn", fanart=scrapedthumbnail.replace("_t.jpg", ".jpg")))
# Extract the next-page marker
next_page = scrapertools.find_single_match(data, "<a href='([^']+)'>Next")
if next_page and itemlist:
itemlist.append(item.clone(action="lista", title=">> Página Siguiente", url=next_page))
return itemlist
def categorias(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url).data
# Extract the entries (folders)
patron = '<div class="vid_block">\s*<a href="([^"]+)".*?url\((.*?)\).*?<span>(.*?)</span>.*?<b>(.*?)</b>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, numero, scrapedtitle in matches:
if numero:
scrapedtitle = "%s (%s)" % (scrapedtitle, numero)
itemlist.append(item.clone(action="lista", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail))
return itemlist

23
kodi/channels/descargacineclasico.json Executable file

@@ -0,0 +1,23 @@
{
"id": "descargacineclasico",
"name": "descargacineclasico",
"language": "es",
"active": true,
"adult": false,
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"banner": "descargacineclasico2.png",
"thumbnail": "descargacineclasico2.png",
"categories": [
"movie"
]
}

178
kodi/channels/descargacineclasico.py Executable file

@@ -0,0 +1,178 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from channelselector import get_thumbnail_path
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
from core.tmdb import Tmdb
from servers.decrypters import expurl
def agrupa_datos(data):
## Tidy up the data (collapse whitespace and strip HTML comments)
data = re.sub(r'\n|\r|\t|&nbsp;|<br>|<!--.*?-->', '', data)
data = re.sub(r'\s+', ' ', data)
data = re.sub(r'>\s<', '><', data)
return data
def mainlist(item):
logger.info()
thumb_buscar = get_thumbnail_path() + "thumb_search.png"
itemlist = []
itemlist.append(Item(channel=item.channel, title="Últimas agregadas", action="agregadas",
url="http://www.descargacineclasico.net/", viewmode="movie_with_plot"))
itemlist.append(Item(channel=item.channel, title="Listado por género", action="porGenero",
url="http://www.descargacineclasico.net/"))
itemlist.append(
Item(channel=item.channel, title="Buscar", action="search", url="http://www.descargacineclasico.net/",
thumbnail=thumb_buscar))
return itemlist
def porGenero(item):
logger.info()
itemlist = []
data = scrapertools.cache_page(item.url)
logger.info("data=" + data)
patron = '<ul class="columnas">(.*?)</ul>'
data = re.compile(patron, re.DOTALL).findall(data)
patron = '<li.*?>.*?href="([^"]+).*?>([^<]+)'
matches = re.compile(patron, re.DOTALL).findall(data[0])
for url, genero in matches:
itemlist.append(
Item(channel=item.channel, action="agregadas", title=genero, url=url, viewmode="movie_with_plot"))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = "http://www.descargacineclasico.net/?s=" + texto
try:
return agregadas(item)
# Catch the exception so a failing channel does not interrupt the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def agregadas(item):
logger.info()
itemlist = []
data = scrapertools.cache_page(item.url)
logger.info("data=" + data)
# Extract the entries
fichas = re.sub(r"\n|\s{2}", "", scrapertools.get_match(data, '<div class="review-box-container">(.*?)wp-pagenavi'))
# <a href="http://www.descargacineclasico.net/ciencia-ficcion/quatermass-2-1957/"
# title="Quatermass II (Quatermass 2) (1957) Descargar y ver Online">
# <img style="border-radius:6px;"
# src="//www.descargacineclasico.net/wp-content/uploads/2015/12/Quatermass-II-2-1957.jpg"
# alt="Quatermass II (Quatermass 2) (1957) Descargar y ver Online Gratis" height="240" width="160">
patron = '<div class="post-thumbnail"><a href="([^"]+)".*?' # url
patron += 'title="([^"]+)".*?' # title
patron += 'src="([^"]+).*?' # thumbnail
patron += '<p>([^<]+)' # plot
matches = re.compile(patron, re.DOTALL).findall(fichas)
for url, title, thumbnail, plot in matches:
title = title[0:title.find("Descargar y ver Online")]
url = urlparse.urljoin(item.url, url)
thumbnail = urlparse.urljoin(url, thumbnail)
itemlist.append(Item(channel=item.channel, action="findvideos", title=title + " ", fulltitle=title, url=url,
thumbnail=thumbnail, plot=plot, show=title))
# Pagination
try:
# <ul class="pagination"><li class="active"><span>1</span></li><li><span><a href="2">2</a></span></li><li><span><a href="3">3</a></span></li><li><span><a href="4">4</a></span></li><li><span><a href="5">5</a></span></li><li><span><a href="6">6</a></span></li></ul>
patron_nextpage = r'<a class="nextpostslink" rel="next" href="([^"]+)'
next_page = re.compile(patron_nextpage, re.DOTALL).findall(data)
itemlist.append(Item(channel=item.channel, action="agregadas", title="Página siguiente >>", url=next_page[0],
viewmode="movie_with_plot"))
except:
pass
return itemlist
def findvideos(item):
logger.info()
itemlist = []
data = scrapertools.cache_page(item.url)
data = scrapertools.unescape(data)
titulo = item.title
titulo_tmdb = re.sub("([0-9+])", "", titulo.strip())
oTmdb = Tmdb(texto_buscado=titulo_tmdb, idioma_busqueda="es")
item.fanart = oTmdb.get_backdrop()
patron = '#div_\d_\D.+?<img id="([^"]+).*?<span>.*?</span>.*?<span>(.*?)</span>.*?imgdes.*?imgdes/([^\.]+).*?<a href=([^\s]+)'  # also captures the quality tag
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedidioma, scrapedcalidad, scrapedserver, scrapedurl in matches:
title = titulo + "_" + scrapedidioma + "_" + scrapedserver + "_" + scrapedcalidad
itemlist.append(Item(channel=item.channel, action="play", title=title, fulltitle=title, url=scrapedurl,
thumbnail=item.thumbnail, plot=item.plot, show=item.show, fanart=item.fanart))
return itemlist
def play(item):
logger.info()
video = expurl.expand_url(item.url)
itemlist = []
itemlist = servertools.find_video_items(data=video)
for videoitem in itemlist:
videoitem.title = item.title
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
return itemlist

64
kodi/channels/descargasmix.json Executable file

@@ -0,0 +1,64 @@
{
"id": "descargasmix",
"name": "DescargasMIX",
"language": "es",
"active": true,
"version": 1,
"adult": false,
"changes": [
{
"date": "06/05/17",
"description": "Cambio de dominio"
},
{
"date": "17/04/17",
"description": "Mejorado en la deteccion del dominio para futuros cambios"
},
{
"date": "09/04/17",
"description": "Arreglado por cambios en la página"
},
{
"date": "27/01/17",
"description": "Sección online en películas modificada"
},
{
"date": "08/07/16",
"description": "Adaptado el canal a las nuevas funciones"
}
],
"thumbnail": "descargasmix.png",
"banner": "descargasmix.png",
"categories": [
"movie",
"latino",
"vos",
"torrent",
"documentary",
"anime",
"tvshow"
],
"settings": [
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "perfil",
"type": "list",
"label": "Perfil de color",
"default": 2,
"enabled": true,
"visible": true,
"lvalues": [
"Perfil 3",
"Perfil 2",
"Perfil 1"
]
}
]
}

537
kodi/channels/descargasmix.py Executable file

@@ -0,0 +1,537 @@
# -*- coding: utf-8 -*-
import re
import urllib
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
__modo_grafico__ = config.get_setting("modo_grafico", "descargasmix")
__perfil__ = config.get_setting("perfil", "descargasmix")
# Set the colour profile
perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'],
['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E'],
['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']]
color1, color2, color3 = perfil[__perfil__]
host = config.get_setting("host", "descargasmix")
def mainlist(item):
logger.info()
itemlist = []
item.text_color = color1
# Reset the host and the https error flag (in case Kodi was updated)
config.set_setting("url_error", False, "descargasmix")
host = config.set_setting("host", "https://ddmix.net", "descargasmix")
host_check = get_data(host, True)
if host_check and host_check.startswith("http"):
config.set_setting("host", host_check, "descargasmix")
itemlist.append(item.clone(title="Películas", action="lista", fanart="http://i.imgur.com/c3HS8kj.png"))
itemlist.append(item.clone(title="Series", action="lista_series", fanart="http://i.imgur.com/9loVksV.png"))
itemlist.append(item.clone(title="Documentales", action="entradas", url="%s/documentales/" % host,
fanart="http://i.imgur.com/Q7fsFI6.png"))
itemlist.append(item.clone(title="Anime", action="entradas", url="%s/anime/" % host,
fanart="http://i.imgur.com/whhzo8f.png"))
itemlist.append(item.clone(title="Deportes", action="entradas", url="%s/deportes/" % host,
fanart="http://i.imgur.com/ggFFR8o.png"))
itemlist.append(item.clone(title="", action=""))
itemlist.append(item.clone(title="Buscar...", action="search"))
itemlist.append(item.clone(action="configuracion", title="Configurar canal...", text_color="gold", folder=False))
return itemlist
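mainlist() re-resolves the working domain on every run: it clears the url_error flag, stores the seed host and lets get_data(host, True) return whatever URL the site finally answers on. The same idea in isolation, sketched with requests (an assumption; the channel itself goes through httptools):

    import requests

    def resolve_host(seed="https://ddmix.net"):
        # Follow redirects and treat the final URL as the current domain
        r = requests.get(seed, allow_redirects=True, timeout=10)
        return r.url.rstrip("/")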
def configuracion(item):
from platformcode import platformtools
ret = platformtools.show_channel_settings()
platformtools.itemlist_refresh()
return ret
def search(item, texto):
logger.info()
try:
item.url = "%s/?s=%s" % (host, texto)
return busqueda(item)
# Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def busqueda(item):
logger.info()
itemlist = []
data = get_data(item.url)
contenido = ['Películas', 'Series', 'Documentales', 'Anime', 'Deportes', 'Miniseries', 'Vídeos']
bloque = scrapertools.find_single_match(data, '<div id="content" role="main">(.*?)<div id="sidebar" '
'role="complementary">')
patron = '<a class="clip-link".*?href="([^"]+)".*?<img alt="([^"]+)" src="([^"]+)"' \
'.*?<span class="overlay.*?>(.*?)<.*?<p class="stats">(.*?)</p>'
matches = scrapertools.find_multiple_matches(bloque, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, info, scrapedcat in matches:
if not any(c in scrapedcat for c in contenido):
continue
scrapedurl = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', scrapedurl))
scrapedthumbnail = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', scrapedthumbnail))
if not scrapedthumbnail.startswith("http"):
scrapedthumbnail = "http:" + scrapedthumbnail
scrapedthumbnail = scrapedthumbnail.replace("-129x180", "")
if ("Películas" in scrapedcat or "Documentales" in scrapedcat) and "Series" not in scrapedcat:
titulo = scrapedtitle.split("[")[0]
if info:
scrapedtitle += " [%s]" % unicode(info, "utf-8").capitalize().encode("utf-8")
itemlist.append(item.clone(action="findvideos", title=scrapedtitle, url=scrapedurl, contentTitle=titulo,
thumbnail=scrapedthumbnail, fulltitle=titulo, contentType="movie"))
else:
itemlist.append(item.clone(action="episodios", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, fulltitle=scrapedtitle, contentTitle=scrapedtitle,
show=scrapedtitle, contentType="tvshow"))
next_page = scrapertools.find_single_match(data, '<a class="nextpostslink".*?href="([^"]+)"')
if next_page:
next_page = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', next_page))
itemlist.append(item.clone(action="busqueda", title=">> Siguiente", url=next_page))
return itemlist
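The cleanup applied to every scraped URL and thumbnail above (strip the /go.php?u= redirector and the &amp;b=4 suffix, then percent-decode) recurs throughout the channel. As a small illustrative helper (hypothetical name, not part of the channel):

    import re
    import urllib  # Python 2, like the channel code

    def unwrap(link):
        # Drop the redirector and the tracking suffix, then percent-decode
        return urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', link))

    # unwrap('/go.php?u=http%3A%2F%2Fexample.com%2Ffoo&amp;b=4') -> 'http://example.com/foo'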
def lista(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Novedades", action="entradas", url="%s/peliculas" % host))
itemlist.append(item.clone(title="Estrenos", action="entradas", url="%s/peliculas/estrenos" % host))
itemlist.append(item.clone(title="Dvdrip", action="entradas", url="%s/peliculas/dvdrip" % host))
itemlist.append(item.clone(title="HD (720p/1080p)", action="entradas", url="%s/peliculas/hd" % host))
itemlist.append(item.clone(title="HDRIP", action="entradas", url="%s/peliculas/hdrip" % host))
itemlist.append(item.clone(title="Latino", action="entradas",
url="%s/peliculas/latino-peliculas" % host))
itemlist.append(item.clone(title="VOSE", action="entradas", url="%s/peliculas/subtituladas" % host))
itemlist.append(item.clone(title="3D", action="entradas", url="%s/peliculas/3d" % host))
return itemlist
def lista_series(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Novedades", action="entradas", url="%s/series/" % host))
itemlist.append(item.clone(title="Miniseries", action="entradas", url="%s/series/miniseries" % host))
return itemlist
def entradas(item):
logger.info()
itemlist = []
item.text_color = color2
data = get_data(item.url)
bloque = scrapertools.find_single_match(data, '<div id="content" role="main">(.*?)<div id="sidebar" '
'role="complementary">')
contenido = ["series", "deportes", "anime", 'miniseries']
# Pattern depends on the content type
if any(c in item.url for c in contenido):
patron = '<a class="clip-link".*?href="([^"]+)".*?<img alt="([^"]+)" src="([^"]+)"' \
'.*?<span class="overlay(|[^"]+)">'
matches = scrapertools.find_multiple_matches(bloque, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, scrapedinfo in matches:
scrapedurl = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', scrapedurl))
if scrapedinfo != "":
scrapedinfo = scrapedinfo.replace(" ", "").replace("-", " ")
scrapedinfo = " [%s]" % unicode(scrapedinfo, "utf-8").capitalize().encode("utf-8")
titulo = scrapedtitle + scrapedinfo
titulo = scrapertools.decodeHtmlentities(titulo)
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle)
scrapedthumbnail = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', scrapedthumbnail))
if not scrapedthumbnail.startswith("http"):
scrapedthumbnail = "http:" + scrapedthumbnail
scrapedthumbnail = scrapedthumbnail.replace("-129x180", "")
scrapedthumbnail = scrapedthumbnail.rsplit("/", 1)[0] + "/" + \
urllib.quote(scrapedthumbnail.rsplit("/", 1)[1])
if "series" in item.url or "anime" in item.url:
item.show = scrapedtitle
itemlist.append(item.clone(action="episodios", title=titulo, url=scrapedurl, thumbnail=scrapedthumbnail,
fulltitle=scrapedtitle, contentTitle=scrapedtitle, contentType="tvshow"))
else:
patron = '<a class="clip-link".*?href="([^"]+)".*?<img alt="([^"]+)" src="([^"]+)"' \
'.*?<span class="overlay.*?>(.*?)<.*?<p class="stats">(.*?)</p>'
matches = scrapertools.find_multiple_matches(bloque, patron)
for scrapedurl, scrapedtitle, scrapedthumbnail, info, categoria in matches:
scrapedurl = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', scrapedurl))
titulo = scrapertools.decodeHtmlentities(scrapedtitle)
scrapedtitle = scrapertools.decodeHtmlentities(scrapedtitle.split("[")[0])
action = "findvideos"
show = ""
if "Series" in categoria:
action = "episodios"
show = scrapedtitle
elif categoria and categoria != "Películas" and categoria != "Documentales":
try:
titulo += " [%s]" % categoria.rsplit(", ", 1)[1]
except:
titulo += " [%s]" % categoria
if 'l-espmini' in info:
titulo += " [ESP]"
if 'l-latmini' in info:
titulo += " [LAT]"
if 'l-vosemini' in info:
titulo += " [VOSE]"
if info:
titulo += " [%s]" % unicode(info, "utf-8").capitalize().encode("utf-8")
scrapedthumbnail = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', scrapedthumbnail))
if not scrapedthumbnail.startswith("http"):
scrapedthumbnail = "http:" + scrapedthumbnail
scrapedthumbnail = scrapedthumbnail.replace("-129x180", "")
scrapedthumbnail = scrapedthumbnail.rsplit("/", 1)[0] + "/" + \
urllib.quote(scrapedthumbnail.rsplit("/", 1)[1])
itemlist.append(item.clone(action=action, title=titulo, url=scrapedurl, thumbnail=scrapedthumbnail,
fulltitle=scrapedtitle, contentTitle=scrapedtitle, viewmode="movie_with_plot",
show=show, contentType="movie"))
# Pagination
next_page = scrapertools.find_single_match(data, '<a class="nextpostslink".*?href="([^"]+)"')
if next_page:
next_page = urllib.unquote(re.sub(r'&amp;b=4|/go\.php\?u=', '', next_page))
itemlist.append(item.clone(title=">> Siguiente", url=next_page, text_color=color3))
return itemlist
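Thumbnail URLs in entradas() get three fixes: a scheme is prepended to protocol-relative links, the -129x180 size suffix is dropped to fetch the full image, and the file name is percent-encoded because it may carry spaces or accents. The same steps in isolation (a sketch):

    import urllib

    def clean_thumb(thumb):
        if not thumb.startswith("http"):
            thumb = "http:" + thumb              # protocol-relative URL
        thumb = thumb.replace("-129x180", "")    # full-size image
        base, name = thumb.rsplit("/", 1)
        return base + "/" + urllib.quote(name)   # encode spaces/accents in the name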
def episodios(item):
logger.info()
itemlist = []
data = get_data(item.url)
patron = '(<ul class="menu" id="seasons-list">.*?<div class="section-box related-posts">)'
bloque = scrapertools.find_single_match(data, patron)
matches = scrapertools.find_multiple_matches(bloque, '<div class="polo".*?>(.*?)</div>')
for scrapedtitle in matches:
scrapedtitle = scrapedtitle.strip()
new_item = item.clone()
new_item.infoLabels['season'] = scrapedtitle.split(" ", 1)[0].split("x")[0]
new_item.infoLabels['episode'] = scrapedtitle.split(" ", 1)[0].split("x")[1]
if item.fulltitle != "Añadir esta serie a la videoteca":
title = item.fulltitle + " " + scrapedtitle.strip()
else:
title = scrapedtitle.strip()
itemlist.append(new_item.clone(action="findvideos", title=title, extra=scrapedtitle, fulltitle=title,
contentType="episode"))
itemlist.sort(key=lambda it: it.title, reverse=True)
item.plot = scrapertools.find_single_match(data, '<strong>SINOPSIS</strong>:(.*?)</p>')
if item.show != "" and item.extra == "":
itemlist.append(item.clone(channel="trailertools", title="Buscar Tráiler", action="buscartrailer", context="",
text_color="magenta"))
if config.get_videolibrary_support():
itemlist.append(Item(channel=item.channel, title="Añadir esta serie a la videoteca", url=item.url,
action="add_serie_to_library", extra="episodios", show=item.show,
text_color="green"))
try:
from core import tmdb
tmdb.set_infoLabels_itemlist(itemlist[:-2], __modo_grafico__)
except:
pass
return itemlist
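Episode rows arrive as strings like "2x05 - ..." and episodios() derives the season/episode infoLabels by splitting on the first space and then on "x". Worked on a sample label (format assumed from the code above):

    scrapedtitle = "2x05 - Nombre del episodio"
    code = scrapedtitle.split(" ", 1)[0]   # "2x05"
    season, episode = code.split("x")      # ("2", "05")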
def epienlaces(item):
logger.info()
itemlist = []
item.text_color = color3
data = get_data(item.url)
data = data.replace("\n", "").replace("\t", "")
# Links block
patron = '<div class="polo".*?>%s(.*?)(?:<div class="polo"|</li>)' % item.extra.strip()
bloque = scrapertools.find_single_match(data, patron)
patron = '<div class="episode-server">.*?data-sourcelk="([^"]+)"' \
'.*?data-server="([^"]+)"' \
'.*?<div class="caliycola">(.*?)</div>'
matches = scrapertools.find_multiple_matches(bloque, patron)
itemlist.append(item.clone(action="", title="Enlaces Online/Descarga", text_color=color1))
lista_enlaces = []
for scrapedurl, scrapedserver, scrapedcalidad in matches:
if scrapedserver == "ul":
scrapedserver = "uploadedto"
if scrapedserver == "streamin":
scrapedserver = "streaminto"
titulo = " %s [%s]" % (unicode(scrapedserver, "utf-8").capitalize().encode("utf-8"), scrapedcalidad)
# Download links
if scrapedserver == "magnet":
itemlist.insert(0,
item.clone(action="play", title=titulo, server="torrent", url=scrapedurl, extra=item.url))
else:
if servertools.is_server_enabled(scrapedserver):
try:
servers_module = __import__("servers." + scrapedserver)
lista_enlaces.append(item.clone(action="play", title=titulo, server=scrapedserver, url=scrapedurl,
extra=item.url))
except:
pass
lista_enlaces.reverse()
itemlist.extend(lista_enlaces)
if itemlist[0].server == "torrent":
itemlist.insert(0, item.clone(action="", title="Enlaces Torrent", text_color=color1))
return itemlist
def findvideos(item):
logger.info()
if (item.extra and item.extra != "findvideos") or item.path:
return epienlaces(item)
itemlist = []
item.text_color = color3
data = get_data(item.url)
item.plot = scrapertools.find_single_match(data, 'SINOPSIS(?:</span>|</strong>):(.*?)</p>')
year = scrapertools.find_single_match(data, '(?:<span class="bold">|<strong>)AÑO(?:</span>|</strong>):\s*(\d+)')
if year:
try:
from core import tmdb
item.infoLabels['year'] = year
tmdb.set_infoLabels_item(item, __modo_grafico__)
except:
pass
old_format = False
# Torrent pattern, old format
if "Enlaces de descarga</div>" in data:
old_format = True
matches = scrapertools.find_multiple_matches(data, 'class="separate3 magnet".*?href="([^"]+)"')
for scrapedurl in matches:
scrapedurl = scrapertools.find_single_match(scrapedurl, '(magnet.*)')
scrapedurl = urllib.unquote(re.sub(r'&amp;b=4', '', scrapedurl))
title = "[Torrent] "
title += urllib.unquote(scrapertools.find_single_match(scrapedurl, '(?i)dn=(.*?)WWW.DescargasMix'))
itemlist.append(item.clone(action="play", server="torrent", title=title, url=scrapedurl,
text_color="green"))
# Online links pattern
data_online = scrapertools.find_single_match(data, 'Ver online</div>(.*?)<div class="section-box related-posts">')
if data_online:
title = "Enlaces Online"
if '"l-latino2"' in data_online:
title += " [LAT]"
elif '"l-esp2"' in data_online:
title += " [ESP]"
elif '"l-vose2"' in data_online:
title += " [VOSE]"
patron = 'make_links.*?,[\'"]([^"\']+)["\']'
matches = scrapertools.find_multiple_matches(data_online, patron)
for i, code in enumerate(matches):
enlace = mostrar_enlaces(code)
enlaces = servertools.findvideos(data=enlace[0])
if enlaces and "peliculas.nu" not in enlaces:
if i == 0:
extra_info = scrapertools.find_single_match(data_online, '<span class="tooltiptext">(.*?)</span>')
size = scrapertools.find_single_match(data_online, '(?i)TAMAÑO:\s*(.*?)<').strip()
if size:
title += " [%s]" % size
new_item = item.clone(title=title, action="", text_color=color1)
if extra_info:
extra_info = scrapertools.htmlclean(extra_info)
new_item.infoLabels["plot"] = extra_info
new_item.title += " +INFO"
itemlist.append(new_item)
title = " Ver vídeo en " + enlaces[0][2]
itemlist.append(item.clone(action="play", server=enlaces[0][2], title=title, url=enlaces[0][1]))
scriptg = scrapertools.find_single_match(data, "<script type='text/javascript'>str='([^']+)'")
if scriptg:
gvideo = urllib.unquote_plus(scriptg.replace("@", "%"))
url = scrapertools.find_single_match(gvideo, 'src="([^"]+)"')
if url:
itemlist.append(item.clone(action="play", server="directo", url=url, extra=item.url,
title=" Ver vídeo en Googlevideo (Máxima calidad)"))
# Download links pattern
patron = '<div class="(?:floatLeft |)double(?:nuevo|)">(.*?)</div>(.*?)' \
'(?:<div(?: id="mirrors"|) class="(?:contentModuleSmall |)mirrors">|<div class="section-box related-' \
'posts">)'
bloques_descarga = scrapertools.find_multiple_matches(data, patron)
for title_bloque, bloque in bloques_descarga:
if title_bloque == "Ver online":
continue
if '"l-latino2"' in bloque:
title_bloque += " [LAT]"
elif '"l-esp2"' in bloque:
title_bloque += " [ESP]"
elif '"l-vose2"' in bloque:
title_bloque += " [VOSE]"
extra_info = scrapertools.find_single_match(bloque, '<span class="tooltiptext">(.*?)</span>')
size = scrapertools.find_single_match(bloque, '(?i)TAMAÑO:\s*(.*?)<').strip()
if size:
title_bloque += " [%s]" % size
new_item = item.clone(title=title_bloque, action="", text_color=color1)
if extra_info:
extra_info = scrapertools.htmlclean(extra_info)
new_item.infoLabels["plot"] = extra_info
new_item.title += " +INFO"
itemlist.append(new_item)
if '<div class="subiendo">' in bloque:
itemlist.append(item.clone(title=" Los enlaces se están subiendo", action=""))
continue
patron = 'class="separate.*? ([^"]+)".*?(?:make_links.*?,|href=)[\'"]([^"\']+)["\']'
matches = scrapertools.find_multiple_matches(bloque, patron)
for scrapedserver, scrapedurl in matches:
if (scrapedserver == "ul") | (scrapedserver == "uploaded"):
scrapedserver = "uploadedto"
titulo = unicode(scrapedserver, "utf-8").capitalize().encode("utf-8")
if titulo == "Magnet" and old_format:
continue
elif titulo == "Magnet" and not old_format:
title = " Enlace Torrent"
scrapedurl = scrapertools.find_single_match(scrapedurl, '(magnet.*)')
scrapedurl = urllib.unquote(re.sub(r'&amp;b=4', '', scrapedurl))
itemlist.append(item.clone(action="play", server="torrent", title=title, url=scrapedurl,
text_color="green"))
continue
if servertools.is_server_enabled(scrapedserver):
try:
servers_module = __import__("servers." + scrapedserver)
# Count how many links there are
urls = mostrar_enlaces(scrapedurl)
numero = str(len(urls))
titulo = " %s - Nº enlaces: %s" % (titulo, numero)
itemlist.append(item.clone(action="enlaces", title=titulo, extra=scrapedurl, server=scrapedserver))
except:
pass
itemlist.append(item.clone(channel="trailertools", title="Buscar Tráiler", action="buscartrailer", context="",
text_color="magenta"))
if item.extra != "findvideos" and config.get_videolibrary_support():
itemlist.append(Item(channel=item.channel, title="Añadir a la videoteca", action="add_pelicula_to_library",
extra="findvideos", url=item.url, infoLabels={'title': item.fulltitle},
fulltitle=item.fulltitle, text_color="green"))
return itemlist
def play(item):
logger.info()
itemlist = []
if not item.url.startswith("http") and not item.url.startswith("magnet"):
post = "source=%s&action=obtenerurl" % urllib.quote(item.url)
headers = {'X-Requested-With': 'XMLHttpRequest'}
data = httptools.downloadpage("%s/wp-admin/admin-ajax.php" % host.replace("https", "http"), post=post,
headers=headers, follow_redirects=False).data
url = scrapertools.find_single_match(data, 'url":"([^"]+)"').replace("\\", "")
if "enlacesmix" in url:
data = httptools.downloadpage(url, headers={'Referer': item.extra}, follow_redirects=False).data
url = scrapertools.find_single_match(data, '<iframe.*?src="([^"]+)"')
enlaces = servertools.findvideosbyserver(url, item.server)
if enlaces:
itemlist.append(item.clone(action="play", server=enlaces[0][2], url=enlaces[0][1]))
else:
itemlist.append(item.clone())
return itemlist
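When the stored URL is neither http nor magnet, play() treats it as an opaque token that the site's WordPress AJAX endpoint resolves. A rough sketch of that exchange with plain urllib2 (the channel uses httptools; form fields and header as in the code above):

    import re
    import urllib
    import urllib2

    def resolve_source(token, host="http://ddmix.net"):
        post = urllib.urlencode({"source": token, "action": "obtenerurl"})
        req = urllib2.Request(host + "/wp-admin/admin-ajax.php", post,
                              {"X-Requested-With": "XMLHttpRequest"})
        data = urllib2.urlopen(req).read()
        match = re.search(r'url":"([^"]+)"', data)
        return match.group(1).replace("\\", "") if match else ""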
def enlaces(item):
logger.info()
itemlist = []
urls = mostrar_enlaces(item.extra)
numero = len(urls)
for enlace in urls:
enlaces = servertools.findvideos(data=enlace)
if enlaces:
for link in enlaces:
if "/folder/" in enlace:
titulo = link[0]
else:
titulo = "%s - Enlace %s" % (item.title.split("-")[0], str(numero))
numero -= 1
itemlist.append(item.clone(action="play", server=link[2], title=titulo, url=link[1]))
itemlist.sort(key=lambda it: it.title)
return itemlist
def mostrar_enlaces(data):
import base64
data = data.split(",")
len_data = len(data)
urls = []
for i in range(0, len_data):
url = []
value1 = base64.b64decode(data[i])
value2 = value1.split("-")
for j in range(0, len(value2)):
url.append(chr(int(value2[j])))
urls.append("".join(url))
return urls
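mostrar_enlaces() reverses a simple obfuscation: each link is stored as a base64 string whose payload is a dash-separated list of character codes. Decoding one hand-made sample (the encoded value below was built for illustration):

    import base64

    encoded = "MTA0LTExNi0xMTYtMTEyLTU4LTQ3LTQ3"  # base64 of "104-116-116-112-58-47-47"
    codes = base64.b64decode(encoded).split("-")
    print "".join(chr(int(c)) for c in codes)      # -> http://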
def get_data(url_orig, get_host=False):
try:
if config.get_setting("url_error", "descargasmix"):
raise Exception
response = httptools.downloadpage(url_orig)
if not response.data or "urlopen error [Errno 1]" in str(response.code):
raise Exception
if get_host:
if response.url.endswith("/"):
response.url = response.url[:-1]
return response.url
except:
config.set_setting("url_error", True, "descargasmix")
import random
server_random = ['nl', 'de', 'us']
server = server_random[random.randint(0, 2)]
url = "https://%s.hideproxy.me/includes/process.php?action=update" % server
post = "u=%s&proxy_formdata_server=%s&allowCookies=1&encodeURL=0&encodePage=0&stripObjects=0&stripJS=0&go=" \
% (url_orig, server)
while True:
response = httptools.downloadpage(url, post, follow_redirects=False)
if response.headers.get("location"):
url = response.headers["location"]
post = ""
else:
if get_host:
target = urllib.unquote(scrapertools.find_single_match(url, 'u=([^&]+)&'))
if target.endswith("/"):
target = target[:-1]
if target and target != host:
return target
else:
return ""
break
return response.data
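On failure, get_data() retries through a hideproxy.me mirror and, because follow_redirects is off, chases each Location header by hand until the proxy stops redirecting. The loop pattern in miniature (a sketch; the real code above also re-derives the host from the u= parameter when get_host is set):

    from core import httptools

    url = "https://nl.hideproxy.me/includes/process.php?action=update"  # example mirror
    post = ""  # form body as built in get_data() above
    response = httptools.downloadpage(url, post, follow_redirects=False)
    while response.headers.get("location"):
        url = response.headers["location"]  # chase each 30x manually
        response = httptools.downloadpage(url, "", follow_redirects=False)
    data = response.data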

24
kodi/channels/discoverymx.json Executable file

@@ -0,0 +1,24 @@
{
"id": "discoverymx",
"name": "Discoverymx",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "discoverymx.png",
"banner": "discoverymx.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"latino",
"documentary"
]
}

173
kodi/channels/discoverymx.py Executable file
View File

@@ -0,0 +1,173 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, title="Documentales - Novedades", action="listvideos",
url="http://discoverymx.blogspot.com/"))
itemlist.append(Item(channel=item.channel, title="Documentales - Series Disponibles", action="DocuSeries",
url="http://discoverymx.blogspot.com/"))
itemlist.append(Item(channel=item.channel, title="Documentales - Tag", action="DocuTag",
url="http://discoverymx.blogspot.com/"))
itemlist.append(Item(channel=item.channel, title="Documentales - Archivo por meses", action="DocuARCHIVO",
url="http://discoverymx.blogspot.com/"))
return itemlist
def DocuSeries(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cache_page(item.url)
# Extract the entries (folders)
patronvideos = '<li><b><a href="([^"]+)" target="_blank">([^<]+)</a></b></li>'
matches = re.compile(patronvideos, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for match in matches:
scrapedurl = match[0]
scrapedtitle = match[1]
scrapedthumbnail = ""
scrapedplot = ""
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(Item(channel=item.channel, action="listvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, folder=True))
return itemlist
def DocuTag(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cache_page(item.url)
patronvideos = "<a dir='ltr' href='([^']+)'>([^<]+)</a>[^<]+<span class='label-count' dir='ltr'>(.+?)</span>"
matches = re.compile(patronvideos, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for match in matches:
scrapedurl = match[0]
scrapedtitle = match[1] + " " + match[2]
scrapedthumbnail = ""
scrapedplot = ""
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(Item(channel=item.channel, action="listvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, folder=True))
return itemlist
def DocuARCHIVO(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cache_page(item.url)
patronvideos = "<a class='post-count-link' href='([^']+)'>([^<]+)</a>[^<]+"
patronvideos += "<span class='post-count' dir='ltr'>(.+?)</span>"
matches = re.compile(patronvideos, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for match in matches:
scrapedurl = match[0]
scrapedtitle = match[1] + " " + match[2]
scrapedthumbnail = ""
scrapedplot = ""
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(Item(channel=item.channel, action="listvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, folder=True))
return itemlist
def listvideos(item):
logger.info()
itemlist = []
scrapedthumbnail = ""
scrapedplot = ""
# Download the page
data = scrapertools.cache_page(item.url)
patronvideos = "<h3 class='post-title entry-title'[^<]+"
patronvideos += "<a href='([^']+)'>([^<]+)</a>.*?"
patronvideos += "<div class='post-body entry-content'(.*?)<div class='post-footer'>"
matches = re.compile(patronvideos, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for match in matches:
scrapedtitle = match[1]
scrapedtitle = re.sub("<[^>]+>", " ", scrapedtitle)
scrapedtitle = scrapertools.unescape(scrapedtitle)
scrapedurl = match[0]
regexp = re.compile(r'src="(http[^"]+)"')
matchthumb = regexp.search(match[2])
if matchthumb is not None:
scrapedthumbnail = matchthumb.group(1)
matchplot = re.compile('<div align="center">(<img.*?)</span></div>', re.DOTALL).findall(match[2])
if len(matchplot) > 0:
scrapedplot = matchplot[0]
# print matchplot
else:
scrapedplot = ""
scrapedplot = re.sub("<[^>]+>", " ", scrapedplot)
scrapedplot = scrapertools.unescape(scrapedplot)
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
# Add to the XBMC listing
itemlist.append(Item(channel=item.channel, action="findvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, folder=True))
# Extract the next-page marker
patronvideos = "<a class='blog-pager-older-link' href='([^']+)'"
matches = re.compile(patronvideos, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
if len(matches) > 0:
scrapedtitle = "Página siguiente"
scrapedurl = urlparse.urljoin(item.url, matches[0])
scrapedthumbnail = ""
scrapedplot = ""
itemlist.append(Item(channel=item.channel, action="listvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, folder=True))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cachePage(item.url)
data = scrapertools.get_match(data, "<div class='post-body entry-content'(.*?)<div class='post-footer'>")
# Find the links to the videos
listavideos = servertools.findvideos(data)
for video in listavideos:
videotitle = scrapertools.unescape(video[0])
url = video[1]
server = video[2]
# xbmctools.addnewvideo( item.channel , "play" , category , server , , url , thumbnail , plot )
itemlist.append(Item(channel=item.channel, action="play", server=server, title=videotitle, url=url,
thumbnail=item.thumbnail, plot=item.plot, fulltitle=item.title, folder=False))
return itemlist
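servertools.findvideos() returns (title, url, server) tuples, which is why the loop above reads video[0], video[1] and video[2]; tuple unpacking expresses the same thing more directly (dummy value for illustration):

    listavideos = [("Video en Openload", "http://example.com/v", "openload")]
    for videotitle, url, server in listavideos:
        print server, url  # openload http://example.com/v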

63
kodi/channels/divxatope.json Executable file

@@ -0,0 +1,63 @@
{
"id": "divxatope",
"name": "Divxatope",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "divxatope.png",
"banner": "divxatope.png",
"version": 1,
"changes": [
{
"date": "17/04/17",
"description": "Reparados torrents, añadidas nuevas secciones"
},
{
"date": "13/01/17",
"description": "Reparados torrents y paginacion. Añadida seccion Peliculas 4K ultraHD"
},
{
"date": "31/12/16",
"description": "Adaptado, por cambios en la web"
},
{
"date": "01/07/16",
"description": "Eliminado código innecesario."
},
{
"date": "29/04/16",
"description": "Adaptar a Buscador global y Novedades Peliculas y Series"
}
],
"categories": [
"torrent",
"movie",
"tvshow"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Peliculas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_series",
"type": "bool",
"label": "Incluir en Novedades - Episodios de series",
"default": true,
"enabled": true,
"visible": true
}
]
}

347
kodi/channels/divxatope.py Executable file

@@ -0,0 +1,347 @@
# -*- coding: utf-8 -*-
import re
import urllib
import urlparse
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, action="menu", title="Películas", url="http://www.divxatope1.com/",
extra="Peliculas", folder=True))
itemlist.append(
Item(channel=item.channel, action="menu", title="Series", url="http://www.divxatope1.com", extra="Series",
folder=True))
itemlist.append(Item(channel=item.channel, action="search", title="Buscar..."))
return itemlist
def menu(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
# logger.info("data="+data)
data = scrapertools.find_single_match(data, item.extra + "</a[^<]+<ul(.*?)</ul>")
# logger.info("data="+data)
patron = "<li><a.*?href='([^']+)'[^>]+>([^<]+)</a></li>"
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
title = scrapedtitle
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = ""
plot = ""
itemlist.append(Item(channel=item.channel, action="lista", title=title, url=url, thumbnail=thumbnail, plot=plot,
folder=True))
if title != "Todas las Peliculas":
itemlist.append(
Item(channel=item.channel, action="alfabetico", title=title + " [A-Z]", url=url, thumbnail=thumbnail,
plot=plot, folder=True))
if item.extra == "Peliculas":
title = "4k UltraHD"
url = "http://divxatope1.com/peliculas-hd/4kultrahd/"
thumbnail = ""
plot = ""
itemlist.append(Item(channel=item.channel, action="lista", title=title, url=url, thumbnail=thumbnail, plot=plot,
folder=True))
itemlist.append(
Item(channel=item.channel, action="alfabetico", title=title + " [A-Z]", url=url, thumbnail=thumbnail,
plot=plot,
folder=True))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = "http://www.divxatope1.com/buscar/descargas"
item.extra = urllib.urlencode({'q': texto})
try:
itemlist = lista(item)
# This site sometimes lists duplicate content; drop repeats while keeping order
# (removing items from the list while iterating it would skip elements)
dict_aux = {}
deduped = []
for i in itemlist:
if i.url not in dict_aux:
dict_aux[i.url] = i
deduped.append(i)
return deduped
# Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def newest(categoria):
itemlist = []
item = Item()
try:
if categoria == 'peliculas':
item.url = "http://www.divxatope1.com/peliculas"
elif categoria == 'series':
item.url = "http://www.divxatope1.com/series"
else:
return []
itemlist = lista(item)
if itemlist[-1].title == ">> Página siguiente":
itemlist.pop()
# This site sometimes lists duplicate content; drop repeats while keeping order
# (removing items from the list while iterating it would skip elements)
dict_aux = {}
deduped = []
for i in itemlist:
if i.url not in dict_aux:
dict_aux[i.url] = i
deduped.append(i)
itemlist = deduped
# Catch the exception so a failing channel does not break the "newest" listing
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
# return dict_aux.values()
return itemlist
def alfabetico(item):
logger.info()
itemlist = []
data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url).data)
data = unicode(data, "iso-8859-1", errors="replace").encode("utf-8")
patron = '<ul class="alfabeto">(.*?)</ul>'
data = scrapertools.get_match(data, patron)
patron = '<a href="([^"]+)"[^>]+>([^>]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
title = scrapedtitle.upper()
url = scrapedurl
itemlist.append(Item(channel=item.channel, action="lista", title=title, url=url))
return itemlist
def lista(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url, post=item.extra).data
# logger.info("data="+data)
bloque = scrapertools.find_single_match(data, '(?:<ul class="pelilist">|<ul class="buscar-list">)(.*?)</ul>')
patron = '<li[^<]+'
patron += '<a href="([^"]+)".*?'
patron += 'src="([^"]+)".*?'
patron += '<h2[^>]*>(.*?)</h2.*?'
patron += '(?:<strong[^>]*>|<span[^>]*>)(.*?)(?:</strong>|</span>)'
matches = re.compile(patron, re.DOTALL).findall(bloque)
scrapertools.printMatches(matches)
for scrapedurl, scrapedthumbnail, scrapedtitle, calidad in matches:
scrapedtitle = scrapertools.htmlclean(scrapedtitle)
title = scrapedtitle.strip()
if scrapertools.htmlclean(calidad):
title += " (" + scrapertools.htmlclean(calidad) + ")"
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
plot = ""
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
contentTitle = scrapertools.htmlclean(scrapedtitle).strip()
patron = '([^<]+)<br>'
matches = re.compile(patron, re.DOTALL).findall(calidad + '<br>')
idioma = ''
if "divxatope1.com/serie" in url:
contentTitle = re.sub('\s+-|\.{3}$', '', contentTitle)
capitulo = ''
temporada = 0
episodio = 0
if len(matches) == 3:
calidad = matches[0].strip()
idioma = matches[1].strip()
capitulo = matches[2].replace('Cap', 'x').replace('Temp', '').replace(' ', '')
temporada, episodio = capitulo.strip().split('x')
itemlist.append(Item(channel=item.channel, action="episodios", title=title, fulltitle=title, url=url,
thumbnail=thumbnail, plot=plot, folder=True, contentTitle=contentTitle,
language=idioma, contentSeason=int(temporada),
contentEpisodeNumber=int(episodio), contentQuality=calidad))
else:
if len(matches) == 2:
calidad = matches[0].strip()
idioma = matches[1].strip()
itemlist.append(Item(channel=item.channel, action="findvideos", title=title, fulltitle=title, url=url,
thumbnail=thumbnail, plot=plot, folder=True, contentTitle=contentTitle,
language=idioma, contentThumbnail=thumbnail, contentQuality=calidad))
next_page_url = scrapertools.find_single_match(data, '<li><a href="([^"]+)">Next</a></li>')
if next_page_url != "":
itemlist.append(Item(channel=item.channel, action="lista", title=">> Página siguiente",
url=urlparse.urljoin(item.url, next_page_url), folder=True))
else:
next_page_url = scrapertools.find_single_match(data,
'<li><input type="button" class="btn-submit" value="Siguiente" onClick="paginar..(\d+)')
if next_page_url != "":
itemlist.append(Item(channel=item.channel, action="lista", title=">> Página siguiente", url=item.url,
extra=item.extra + "&pg=" + next_page_url, folder=True))
return itemlist
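For series rows the quality cell carries three <br>-separated values: quality, language and a season/episode marker that lista() rewrites into NxM before splitting. Worked on a sample marker (format assumed from the replaces in the code):

    capitulo = "Temp 2 Cap 5"
    capitulo = capitulo.replace('Cap', 'x').replace('Temp', '').replace(' ', '')  # "2x5"
    temporada, episodio = capitulo.strip().split('x')                             # ('2', '5')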
def episodios(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url, post=item.extra).data
# logger.info("data="+data)
patron = '<div class="chap-desc"[^<]+'
patron += '<a class="chap-title".*?href="([^"]+)" title="([^"]+)"[^<]+'
matches = re.compile(patron, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
for scrapedurl, scrapedtitle in matches:
title = scrapedtitle.strip()
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = ""
plot = ""
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="findvideos", title=title, fulltitle=title, url=url, thumbnail=thumbnail,
plot=plot, folder=True))
next_page_url = scrapertools.find_single_match(data, "<a class='active' href=[^<]+</a><a\s*href='([^']+)'")
if next_page_url != "":
itemlist.append(Item(channel=item.channel, action="episodios", title=">> Página siguiente",
url=urlparse.urljoin(item.url, next_page_url), folder=True))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
# Download the page
data = httptools.downloadpage(item.url).data
item.plot = scrapertools.find_single_match(data, '<div class="post-entry" style="height:300px;">(.*?)</div>')
item.plot = scrapertools.htmlclean(item.plot).strip()
item.contentPlot = item.plot
link = scrapertools.find_single_match(data, 'href="http://(?:tumejorserie|tumejorjuego).*?link=([^"]+)"')
if link != "":
link = "http://www.divxatope1.com/" + link
logger.info("torrent=" + link)
itemlist.append(
Item(channel=item.channel, action="play", server="torrent", title="Vídeo en torrent", fulltitle=item.title,
url=link, thumbnail=servertools.guess_server_thumbnail("torrent"), plot=item.plot, folder=False,
parentContent=item))
patron = "<div class=\"box1\"[^<]+<img[^<]+</div[^<]+"
patron += '<div class="box2">([^<]+)</div[^<]+'
patron += '<div class="box3">([^<]+)</div[^<]+'
patron += '<div class="box4">([^<]+)</div[^<]+'
patron += '<div class="box5">(.*?)</div[^<]+'
patron += '<div class="box6">([^<]+)<'
matches = re.compile(patron, re.DOTALL).findall(data)
scrapertools.printMatches(matches)
itemlist_ver = []
itemlist_descargar = []
for servername, idioma, calidad, scrapedurl, comentarios in matches:
title = "Mirror en " + servername + " (" + calidad + ")" + " (" + idioma + ")"
servername = servername.replace("uploaded", "uploadedto").replace("1fichier", "onefichier")
if comentarios.strip() != "":
title = title + " (" + comentarios.strip() + ")"
url = urlparse.urljoin(item.url, scrapedurl)
mostrar_server = servertools.is_server_enabled(servername)
if mostrar_server:
thumbnail = servertools.guess_server_thumbnail(title)
plot = ""
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
action = "play"
if "partes" in title:
action = "extract_url"
new_item = Item(channel=item.channel, action=action, title=title, fulltitle=title, url=url,
thumbnail=thumbnail, plot=plot, parentContent=item)
if comentarios.startswith("Ver en"):
itemlist_ver.append(new_item)
else:
itemlist_descargar.append(new_item)
for new_item in itemlist_ver:
itemlist.append(new_item)
for new_item in itemlist_descargar:
itemlist.append(new_item)
return itemlist
def extract_url(item):
logger.info()
itemlist = servertools.find_video_items(data=item.url)
for videoitem in itemlist:
videoitem.title = "Enlace encontrado en " + videoitem.server + " (" + scrapertools.get_filename_from_url(
videoitem.url) + ")"
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
return itemlist
def play(item):
logger.info()
if item.server != "torrent":
itemlist = servertools.find_video_items(data=item.url)
for videoitem in itemlist:
videoitem.title = "Enlace encontrado en " + videoitem.server + " (" + scrapertools.get_filename_from_url(
videoitem.url) + ")"
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
else:
itemlist = [item]
return itemlist

42
kodi/channels/divxtotal.json Executable file

@@ -0,0 +1,42 @@
{
"id": "divxtotal",
"name": "Divxtotal",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://imgur.com/Madj03A.jpg",
"version": 1,
"changes": [
{
"date": "26/04/2017",
"description": "Release"
},
{
"date": "28/06/2017",
"description": "Correciones código por cambios web"
}
],
"categories": [
"torrent",
"movie",
"tvshow"
],
"settings": [
{
"id": "modo_grafico",
"type": "bool",
"label": "Buscar información extra",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

1025
kodi/channels/divxtotal.py Executable file

File diff suppressed because it is too large

58
kodi/channels/documaniatv.json Executable file

@@ -0,0 +1,58 @@
{
"id": "documaniatv",
"name": "DocumaniaTV",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/qMR9sg9.png",
"banner": "documaniatv.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "11/07/2016",
"description": "Reparadas cuentas de usuario."
}
],
"categories": [
"documentary"
],
"settings": [
{
"id": "include_in_newest_documentales",
"type": "bool",
"label": "Incluir en Novedades - Documentales",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "documaniatvaccount",
"type": "bool",
"label": "Usar cuenta de documaniatv",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "documaniatvuser",
"type": "text",
"label": "Usuario",
"color": "0xFFd50b0b",
"enabled": "eq(-1,true)",
"visible": true
},
{
"id": "documaniatvpassword",
"type": "text",
"label": "Contraseña",
"color": "0xFFd50b0b",
"enabled": "!eq(-1,)+eq(-2,true)",
"visible": true,
"hidden": true
}
]
}

374
kodi/channels/documaniatv.py Executable file

@@ -0,0 +1,374 @@
# -*- coding: utf-8 -*-
import re
import urllib
import urlparse
from core import config
from core import jsontools
from core import logger
from core import scrapertools
from core.item import Item
host = "http://www.documaniatv.com/"
account = config.get_setting("documaniatvaccount", "documaniatv")
headers = [['User-Agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0'],
['Referer', host]]
def login():
logger.info()
user = config.get_setting("documaniatvuser", "documaniatv")
password = config.get_setting("documaniatvpassword", "documaniatv")
if user == "" or password == "":
return True, ""
data = scrapertools.cachePage(host, headers=headers)
if "http://www.documaniatv.com/user/" + user in data:
return False, user
post = "username=%s&pass=%s&Login=Iniciar Sesión" % (user, password)
data = scrapertools.cachePage("http://www.documaniatv.com/login.php", headers=headers, post=post)
if "Nombre de usuario o contraseña incorrectas" in data:
logger.error("login erróneo")
return True, ""
return False, user
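login() is cookie-based: it loads the home page, looks for the user's profile link to detect an existing session, and only then posts the credentials to login.php. The same flow sketched with requests (an assumption; the channel relies on the cookie jar shared by scrapertools.cachePage):

    import requests

    def login(user, password):
        s = requests.Session()
        home = s.get("http://www.documaniatv.com/").text
        if "http://www.documaniatv.com/user/" + user in home:
            return s  # already logged in
        s.post("http://www.documaniatv.com/login.php",
               data={"username": user, "pass": password, "Login": "Iniciar Sesión"})
        return s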
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(action="novedades", title="Novedades", url="http://www.documaniatv.com/newvideos.html"))
itemlist.append(
item.clone(action="categorias", title="Categorías y Canales", url="http://www.documaniatv.com/browse.html"))
itemlist.append(item.clone(action="novedades", title="Top", url="http://www.documaniatv.com/topvideos.html"))
itemlist.append(item.clone(action="categorias", title="Series Documentales",
url="http://www.documaniatv.com/top-series-documentales-html"))
itemlist.append(item.clone(action="viendo", title="Viendo ahora", url="http://www.documaniatv.com"))
itemlist.append(item.clone(action="", title=""))
itemlist.append(item.clone(action="search", title="Buscar"))
folder = False
action = ""
if account:
error, user = login()
if error:
title = "Playlists Personales (Error en usuario y/o contraseña)"
else:
title = "Playlists Personales (Logueado)"
action = "usuario"
folder = True
else:
title = "Playlists Personales (Sin cuenta configurada)"
user = ""
url = "http://www.documaniatv.com/user/%s" % user
itemlist.append(item.clone(title=title, action=action, url=url, folder=folder))
itemlist.append(item.clone(title="Configurar canal...", text_color="gold", action="configuracion",
folder=False))
return itemlist
def configuracion(item):
from platformcode import platformtools
platformtools.show_channel_settings()
if config.is_xbmc():
import xbmc
xbmc.executebuiltin("Container.Refresh")
def newest(categoria):
itemlist = []
item = Item()
try:
if categoria == 'documentales':
item.url = "http://www.documaniatv.com/newvideos.html"
itemlist = novedades(item)
if itemlist[-1].action == "novedades":
itemlist.pop()
# Catch the exception so a failing channel does not break the "newest" listing
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def search(item, texto):
logger.info()
data = scrapertools.cachePage(host, headers=headers)
item.url = scrapertools.find_single_match(data, 'form action="([^"]+)"') + "?keywords=%s&video-id="
texto = texto.replace(" ", "+")
item.url = item.url % texto
try:
return novedades(item)
# Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def novedades(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cachePage(item.url, headers=headers)
# Extract the plot if available
scrapedplot = scrapertools.find_single_match(data, '<div class="pm-section-head">(.*?)</div>')
if "<div" in scrapedplot:
scrapedplot = ""
else:
scrapedplot = scrapertools.htmlclean(scrapedplot)
bloque = scrapertools.find_multiple_matches(data, '<li class="col-xs-[\d] col-sm-[\d] col-md-[\d]">(.*?)</li>')
if "Registrarse" in data or not account:
for match in bloque:
patron = '<span class="pm-label-duration">(.*?)</span>.*?<a href="([^"]+)"' \
'.*?title="([^"]+)".*?data-echo="([^"]+)"'
matches = scrapertools.find_multiple_matches(match, patron)
for duracion, scrapedurl, scrapedtitle, scrapedthumbnail in matches:
contentTitle = scrapedtitle[:]
scrapedtitle += " [" + duracion + "]"
if not scrapedthumbnail.startswith("data:image"):
scrapedthumbnail += "|" + headers[0][0] + "=" + headers[0][1]
else:
scrapedthumbnail = item.thumbnail
logger.debug(
"title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(item.clone(action="play_", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, fanart=scrapedthumbnail, plot=scrapedplot,
fulltitle=scrapedtitle, contentTitle=contentTitle, folder=False))
else:
for match in bloque:
patron = '<span class="pm-label-duration">(.*?)</span>.*?onclick="watch_later_add\(([\d]+)\)' \
'.*?<a href="([^"]+)".*?title="([^"]+)".*?data-echo="([^"]+)"'
matches = scrapertools.find_multiple_matches(match, patron)
for duracion, video_id, scrapedurl, scrapedtitle, scrapedthumbnail in matches:
contentTitle = scrapedtitle[:]
scrapedtitle += " [" + duracion + "]"
if not scrapedthumbnail.startswith("data:image"):
scrapedthumbnail += "|" + headers[0][0] + "=" + headers[0][1]
else:
scrapedthumbnail = item.thumbnail
logger.debug(
"title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(item.clone(action="findvideos", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, fanart=scrapedthumbnail, plot=scrapedplot,
id=video_id,
fulltitle=scrapedtitle, contentTitle=contentTitle))
# Look for next-page links...
try:
next_page_url = scrapertools.get_match(data, '<a href="([^"]+)">&raquo;</a>')
next_page_url = urlparse.urljoin(host, next_page_url)
itemlist.append(item.clone(action="novedades", title=">> Página siguiente", url=next_page_url))
except:
logger.error("Siguiente pagina no encontrada")
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = scrapertools.cachePage(item.url, headers=headers)
patron = '<div class="pm-li-category">.*?<a href="([^"]+)"' \
'.*?<img src="([^"]+)".*?<h3>(?:<a.*?><span.*?>|)(.*?)<'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
if not scrapedthumbnail.startswith("data:image"):
scrapedthumbnail += "|" + headers[0][0] + "=" + headers[0][1]
else:
scrapedthumbnail = item.thumbnail
itemlist.append(item.clone(action="novedades", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
fanart=scrapedthumbnail))
# Look for next-page links...
next_page_url = scrapertools.find_single_match(data, '<a href="([^"]+)"><i class="fa fa-arrow-right">')
if next_page_url != "":
itemlist.append(item.clone(action="categorias", title=">> Página siguiente", url=next_page_url))
return itemlist
def viendo(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cachePage(item.url, headers=headers)
bloque = scrapertools.find_single_match(data, '<ul class="pm-ul-carousel-videos list-inline"(.*?)</ul>')
patron = '<span class="pm-label-duration">(.*?)</span>.*?<a href="([^"]+)"' \
'.*?title="([^"]+)".*?data-echo="([^"]+)"'
matches = scrapertools.find_multiple_matches(bloque, patron)
for duracion, scrapedurl, scrapedtitle, scrapedthumbnail in matches:
scrapedtitle += " [" + duracion + "]"
if not scrapedthumbnail.startswith("data:image"):
scrapedthumbnail += "|" + headers[0][0] + "=" + headers[0][1]
else:
scrapedthumbnail = item.thumbnail
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(item.clone(action="play_", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
fanart=scrapedthumbnail, fulltitle=scrapedtitle))
return itemlist
def findvideos(item):
logger.info()
itemlist = []
# Check whether the video is already in favourites/watch later
url = "http://www.documaniatv.com/ajax.php?p=playlists&do=video-watch-load-my-playlists&video-id=%s" % item.id
data = scrapertools.cachePage(url, headers=headers)
data = jsontools.load(data)
data = re.sub(r"\n|\r|\t", '', data['html'])
itemlist.append(item.clone(action="play_", title=">> Reproducir vídeo", folder=False))
if "kodi" in config.get_platform():
folder = False
else:
folder = True
patron = '<li data-playlist-id="([^"]+)".*?onclick="playlist_(\w+)_item' \
'.*?<span class="pm-playlists-name">(.*?)</span>.*?' \
'<span class="pm-playlists-video-count">(.*?)</span>'
matches = scrapertools.find_multiple_matches(data, patron)
for playlist_id, playlist_action, playlist_title, video_count in matches:
scrapedtitle = playlist_action.replace('remove', 'Eliminar de ').replace('add', 'Añadir a ')
scrapedtitle += playlist_title + " (" + video_count + ")"
itemlist.append(item.clone(action="acciones_playlist", title=scrapedtitle, list_id=playlist_id,
url="http://www.documaniatv.com/ajax.php", folder=folder))
if "kodi" in config.get_platform():
itemlist.append(item.clone(action="acciones_playlist", title="Crear una nueva playlist y añadir el documental",
id=item.id, url="http://www.documaniatv.com/ajax.php", folder=folder))
itemlist.append(item.clone(action="acciones_playlist", title="Me gusta", url="http://www.documaniatv.com/ajax.php",
folder=folder))
return itemlist
def play_(item):
logger.info()
itemlist = []
try:
import xbmc
if not xbmc.getCondVisibility('System.HasAddon(script.cnubis)'):
from platformcode import platformtools
platformtools.dialog_ok("Addon no encontrado",
"Para ver vídeos alojados en cnubis necesitas tener su instalado su add-on",
line3="Descárgalo en http://cnubis.com/kodi-pelisalacarta.html")
return itemlist
except:
pass
# Download the page
data = scrapertools.cachePage(item.url, headers=headers)
# Look for a direct link
video_url = scrapertools.find_single_match(data, 'class="embedded-video"[^<]+<iframe.*?src="([^"]+)"')
if config.get_platform() == "plex" or config.get_platform() == "mediaserver":
code = scrapertools.find_single_match(video_url, 'u=([A-z0-9]+)')
url = "http://cnubis.com/plugins/mediaplayer/embeder/_embedkodi.php?u=%s" % code
data = scrapertools.downloadpage(url, headers=headers)
video_url = scrapertools.find_single_match(data, 'file\s*:\s*"([^"]+)"')
itemlist.append(item.clone(action="play", url=video_url, server="directo"))
return itemlist
cnubis_script = xbmc.translatePath("special://home/addons/script.cnubis/default.py")
xbmc.executebuiltin("XBMC.RunScript(%s, url=%s&referer=%s&title=%s)"
% (cnubis_script, urllib.quote_plus(video_url), urllib.quote_plus(item.url),
item.fulltitle))
return itemlist
def usuario(item):
logger.info()
itemlist = []
data = scrapertools.cachePage(item.url, headers=headers)
profile_id = scrapertools.find_single_match(data, 'data-profile-id="([^"]+)"')
url = "http://www.documaniatv.com/ajax.php?p=profile&do=profile-load-playlists&uid=%s" % profile_id
data = scrapertools.cachePage(url, headers=headers)
data = jsontools.load(data)
data = data['html']
patron = '<div class="pm-video-thumb">.*?src="([^"]+)".*?' \
'<span class="pm-pl-items">(.*?)</span>(.*?)</div>' \
'.*?<h3.*?href="([^"]+)".*?title="([^"]+)"'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedthumbnail, items, videos, scrapedurl, scrapedtitle in matches:
scrapedtitle = scrapedtitle.replace("Historia", 'Historial')
scrapedtitle += " (" + items + videos + ")"
if "no-thumbnail" in scrapedthumbnail:
scrapedthumbnail = ""
else:
scrapedthumbnail += "|" + headers[0][0] + "=" + headers[0][1]
itemlist.append(item.clone(action="playlist", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, fanart=scrapedthumbnail))
return itemlist
def acciones_playlist(item):
logger.info()
itemlist = []
if item.title == "Crear una nueva playlist y añadir el documental":
from platformcode import platformtools
texto = platformtools.dialog_input(heading="Introduce el título de la nueva playlist")
if texto is not None:
post = "p=playlists&do=create-playlist&title=%s&visibility=1&video-id=%s&ui=video-watch" % (texto, item.id)
data = scrapertools.cachePage(item.url, headers=headers, post=post)
else:
return
elif item.title != "Me gusta":
if "Eliminar" in item.title:
action = "remove-from-playlist"
else:
action = "add-to-playlist"
post = "p=playlists&do=%s&playlist-id=%s&video-id=%s" % (action, item.list_id, item.id)
data = scrapertools.cachePage(item.url, headers=headers, post=post)
else:
item.url = "http://www.documaniatv.com/ajax.php?vid=%s&p=video&do=like" % item.id
data = scrapertools.cachePage(item.url, headers=headers)
try:
import xbmc
from platformcode import platformtools
platformtools.dialog_notification(item.title, "Se ha añadido/eliminado correctamente")
xbmc.executebuiltin("Container.Refresh")
except:
itemlist.append(item.clone(action="", title="Se ha añadido/eliminado correctamente"))
return itemlist
def playlist(item):
logger.info()
itemlist = []
data = scrapertools.cachePage(item.url, headers=headers)
patron = '<div class="pm-pl-list-index.*?src="([^"]+)".*?' \
'<a href="([^"]+)".*?>(.*?)</a>'
matches = scrapertools.find_multiple_matches(data, patron)
for scrapedthumbnail, scrapedurl, scrapedtitle in matches:
scrapedthumbnail += "|" + headers[0][0] + "=" + headers[0][1]
logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "], thumbnail=[" + scrapedthumbnail + "]")
itemlist.append(item.clone(action="play_", title=scrapedtitle, url=scrapedurl, thumbnail=scrapedthumbnail,
fanart=scrapedthumbnail, fulltitle=scrapedtitle, folder=False))
return itemlist

32
kodi/channels/documentalesonline.json Executable file

@@ -0,0 +1,32 @@
{
"id": "documentalesonline",
"name": "Documentales Online",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "http://i.imgur.com/fsrnC4m.jpg",
"version": 1,
"changes": [
{
"date": "15/04/2017",
"description": "fix novedades"
},
{
"date": "09/03/2017",
"description": "nueva web"
}
],
"categories": [
"documentary"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": true,
"visible": true
}
]
}

155
kodi/channels/documentalesonline.py Executable file

@@ -0,0 +1,155 @@
# -*- coding: utf-8 -*-
import re
from core import httptools
from core import logger
from core import scrapertools
from core.item import Item
HOST = "http://documentales-online.com/"
def mainlist(item):
logger.info()
itemlist = list()
itemlist.append(Item(channel=item.channel, title="Novedades", action="listado", url=HOST))
itemlist.append(Item(channel=item.channel, title="Destacados", action="seccion", url=HOST, extra="destacados"))
itemlist.append(Item(channel=item.channel, title="Series Destacadas", action="seccion", url=HOST, extra="series"))
# itemlist.append(Item(channel=item.channel, title="Top 100", action="categorias", url=HOST))
# itemlist.append(Item(channel=item.channel, title="Populares", action="categorias", url=HOST))
itemlist.append(Item(channel=item.channel, title="Buscar por:"))
itemlist.append(Item(channel=item.channel, title=" Título", action="search"))
itemlist.append(Item(channel=item.channel, title=" Categorías", action="categorias", url=HOST))
# itemlist.append(Item(channel=item.channel, title=" Series y Temas", action="categorias", url=HOST))
return itemlist
def seccion(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
if item.extra == "destacados":
patron_seccion = '<h4 class="widget-title">Destacados</h4><div class="textwidget"><ul>(.*?)</ul>'
action = "findvideos"
else:
patron_seccion = '<h4 class="widget-title">Series destacadas</h4><div class="textwidget"><ul>(.*?)</ul>'
action = "listado"
data = scrapertools.find_single_match(data, patron_seccion)
matches = re.compile('<a href="([^"]+)">(.*?)</a>', re.DOTALL).findall(data)
aux_action = action
for url, title in matches:
if item.extra != "destacados" and "Cosmos (Carl Sagan)" in title:
action = "findvideos"
else:
action = aux_action
itemlist.append(item.clone(title=title, url=url, action=action, fulltitle=title))
return itemlist
def listado(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
pagination = scrapertools.find_single_match(data, '<div class="older"><a href="([^"]+)"')
if not pagination:
pagination = scrapertools.find_single_match(data, '<span class=\'current\'>\d</span>'
'<a class="page larger" href="([^"]+)">')
patron = '<ul class="sp-grid">(.*?)</ul>'
data = scrapertools.find_single_match(data, patron)
matches = re.compile('<a href="([^"]+)">(.*?)</a>.*?<img.*?src="([^"]+)"', re.DOTALL).findall(data)
for url, title, thumb in matches:
itemlist.append(item.clone(title=title, url=url, action="findvideos", fulltitle=title, thumbnail=thumb))
if pagination:
itemlist.append(item.clone(title=">> Página siguiente", url=pagination))
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
data = scrapertools.find_single_match(data, 'a href="#">Categorías</a><ul class="sub-menu">(.*?)</ul>')
matches = re.compile('<a href="([^"]+)">(.*?)</a>', re.DOTALL).findall(data)
for url, title in matches:
itemlist.append(item.clone(title=title, url=url, action="listado", fulltitle=title))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
try:
item.url = HOST + "?s=%s" % texto
return listado(item)
# Catch the exception so a failing channel does not break the global search
except:
import sys
for line in sys.exc_info():
logger.error("%s" % line)
return []
def findvideos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
data = re.sub(r"\n|\r|\t|\s{2}|-\s", "", data)
if item.fulltitle == "Cosmos (Carl Sagan)":
matches = scrapertools.find_multiple_matches(data,
'<p><strong>(.*?)</strong><br /><iframe.+?src="(https://www\.youtube\.com/[^?]+)')
for title, url in matches:
new_item = item.clone(title=title, url=url)
from core import servertools
aux_itemlist = servertools.find_video_items(new_item)
for videoitem in aux_itemlist:
videoitem.title = new_item.title
videoitem.fulltitle = new_item.title
videoitem.channel = item.channel
# videoitem.thumbnail = item.thumbnail
itemlist.extend(aux_itemlist)
else:
data = scrapertools.find_multiple_matches(data, '<iframe.+?src="(https://www\.youtube\.com/[^?]+)')
from core import servertools
itemlist.extend(servertools.find_video_items(data=",".join(data)))
for videoitem in itemlist:
videoitem.fulltitle = item.fulltitle
videoitem.channel = item.channel
# videoitem.thumbnail = item.thumbnail
return itemlist
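In the generic branch every YouTube iframe URL on the page is collected and the list is joined with commas, so servertools.find_video_items() can regex-scan one blob of text for known servers. In miniature (hypothetical URLs):

    from core import servertools

    data = ["https://www.youtube.com/embed/aaa", "https://www.youtube.com/embed/bbb"]
    itemlist = servertools.find_video_items(data=",".join(data))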

77
kodi/channels/doomtv.json Executable file

@@ -0,0 +1,77 @@
{
"id": "doomtv",
"name": "doomtv",
"compatible": {
"addon_version": "4.3"
},
"active": true,
"adult": false,
"language": "es",
"thumbnail": "https://s2.postimg.org/jivgi4ak9/doomtv.png",
"banner": "https://s32.postimg.org/6gxyripvp/doomtv_banner.png",
"version": 1,
"changes": [
{
"date": "24/06/2017",
"description": "Cambios para autoplay"
},
{
"date": "06/06/2017",
"description": "COmpatibilida con AutoPlay"
},
{
"date": "12/05/2017",
"description": "Fix generos y enlaces"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/02/2017",
"description": "Release."
}
],
"categories": [
"latino",
"movie"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": false
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostrar enlaces en idioma...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": [
"No filtrar",
"Latino"
]
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Peliculas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_infantiles",
"type": "bool",
"label": "Incluir en Novedades - Infantiles",
"default": true,
"enabled": true,
"visible": true
}
]
}

412
kodi/channels/doomtv.py Executable file
View File

@@ -0,0 +1,412 @@
# -*- coding: utf-8 -*-
import re
import urllib
import urlparse
from channels import autoplay
from channels import filtertools
from core import config
from core import httptools
from core import logger
from core import scrapertools
from core import servertools
from core import tmdb
from core.item import Item
IDIOMAS = {'latino': 'Latino'}
list_language = IDIOMAS.values()
CALIDADES = {'1080p': '1080p', '720p': '720p', '480p': '480p', '360p': '360p'}
list_quality = CALIDADES.values()
list_servers = ['directo']
host = 'http://doomtv.net/'
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0 Chrome/58.0.3029.110',
'Referer': host}
tgenero = {"Comedia": "https://s7.postimg.org/ne9g9zgwb/comedia.png",
"Suspenso": "https://s13.postimg.org/wmw6vl1cn/suspenso.png",
"Drama": "https://s16.postimg.org/94sia332d/drama.png",
"Acción": "https://s3.postimg.org/y6o9puflv/accion.png",
"Aventura": "https://s10.postimg.org/6su40czih/aventura.png",
"Romance": "https://s15.postimg.org/fb5j8cl63/romance.png",
"Animación": "https://s13.postimg.org/5on877l87/animacion.png",
"Ciencia Ficción": "https://s9.postimg.org/diu70s7j3/cienciaficcion.png",
"Terror": "https://s7.postimg.org/yi0gij3gb/terror.png",
"Documentales": "https://s16.postimg.org/7xjj4bmol/documental.png",
"Musical": "https://s29.postimg.org/bbxmdh9c7/musical.png",
"Fantasía": "https://s13.postimg.org/65ylohgvb/fantasia.png",
"Bélico Guerra": "https://s23.postimg.org/71itp9hcr/belica.png",
"Misterio": "https://s1.postimg.org/w7fdgf2vj/misterio.png",
"Crimen": "https://s4.postimg.org/6z27zhirx/crimen.png",
"Biográfia": "https://s15.postimg.org/5lrpbx323/biografia.png",
"Familia": "https://s7.postimg.org/6s7vdhqrf/familiar.png",
"Familiar": "https://s7.postimg.org/6s7vdhqrf/familiar.png",
"Intriga": "https://s27.postimg.org/v9og43u2b/intriga.png",
"Thriller": "https://s22.postimg.org/5y9g0jsu9/thriller.png",
"Guerra": "https://s4.postimg.org/n1h2jp2jh/guerra.png",
"Estrenos": "https://s21.postimg.org/fy69wzm93/estrenos.png",
"Peleas": "https://s14.postimg.org/we1oyg05t/peleas.png",
"Policiales": "https://s21.postimg.org/n9e0ci31z/policial.png",
"Uncategorized": "https://s30.postimg.org/uj5tslenl/otros.png",
"LGBT": "https://s30.postimg.org/uj5tslenl/otros.png"}
def mainlist(item):
logger.info()
autoplay.init(item.channel, list_servers, list_quality)
itemlist = []
itemlist.append(
item.clone(title="Todas",
action="lista",
thumbnail='https://s18.postimg.org/fwvaeo6qh/todas.png',
fanart='https://s18.postimg.org/fwvaeo6qh/todas.png',
url=host
))
itemlist.append(
item.clone(title="Generos",
action="seccion",
thumbnail='https://s3.postimg.org/5s9jg2wtf/generos.png',
fanart='https://s3.postimg.org/5s9jg2wtf/generos.png',
url=host,
extra='generos'
))
itemlist.append(
item.clone(title="Mas vistas",
action="seccion",
thumbnail='https://s9.postimg.org/wmhzu9d7z/vistas.png',
fanart='https://s9.postimg.org/wmhzu9d7z/vistas.png',
url=host,
extra='masvistas'
))
itemlist.append(
item.clone(title="Recomendadas",
action="lista",
thumbnail='https://s12.postimg.org/s881laywd/recomendadas.png',
fanart='https://s12.postimg.org/s881laywd/recomendadas.png',
url=host,
extra='recomendadas'
))
itemlist.append(
item.clone(title="Por año",
action="seccion",
thumbnail='https://s8.postimg.org/7eoedwfg5/pora_o.png',
fanart='https://s8.postimg.org/7eoedwfg5/pora_o.png',
url=host, extra='poraño'
))
itemlist.append(
item.clone(title="Buscar",
action="search",
url='http://doomtv.net/?s=',
thumbnail='https://s30.postimg.org/pei7txpa9/buscar.png',
fanart='https://s30.postimg.org/pei7txpa9/buscar.png'
))
autoplay.show_option(item.channel, itemlist)
return itemlist
def lista(item):
logger.info()
itemlist = []
max_items = 20
next_page_url = ''
data = httptools.downloadpage(item.url).data
if item.extra == 'recomendadas':
patron = '<a href="(.*?)">.*?'
patron += '<div class="imgss">.*?'
patron += '<img src="(.*?)" alt="(.*?)(?:.*?|\(.*?|&#8211;|").*?'
patron += '<div class="imdb">.*?'
patron += '<\/a>.*?'
patron += '<span class="ttps">.*?<\/span>.*?'
patron += '<span class="ytps">(.*?)<\/span><\/div>'
elif item.extra in ['generos', 'poraño', 'buscar']:
patron = '<div class=movie>.*?<img src=(.*?) alt=(.*?)(?:\s|\/)><a href=(.*?)>.*?'
patron += '<h2>.*?<\/h2>.*?(?:<span class=year>(.*?)<\/span>)?.*?<\/div>'
else:
patron = '<div class="imagen">.*?'
patron += '<img src="(.*?)" alt="(.*?)(?:.*?|\(.*?|&#8211;|").*?'
patron += '<a href="([^"]+)"><(?:span) class="player"><\/span><\/a>.*?'
patron += 'h2>\s*.*?(?:year)">(.*?)<\/span>.*?<\/div>'
matches = re.compile(patron, re.DOTALL).findall(data)
if item.next_page != 'b':
if len(matches) > max_items:
next_page_url = item.url
matches = matches[:max_items]
next_page = 'b'
else:
matches = matches[max_items:]
next_page = 'a'
patron_next_page = '<div class="siguiente"><a href="(.*?)"|\/\?'
matches_next_page = re.compile(patron_next_page, re.DOTALL).findall(data)
if len(matches_next_page) > 0:
next_page_url = urlparse.urljoin(item.url, matches_next_page[0])
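# Illustrative note (not in the original file): each scraped page is served as
# two virtual pages of max_items (20). The first pass keeps matches[:20] and
# re-points the "Siguiente" entry at the same URL with next_page='b'; the 'b'
# pass serves matches[20:] and links the site's real next page with next_page='a'.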
for scrapedthumbnail, scrapedtitle, scrapedurl, scrapedyear in matches:
if item.extra == 'recomendadas':
url = scrapedthumbnail
title = scrapedurl
thumbnail = scrapedtitle
else:
url = scrapedurl
thumbnail = scrapedthumbnail
title = scrapedtitle
year = scrapedyear
fanart = ''
plot = ''
if 'serie' not in url:
itemlist.append(
Item(channel=item.channel,
action='findvideos',
title=title,
url=url,
thumbnail=thumbnail,
plot=plot,
fanart=fanart,
contentTitle=title,
infoLabels={'year': year},
context=autoplay.context
))
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
# Pagination
if next_page_url != '':
itemlist.append(
Item(channel=item.channel,
action="lista",
title='Siguiente >>>',
url=next_page_url,
thumbnail='https://s16.postimg.org/9okdu7hhx/siguiente.png',
extra=item.extra,
next_page=next_page
))
return itemlist
def seccion(item):
logger.info()
itemlist = []
duplicado = []
data = httptools.downloadpage(item.url).data
if item.extra == 'generos':
data = re.sub(r'"|\n|\r|\t|&nbsp;|<br>|\s{2,}', "", data)
accion = 'lista'
if item.extra == 'masvistas':
patron = '<b>\d*<\/b>\s*<a href="(.*?)">(.*?<\/a>\s*<span>.*?<\/span>\s*<i>.*?<\/i><\/li>)'
accion = 'findvideos'
elif item.extra == 'poraño':
patron = '<li><a class="ito" HREF="(.*?)">(.*?)<\/a><\/li>'
else:
patron = '<li class=cat-item cat-item-.*?><a href=(.*?)>(.*?)<\/i>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
url = scrapedurl
title = scrapedtitle
thumbnail = ''
fanart = ''
plot = ''
year = ''
contentTitle = ''
if item.extra == 'masvistas':
year = re.findall(r'\b\d{4}\b', scrapedtitle)
title = re.sub(r'<\/a>\s*<span>.*?<\/span>\s*<i>.*?<\/i><\/li>', '', scrapedtitle)
contentTitle = title
title = title + ' (' + year[0] + ')'
elif item.extra == 'generos':
title = re.sub(r'<\/a> <i>\d+', '', scrapedtitle)
cantidad = re.findall(r'.*?<\/a> <i>(\d+)', scrapedtitle)
th_title = title
title = title + ' (' + cantidad[0] + ')'
thumbnail = tgenero[th_title]
fanart = thumbnail
if url not in duplicado:
itemlist.append(
Item(channel=item.channel,
action=accion,
title=title,
url=url,
thumbnail=thumbnail,
plot=plot,
fanart=fanart,
contentTitle=contentTitle,
infoLabels={'year': year}
))
duplicado.append(url)
tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True)
return itemlist
def unpack(packed):
p, c, k = re.search("}\('(.*)', *\d+, *(\d+), *'(.*)'\.", packed, re.DOTALL).groups()
for c in reversed(range(int(c))):
if k.split('|')[c]: p = re.sub(r'(\b%s\b)' % c, k.split('|')[c], p)
p = p.replace('\\', '')
p = p.decode('string_escape')
return p
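# Usage sketch (illustrative, not part of the original file): `packed` is the
# scraped "eval(function(p,a,c,k,e,d){...}('0 1', 2, 2, 'var|x'.split('|')))"
# blob; unpack() swaps each numeric token for its word in the k-list, so this
# toy call returns "var x":
#   unpack("}('0 1', 2, 2, 'var|x'.split('|')))")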
def getinfo(page_url):
info = ()
logger.info()
data = httptools.downloadpage(page_url).data
thumbnail = scrapertools.find_single_match(data, '<div class="cover" style="background-image: url\((.*?)\);')
plot = scrapertools.find_single_match(data, '<h2>Synopsis<\/h2>\s*<p>(.*?)<\/p>')
info = (plot, thumbnail)
return info
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = item.url + texto
if texto != '':
return lista(item)
def newest(categoria):
logger.info()
itemlist = []
item = Item()
# categoria='peliculas'
try:
if categoria == 'peliculas':
item.url = host
elif categoria == 'infantiles':
item.url = host + 'category/animacion/'
itemlist = lista(item)
if itemlist[-1].title == 'Siguiente >>>':
itemlist.pop()
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
return itemlist
def get_url(item):
logger.info()
itemlist = []
duplicado = []
patrones = ["{'label':(.*?),.*?'file':'(.*?)'}", "{file:'(.*?redirector.*?),label:'(.*?)'}"]
data = httptools.downloadpage(item.url, headers=headers, cookies=False).data
patron = 'class="player-content"><iframe src="(.*?)"'
matches = re.compile(patron, re.DOTALL).findall(data)
for option in matches:
if 'allplayer' in option:
url = 'http:/' + option.replace('//', '/')
data = httptools.downloadpage(url, headers=headers, cookies=False).data
packed = scrapertools.find_single_match(data, "<div id='allplayer'>.*?(eval\(function\(p,a,c,k.*?\)\)\))")
if packed:
unpacked = unpack(packed)
video_urls = []
if "vimeocdn" in unpacked:
streams = scrapertools.find_multiple_matches(unpacked,
"{file:'(.*?)',type:'video/.*?',label:'(.*?)'")
for video_url, quality in streams:
video_urls.append([video_url, quality])
else:
doc_id = scrapertools.find_single_match(unpacked, 'driveid=(.*?)&')
doc_url = "http://docs.google.com/get_video_info?docid=%s" % doc_id
response = httptools.downloadpage(doc_url, cookies=False)
cookies = ""
cookie = response.headers["set-cookie"].split("HttpOnly, ")
for c in cookie:
cookies += c.split(";", 1)[0] + "; "
data = response.data.decode('unicode-escape')
data = urllib.unquote_plus(urllib.unquote_plus(data))
headers_string = "|Cookie=" + cookies
url_streams = scrapertools.find_single_match(data, 'url_encoded_fmt_stream_map=(.*)')
streams = scrapertools.find_multiple_matches(url_streams,
'itag=(\d+)&url=(.*?)(?:;.*?quality=.*?(?:,|&)|&quality=.*?(?:,|&))')
itags = {'18': '360p', '22': '720p', '34': '360p', '35': '480p', '37': '1080p', '59': '480p'}
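# Illustrative note (not in the original file): these itags are Google's video
# format codes (e.g. 22 = 720p MP4), so each Google Docs stream gets labelled
# with one of the resolutions defined in CALIDADES above.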
for itag, video_url in streams:
video_url += headers_string
video_urls.append([video_url, itags[itag]])
for video_item in video_urls:
calidad = video_item[1]
title = '%s [%s]' % (item.contentTitle, calidad)
url = video_item[0]
if url not in duplicado:
itemlist.append(
Item(channel=item.channel,
action='play',
title=title,
url=url,
thumbnail=item.thumbnail,
plot=item.plot,
fanart=item.fanart,
contentTitle=item.contentTitle,
language=IDIOMAS['latino'],
server='directo',
quality=CALIDADES[calidad],
context=item.context
))
duplicado.append(url)
else:
itemlist.extend(servertools.find_video_items(data=option))
for videoitem in itemlist:
if 'Enlace' in videoitem.title:
videoitem.channel = item.channel
videoitem.title = item.contentTitle + ' (' + videoitem.server + ')'
videoitem.language = 'latino'
videoitem.quality = 'default'
return itemlist
def findvideos(item):
logger.info()
itemlist = []
itemlist = get_url(item)
# Required by FilterTools
itemlist = filtertools.get_links(itemlist, item, list_language)
# Required by AutoPlay
autoplay.start(itemlist, item)
if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos':
itemlist.append(
Item(channel=item.channel,
title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]',
url=item.url,
action="add_pelicula_to_library",
extra="findvideos",
contentTitle=item.contentTitle,
))
return itemlist

33
kodi/channels/doramastv.json Executable file
View File

@@ -0,0 +1,33 @@
{
"id": "doramastv",
"name": "DoramasTV",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "doramastv.png",
"banner": "doramastv.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"tvshow"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

185
kodi/channels/doramastv.py Executable file
View File

@@ -0,0 +1,185 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import logger
from core import scrapertools
from core.item import Item
host = "http://doramastv.com/"
DEFAULT_HEADERS = []
DEFAULT_HEADERS.append(
["User-Agent", "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; es-ES; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12"])
def mainlist(item):
logger.info()
itemlist = list([])
itemlist.append(
Item(channel=item.channel, action="pagina_", title="En emision", url=urlparse.urljoin(host, "drama/emision")))
itemlist.append(Item(channel=item.channel, action="letras", title="Listado alfabetico",
url=urlparse.urljoin(host, "lista-numeros")))
itemlist.append(
Item(channel=item.channel, action="generos", title="Generos", url=urlparse.urljoin(host, "genero/accion")))
itemlist.append(Item(channel=item.channel, action="pagina_", title="Ultimos agregados",
url=urlparse.urljoin(host, "dramas/ultimos")))
itemlist.append(Item(channel=item.channel, action="search", title="Buscar",
url=urlparse.urljoin(host, "buscar/anime/ajax/?title=")))
return itemlist
def letras(item):
logger.info()
itemlist = []
headers = DEFAULT_HEADERS[:]
data = scrapertools.cache_page(item.url, headers=headers)
patron = ' <a href="(\/lista-.+?)">(.+?)<'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
title = scrapertools.entityunescape(scrapedtitle)
url = urlparse.urljoin(host, scrapedurl)
thumbnail = ""
plot = ""
logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(title, url, thumbnail))
itemlist.append(
Item(channel=item.channel, action="pagina_", title=title, url=url, thumbnail=thumbnail, plot=plot))
return itemlist
def pagina_(item):
logger.info()
itemlist = []
headers = DEFAULT_HEADERS[:]
data = scrapertools.cache_page(item.url, headers=headers)
data1 = scrapertools.get_match(data, '<div class="animes-bot">(.+?)<!-- fin -->')
data1 = data1.replace('\n', '')
data1 = data1.replace('\r', '')
patron = 'href="(\/drama.+?)".+?<\/div>(.+?)<\/div>.+?src="(.+?)".+?titulo">(.+?)<'
matches = re.compile(patron, re.DOTALL).findall(data1)
for scrapedurl, scrapedplot, scrapedthumbnail, scrapedtitle in matches:
title = scrapertools.unescape(scrapedtitle).strip()
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = urlparse.urljoin(host, scrapedthumbnail)
plot = scrapertools.decodeHtmlentities(scrapedplot)
itemlist.append(
Item(channel=item.channel, action="episodios", title=title, url=url, thumbnail=thumbnail, plot=plot,
show=title))
patron = 'href="([^"]+)" class="next"'
matches = re.compile(patron, re.DOTALL).findall(data)
for match in matches:
if len(matches) > 0:
scrapedurl = urlparse.urljoin(item.url, match)
scrapedtitle = "Pagina Siguiente >>"
scrapedthumbnail = ""
scrapedplot = ""
itemlist.append(Item(channel=item.channel, action="pagina_", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapedplot, folder=True))
return itemlist
def episodios(item):
logger.info()
itemlist = []
headers = DEFAULT_HEADERS[:]
data = scrapertools.cache_page(item.url, headers=headers)
data = data.replace('\n', '')
data = data.replace('\r', '')
data1 = scrapertools.get_match(data, '<ul id="lcholder">(.+?)</ul>')
patron = '<a href="(.+?)".+?>(.+?)<'
matches = re.compile(patron, re.DOTALL).findall(data1)
for scrapedurl, scrapedtitle in matches:
title = scrapertools.htmlclean(scrapedtitle).strip()
thumbnail = ""
plot = ""
url = urlparse.urljoin(item.url, scrapedurl)
show = item.show
itemlist.append(
Item(channel=item.channel, action="findvideos", title=title, url=url, thumbnail=thumbnail, plot=plot,
fulltitle=title, show=show))
return itemlist
def findvideos(item):
logger.info()
headers = DEFAULT_HEADERS[:]
data = scrapertools.cache_page(item.url, headers=headers)
data = data.replace('\n', '')
data = data.replace('\r', '')
patron = '<iframe src="(.+?)"'
matches = re.compile(patron, re.DOTALL).findall(data)
data1 = ''
for match in matches:
data1 += match + '\n'
data = data1
data = data.replace('%26', '&')
data = data.replace('http://ozhe.larata.in/repro-d/mvk?v=', 'http://vk.com/video_ext.php?oid=')
data = data.replace('http://ozhe.larata.in/repro-d/send?v=', 'http://sendvid.com/embed/')
data = data.replace('http://ozhe.larata.in/repro-d/msend?v=', 'http://sendvid.com/embed/')
data = data.replace('http://ozhe.larata.in/repro-d/vidweed?v=', 'http://www.videoweed.es/file/')
data = data.replace('http://ozhe.larata.in/repro-d/nowv?v=', 'http://www.nowvideo.sx/video/')
data = data.replace('http://ozhe.larata.in/repro-d/nov?v=', 'http://www.novamov.com/video/')
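# Illustrative example (not in the original file): with these rewrites an embed
# such as "http://ozhe.larata.in/repro-d/send?v=abc123" becomes
# "http://sendvid.com/embed/abc123", a URL servertools can recognise.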
itemlist = []
from core import servertools
itemlist.extend(servertools.find_video_items(data=data))
for videoitem in itemlist:
videoitem.channel = item.channel
videoitem.folder = False
return itemlist
def generos(item):
logger.info()
itemlist = []
headers = DEFAULT_HEADERS[:]
data = scrapertools.cache_page(item.url, headers=headers)
data = data.replace('\n', '')
data = data.replace('\r', '')
data = scrapertools.get_match(data, '<!-- Lista de Generos -->(.+?)<\/div>')
patron = '<a href="(.+?)".+?>(.+?)<'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle in matches:
title = scrapertools.entityunescape(scrapedtitle)
url = urlparse.urljoin(host, scrapedurl)
thumbnail = ""
plot = ""
logger.debug("title=[{0}], url=[{1}], thumbnail=[{2}]".format(title, url, thumbnail))
itemlist.append(
Item(channel=item.channel, action="pagina_", title=title, url=url, thumbnail=thumbnail, plot=plot))
return itemlist
def search(item, texto):
logger.info()
item.url = urlparse.urljoin(host, item.url)
texto = texto.replace(" ", "+")
headers = DEFAULT_HEADERS[:]
data = scrapertools.cache_page(item.url + texto, headers=headers)
data = data.replace('\n', '')
data = data.replace('\r', '')
patron = '<a href="(.+?)".+?src="(.+?)".+?titulo">(.+?)<'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
title = scrapertools.unescape(scrapedtitle).strip()
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = urlparse.urljoin(host, scrapedthumbnail)
itemlist.append(
Item(channel=item.channel, action="episodios", title=title, url=url, thumbnail=thumbnail, plot="",
show=title))
return itemlist

189
kodi/channels/downloads.json Executable file
View File

@@ -0,0 +1,189 @@
{
"id": "downloads",
"name": "Descargas",
"active": false,
"adult": false,
"language": "es",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "12/03/17",
"description": "Añadidas mas opciones de configuracion y corregidos fallos"
},
{
"date": "12/01/17",
"description": "release"
}
],
"categories": [
"movie"
],
"settings": [
{
"type": "label",
"label": "Ubicacion de archivos",
"enabled": true,
"visible": true
},
{
"id": "library_add",
"type": "bool",
"label": " - Añadir descargas completadas a la videoteca",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "library_move",
"type": "bool",
"label": " - Mover el archivo descargado a la videoteca",
"default": true,
"enabled": "eq(-1,true)",
"visible": true
},
{
"id": "browser",
"type": "bool",
"label": " - Visualizar archivos descargados desde descargas",
"default": false,
"enabled": true,
"visible": true
},
{
"type": "label",
"label": "Descarga",
"enabled": true,
"visible": true
},
{
"id": "block_size",
"type": "list",
"label": " - Tamaño por bloque",
"lvalues": [
"128 KB",
"256 KB",
"512 KB",
"1 MB",
"2 MB"
],
"default": 1,
"enabled": true,
"visible": true
},
{
"id": "part_size",
"type": "list",
"label": " - Tamaño por parte",
"lvalues": [
"1 MB",
"2 MB",
"4 MB",
"8 MB",
"16 MB",
"32 MB"
],
"default": 1,
"enabled": true,
"visible": true
},
{
"id": "max_connections",
"type": "list",
"label": " - Numero máximo de conexiones simultaneas",
"lvalues": [
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"10"
],
"default": 4,
"enabled": true,
"visible": true
},
{
"id": "max_buffer",
"type": "list",
"label": " - Numero máximo de partes en memoria",
"lvalues": [
"0",
"2",
"4",
"6",
"8",
"10",
"12",
"14",
"16",
"18",
"20"
],
"default": 5,
"enabled": true,
"visible": true
},
{
"type": "label",
"label": "Elección del servidor",
"enabled": true,
"visible": true
},
{
"id": "server_reorder",
"type": "list",
"label": " - Orden de servidores",
"lvalues": [
"Mantener",
"Reordenar"
],
"default": 1,
"enabled": true,
"visible": true
},
{
"id": "language",
"type": "list",
"label": " - Idioma preferido",
"lvalues": [
"Esp, Lat, Sub, Eng, Vose",
"Esp, Sub, Lat, Eng, Vose",
"Eng, Sub, Vose, Esp, Lat",
"Vose, Eng, Sub, Esp, Lat"
],
"default": 0,
"enabled": "eq(-1,'Reordenar')",
"visible": true
},
{
"id": "quality",
"type": "list",
"label": " - Calidad preferida",
"lvalues": [
"La mas alta",
"HD 1080",
"HD 720",
"SD"
],
"default": 0,
"enabled": "eq(-2,'Reordenar')",
"visible": true
},
{
"id": "server_speed",
"type": "bool",
"label": " - Elegir los servidores mas rapidos",
"default": true,
"enabled": "eq(-3,'Reordenar')",
"visible": true
}
]
}

869
kodi/channels/downloads.py Executable file
View File

@@ -0,0 +1,869 @@
# -*- coding: utf-8 -*-
# ------------------------------------------------------------
# Download manager
# ------------------------------------------------------------
import os
import re
import time
from core import config
from core import filetools
from core import videolibrarytools
from core import logger
from core import scraper
from core import scrapertools
from core import servertools
from core.downloader import Downloader
from core.item import Item
from platformcode import platformtools
STATUS_COLORS = {0: "orange", 1: "orange", 2: "green", 3: "red"}
STATUS_CODES = type("StatusCode", (), {"stoped": 0, "canceled": 1, "completed": 2, "error": 3})
DOWNLOAD_LIST_PATH = config.get_setting("downloadlistpath")
DOWNLOAD_PATH = config.get_setting("downloadpath")
STATS_FILE = os.path.join(config.get_data_path(), "servers.json")
TITLE_FILE = "[COLOR %s][%i%%][/COLOR] %s"
TITLE_TVSHOW = "[COLOR %s][%i%%][/COLOR] %s [%s]"
def mainlist(item):
logger.info()
itemlist = []
# List of download files
for file in sorted(filetools.listdir(DOWNLOAD_LIST_PATH)):
# Skip anything that is not a JSON file
if not file.endswith(".json"): continue
# Load the item
file = os.path.join(DOWNLOAD_LIST_PATH, file)
i = Item(path=file).fromjson(filetools.read(file))
i.thumbnail = i.contentThumbnail
# Main listing
if not item.contentType == "tvshow":
# TV shows
if i.contentType == "episode":
# Check that the show is not already in the itemlist
if not filter(
lambda x: x.contentSerieName == i.contentSerieName and x.contentChannel == i.contentChannel,
itemlist):
title = TITLE_TVSHOW % (
STATUS_COLORS[i.downloadStatus], i.downloadProgress, i.contentSerieName, i.contentChannel)
itemlist.append(Item(title=title, channel="descargas", action="mainlist", contentType="tvshow",
contentSerieName=i.contentSerieName, contentChannel=i.contentChannel,
downloadStatus=i.downloadStatus, downloadProgress=[i.downloadProgress],
fanart=i.fanart, thumbnail=i.thumbnail))
else:
s = \
filter(lambda x: x.contentSerieName == i.contentSerieName and x.contentChannel == i.contentChannel,
itemlist)[0]
s.downloadProgress.append(i.downloadProgress)
downloadProgress = sum(s.downloadProgress) / len(s.downloadProgress)
if not s.downloadStatus in [STATUS_CODES.error, STATUS_CODES.canceled] and not i.downloadStatus in [
STATUS_CODES.completed, STATUS_CODES.stoped]:
s.downloadStatus = i.downloadStatus
s.title = TITLE_TVSHOW % (
STATUS_COLORS[s.downloadStatus], downloadProgress, i.contentSerieName, i.contentChannel)
# Movies
elif i.contentType == "movie" or i.contentType == "video":
i.title = TITLE_FILE % (STATUS_COLORS[i.downloadStatus], i.downloadProgress, i.contentTitle)
itemlist.append(i)
# Listing inside a TV show
else:
if i.contentType == "episode" and i.contentSerieName == item.contentSerieName and i.contentChannel == item.contentChannel:
i.title = TITLE_FILE % (STATUS_COLORS[i.downloadStatus], i.downloadProgress,
"%dx%0.2d: %s" % (i.contentSeason, i.contentEpisodeNumber, i.contentTitle))
itemlist.append(i)
estados = [i.downloadStatus for i in itemlist]
# If any download is completed
if 2 in estados:
itemlist.insert(0, Item(channel=item.channel, action="clean_ready", title="Eliminar descargas completadas",
contentType=item.contentType, contentChannel=item.contentChannel,
contentSerieName=item.contentSerieName, text_color="sandybrown"))
# If any download has an error
if 3 in estados:
itemlist.insert(0, Item(channel=item.channel, action="restart_error", title="Reiniciar descargas con error",
contentType=item.contentType, contentChannel=item.contentChannel,
contentSerieName=item.contentSerieName, text_color="orange"))
# If any download is pending
if 1 in estados or 0 in estados:
itemlist.insert(0, Item(channel=item.channel, action="download_all", title="Descargar todo",
contentType=item.contentType, contentChannel=item.contentChannel,
contentSerieName=item.contentSerieName, text_color="green"))
if len(itemlist):
itemlist.insert(0, Item(channel=item.channel, action="clean_all", title="Eliminar todo",
contentType=item.contentType, contentChannel=item.contentChannel,
contentSerieName=item.contentSerieName, text_color="red"))
if not item.contentType == "tvshow" and config.get_setting("browser", "downloads") == True:
itemlist.insert(0, Item(channel=item.channel, action="browser", title="Ver archivos descargados",
url=DOWNLOAD_PATH, text_color="yellow"))
if not item.contentType == "tvshow":
itemlist.insert(0, Item(channel=item.channel, action="settings", title="Configuración descargas...",
text_color="blue"))
return itemlist
def settings(item):
ret = platformtools.show_channel_settings(caption="configuración -- Descargas")
platformtools.itemlist_refresh()
return ret
def browser(item):
logger.info()
itemlist = []
for file in filetools.listdir(item.url):
if file == "list": continue
if filetools.isdir(filetools.join(item.url, file)):
itemlist.append(
Item(channel=item.channel, title=file, action=item.action, url=filetools.join(item.url, file)))
else:
itemlist.append(Item(channel=item.channel, title=file, action="play", url=filetools.join(item.url, file)))
return itemlist
def clean_all(item):
logger.info()
for fichero in sorted(filetools.listdir(DOWNLOAD_LIST_PATH)):
if fichero.endswith(".json"):
download_item = Item().fromjson(filetools.read(os.path.join(DOWNLOAD_LIST_PATH, fichero)))
if not item.contentType == "tvshow" or (
item.contentSerieName == download_item.contentSerieName and item.contentChannel == download_item.contentChannel):
filetools.remove(os.path.join(DOWNLOAD_LIST_PATH, fichero))
platformtools.itemlist_refresh()
def clean_ready(item):
logger.info()
for fichero in sorted(filetools.listdir(DOWNLOAD_LIST_PATH)):
if fichero.endswith(".json"):
download_item = Item().fromjson(filetools.read(os.path.join(DOWNLOAD_LIST_PATH, fichero)))
if not item.contentType == "tvshow" or (
item.contentSerieName == download_item.contentSerieName and item.contentChannel == download_item.contentChannel):
if download_item.downloadStatus == STATUS_CODES.completed:
filetools.remove(os.path.join(DOWNLOAD_LIST_PATH, fichero))
platformtools.itemlist_refresh()
def restart_error(item):
logger.info()
for fichero in sorted(filetools.listdir(DOWNLOAD_LIST_PATH)):
if fichero.endswith(".json"):
download_item = Item().fromjson(filetools.read(os.path.join(DOWNLOAD_LIST_PATH, fichero)))
if not item.contentType == "tvshow" or (
item.contentSerieName == download_item.contentSerieName and item.contentChannel == download_item.contentChannel):
if download_item.downloadStatus == STATUS_CODES.error:
if filetools.isfile(
os.path.join(config.get_setting("downloadpath"), download_item.downloadFilename)):
filetools.remove(
os.path.join(config.get_setting("downloadpath"), download_item.downloadFilename))
update_json(os.path.join(DOWNLOAD_LIST_PATH, fichero),
{"downloadStatus": STATUS_CODES.stoped, "downloadComplete": 0, "downloadProgress": 0})
platformtools.itemlist_refresh()
def download_all(item):
time.sleep(0.5)
for fichero in sorted(filetools.listdir(DOWNLOAD_LIST_PATH)):
if fichero.endswith(".json"):
download_item = Item(path=os.path.join(DOWNLOAD_LIST_PATH, fichero)).fromjson(
filetools.read(os.path.join(DOWNLOAD_LIST_PATH, fichero)))
if not item.contentType == "tvshow" or (
item.contentSerieName == download_item.contentSerieName and item.contentChannel == download_item.contentChannel):
if download_item.downloadStatus in [STATUS_CODES.stoped, STATUS_CODES.canceled]:
res = start_download(download_item)
platformtools.itemlist_refresh()
# Stop if the download was cancelled
if res == STATUS_CODES.canceled: break
def menu(item):
logger.info()
if item.downloadServer:
servidor = item.downloadServer.get("server", "Auto")
else:
servidor = "Auto"
# Options available for the menu
op = ["Descargar", "Eliminar de la lista", "Reiniciar descarga y eliminar datos",
"Modificar servidor: %s" % (servidor.capitalize())]
opciones = []
# Menu options
if item.downloadStatus == 0:  # not downloaded
opciones.append(op[0])  # Download
if not item.server: opciones.append(op[3])  # Choose server
opciones.append(op[1])  # Remove from the list
if item.downloadStatus == 1:  # partial download
opciones.append(op[0])  # Download
if not item.server: opciones.append(op[3])  # Choose server
opciones.append(op[2])  # Restart download
opciones.append(op[1])  # Remove from the list
if item.downloadStatus == 2:  # download completed
opciones.append(op[1])  # Remove from the list
opciones.append(op[2])  # Restart download
if item.downloadStatus == 3:  # download error
opciones.append(op[2])  # Restart download
opciones.append(op[1])  # Remove from the list
# Show the dialog
seleccion = platformtools.dialog_select("Elige una opción", opciones)
# -1 means cancel
if seleccion == -1: return
logger.info("opcion=%s" % (opciones[seleccion]))
# Remove option
if opciones[seleccion] == op[1]:
filetools.remove(item.path)
# Start-download option
if opciones[seleccion] == op[0]:
start_download(item)
# Choose server
if opciones[seleccion] == op[3]:
select_server(item)
# Restart download
if opciones[seleccion] == op[2]:
if filetools.isfile(os.path.join(config.get_setting("downloadpath"), item.downloadFilename)):
filetools.remove(os.path.join(config.get_setting("downloadpath"), item.downloadFilename))
update_json(item.path, {"downloadStatus": STATUS_CODES.stoped, "downloadComplete": 0, "downloadProgress": 0,
"downloadServer": {}})
platformtools.itemlist_refresh()
def move_to_libray(item):
download_path = filetools.join(config.get_setting("downloadpath"), item.downloadFilename)
library_path = filetools.join(config.get_videolibrary_path(), *filetools.split(item.downloadFilename))
final_path = download_path
if config.get_setting("library_add", "downloads") == True and config.get_setting("library_move",
"downloads") == True:
if not filetools.isdir(filetools.dirname(library_path)):
filetools.mkdir(filetools.dirname(library_path))
if filetools.isfile(library_path) and filetools.isfile(download_path):
filetools.remove(library_path)
if filetools.isfile(download_path):
if filetools.move(download_path, library_path):
final_path = library_path
if len(filetools.listdir(filetools.dirname(download_path))) == 0:
filetools.rmdir(filetools.dirname(download_path))
if config.get_setting("library_add", "downloads") == True:
if filetools.isfile(final_path):
if item.contentType == "movie" and item.infoLabels["tmdb_id"]:
library_item = Item(title="Descargado: %s" % item.downloadFilename, channel="downloads",
action="findvideos", infoLabels=item.infoLabels, url=final_path)
videolibrarytools.save_movie(library_item)
elif item.contentType == "episode" and item.infoLabels["tmdb_id"]:
library_item = Item(title="Descargado: %s" % item.downloadFilename, channel="downloads",
action="findvideos", infoLabels=item.infoLabels, url=final_path)
tvshow = Item(channel="downloads", contentType="tvshow",
infoLabels={"tmdb_id": item.infoLabels["tmdb_id"]})
videolibrarytools.save_tvshow(tvshow, [library_item])
def update_json(path, params):
item = Item().fromjson(filetools.read(path))
item.__dict__.update(params)
filetools.write(path, item.tojson())
def save_server_statistics(server, speed, success):
from core import jsontools
if os.path.isfile(STATS_FILE):
servers = jsontools.load(open(STATS_FILE, "rb").read())
else:
servers = {}
if not server in servers:
servers[server] = {"success": [], "count": 0, "speeds": [], "last": 0}
servers[server]["count"] += 1
servers[server]["success"].append(bool(success))
servers[server]["success"] = servers[server]["success"][-5:]
servers[server]["last"] = time.time()
if success:
servers[server]["speeds"].append(speed)
servers[server]["speeds"] = servers[server]["speeds"][-5:]
open(STATS_FILE, "wb").write(jsontools.dump(servers))
return
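# Illustrative shape of STATS_FILE after a few downloads (made-up server name
# and values, not part of the original file):
# {"someserver": {"success": [true, true, false], "count": 12,
#                 "speeds": [524288, 380000], "last": 1498253025.3}}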
def get_server_position(server):
from core import jsontools
if os.path.isfile(STATS_FILE):
servers = jsontools.load(open(STATS_FILE, "rb").read())
else:
servers = {}
if server in servers:
pos = [s for s in sorted(servers, key=lambda x: (sum(servers[x]["speeds"]) / (len(servers[x]["speeds"]) or 1),
float(sum(servers[x]["success"])) / (
len(servers[x]["success"]) or 1)), reverse=True)]
return pos.index(server) + 1
else:
return 0
def get_match_list(data, match_list, order_list=None, only_ascii=False, ignorecase=False):
"""
Searches a text string for matches against a dictionary of "ID" / "list of search strings":
{ "ID1" : ["Cadena 1", "Cadena 2", "Cadena 3"],
"ID2" : ["Cadena 4", "Cadena 5", "Cadena 6"]
}
The dictionary must not contain the same search string under more than one ID.
The search runs in order of search-string length (longest first); when a string matches, it is removed
from the text before the remaining searches, so two categories are not detected when one string is part of another:
for example, with "Idioma Español" and "Español", if the former appears in "Pablo sabe hablar el Idioma Español"
it matches "Idioma Español" but not "Español", since the longest match takes priority.
"""
import unicodedata
match_dict = dict()
matches = []
# Convert the string to unicode
data = unicode(data, "utf8")
# Flatten the dictionary to {"Cadena 1": "ID1", "Cadena 2": "ID1", "Cadena 4": "ID2"} and convert it to unicode
for key in match_list:
if order_list and not key in order_list:
raise Exception("key '%s' not in match_list" % key)
for value in match_list[key]:
if value in match_dict:
raise Exception("Duplicate word in list: '%s'" % value)
match_dict[unicode(value, "utf8")] = key
# If ignorecase = True, uppercase everything
if ignorecase:
data = data.upper()
match_dict = dict((key.upper(), match_dict[key]) for key in match_dict)
# If only_ascii = True, strip all accents and Ñ
if only_ascii:
data = ''.join((c for c in unicodedata.normalize('NFD', data) if unicodedata.category(c) != 'Mn'))
match_dict = dict((''.join((c for c in unicodedata.normalize('NFD', key) if unicodedata.category(c) != 'Mn')),
match_dict[key]) for key in match_dict)
# Sort the list from longest to shortest and search.
for match in sorted(match_dict, key=lambda x: len(x), reverse=True):
s = data
for a in matches:
s = s.replace(a, "")
if match in s:
matches.append(match)
if matches:
if order_list:
return type("Mtch_list", (),
{"key": match_dict[matches[-1]], "index": order_list.index(match_dict[matches[-1]])})
else:
return type("Mtch_list", (), {"key": match_dict[matches[-1]], "index": None})
else:
if order_list:
return type("Mtch_list", (), {"key": None, "index": len(order_list)})
else:
return type("Mtch_list", (), {"key": None, "index": None})
def sort_method(item):
"""
Scores each item according to several parameters:
@type item: item
@param item: element to be scored.
@return: score obtained
@rtype: int
"""
lang_orders = {}
lang_orders[0] = ["ES", "LAT", "SUB", "ENG", "VOSE"]
lang_orders[1] = ["ES", "SUB", "LAT", "ENG", "VOSE"]
lang_orders[2] = ["ENG", "SUB", "VOSE", "ESP", "LAT"]
lang_orders[3] = ["VOSE", "ENG", "SUB", "ESP", "LAT"]
quality_orders = {}
quality_orders[0] = ["BLURAY", "FULLHD", "HD", "480P", "360P", "240P"]
quality_orders[1] = ["FULLHD", "HD", "480P", "360P", "240P", "BLURAY"]
quality_orders[2] = ["HD", "480P", "360P", "240P", "FULLHD", "BLURAY"]
quality_orders[3] = ["480P", "360P", "240P", "BLURAY", "FULLHD", "HD"]
order_list_idiomas = lang_orders[int(config.get_setting("language", "downloads"))]
match_list_idimas = {"ES": ["CAST", "ESP", "Castellano", "Español", "Audio Español"],
"LAT": ["LAT", "Latino"],
"SUB": ["Subtitulo Español", "Subtitulado", "SUB"],
"ENG": ["EN", "ENG", "Inglés", "Ingles", "English"],
"VOSE": ["VOSE"]}
order_list_calidad = ["BLURAY", "FULLHD", "HD", "480P", "360P", "240P"]
order_list_calidad = quality_orders[int(config.get_setting("quality", "downloads"))]
match_list_calidad = {"BLURAY": ["BR", "BLURAY"],
"FULLHD": ["FULLHD", "FULL HD", "1080", "HD1080", "HD 1080"],
"HD": ["HD", "HD REAL", "HD 720", "720", "HDTV"],
"480P": ["SD", "480P"],
"360P": ["360P"],
"240P": ["240P"]}
value = (get_match_list(item.title, match_list_idimas, order_list_idiomas, ignorecase=True, only_ascii=True).index, \
get_match_list(item.title, match_list_calidad, order_list_calidad, ignorecase=True, only_ascii=True).index)
if config.get_setting("server_speed", "downloads"):
value += tuple([get_server_position(item.server)])
return value
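# Illustrative note (not in the original file): with the default settings the
# returned tuples sort lexicographically, so a link titled "Pelicula Español HD"
# scores (0, 2) and is tried before "Pelicula Latino 1080", which scores (1, 1).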
def download_from_url(url, item):
logger.info("Intentando descargar: %s" % (url))
if url.lower().endswith(".m3u8") or url.lower().startswith("rtmp"):
save_server_statistics(item.server, 0, False)
return {"downloadStatus": STATUS_CODES.error}
# Get the download path and file name
download_path = filetools.dirname(filetools.join(DOWNLOAD_PATH, item.downloadFilename))
file_name = filetools.basename(filetools.join(DOWNLOAD_PATH, item.downloadFilename))
# Create the folder if it does not exist
if not filetools.exists(download_path):
filetools.mkdir(download_path)
# Launch the download
d = Downloader(url, download_path, file_name,
max_connections=1 + int(config.get_setting("max_connections", "downloads")),
block_size=2 ** (17 + int(config.get_setting("block_size", "downloads"))),
part_size=2 ** (20 + int(config.get_setting("part_size", "downloads"))),
max_buffer=2 * int(config.get_setting("max_buffer", "downloads")))
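# Illustrative mapping (not in the original file): downloads.json stores list
# indexes, so the default block_size index 1 -> 2**(17+1) = 256 KB and the
# default part_size index 1 -> 2**(20+1) = 2 MB, matching the labels shown in
# the settings dialog.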
d.start_dialog("Descargas")
# Download stopped. Get the state:
# An error occurred during the download
if d.state == d.states.error:
logger.info("Error al intentar descargar %s" % (url))
status = STATUS_CODES.error
# The download was stopped
elif d.state == d.states.stopped:
logger.info("Descarga detenida")
status = STATUS_CODES.canceled
# The download has finished
elif d.state == d.states.completed:
logger.info("Descargado correctamente")
status = STATUS_CODES.completed
if item.downloadSize and item.downloadSize != d.size[0]:
status = STATUS_CODES.error
save_server_statistics(item.server, d.speed[0], d.state != d.states.error)
dir = os.path.dirname(item.downloadFilename)
file = filetools.join(dir, d.filename)
if status == STATUS_CODES.completed:
move_to_libray(item.clone(downloadFilename=file))
return {"downloadUrl": d.download_url, "downloadStatus": status, "downloadSize": d.size[0],
"downloadProgress": d.progress, "downloadCompleted": d.downloaded[0], "downloadFilename": file}
def download_from_server(item):
logger.info(item.tostring())
unsupported_servers = ["torrent"]
progreso = platformtools.dialog_progress("Descargas", "Probando con: %s" % item.server)
channel = __import__('channels.%s' % item.contentChannel, None, None, ["channels.%s" % item.contentChannel])
if hasattr(channel, "play") and not item.play_menu:
progreso.update(50, "Probando con: %s" % item.server, "Conectando con %s..." % item.contentChannel)
try:
itemlist = getattr(channel, "play")(item.clone(channel=item.contentChannel, action=item.contentAction))
except:
logger.error("Error en el canal %s" % item.contentChannel)
else:
if len(itemlist) and isinstance(itemlist[0], Item):
download_item = item.clone(**itemlist[0].__dict__)
download_item.contentAction = download_item.action
download_item.infoLabels = item.infoLabels
item = download_item
elif len(itemlist) and isinstance(itemlist[0], list):
item.video_urls = itemlist
if not item.server: item.server = "directo"
else:
logger.info("No hay nada que reproducir")
return {"downloadStatus": STATUS_CODES.error}
progreso.close()
logger.info("contentAction: %s | contentChannel: %s | server: %s | url: %s" % (
item.contentAction, item.contentChannel, item.server, item.url))
if not item.server or not item.url or not item.contentAction == "play" or item.server in unsupported_servers:
logger.error("El Item no contiene los parametros necesarios.")
return {"downloadStatus": STATUS_CODES.error}
if not item.video_urls:
video_urls, puedes, motivo = servertools.resolve_video_urls_for_playing(item.server, item.url, item.password,
True)
else:
video_urls, puedes, motivo = item.video_urls, True, ""
# If the video is not available, bail out
if not puedes:
logger.info("El vídeo **NO** está disponible")
return {"downloadStatus": STATUS_CODES.error}
else:
logger.info("El vídeo **SI** está disponible")
result = {}
# Loop over every option until one downloads correctly
for video_url in reversed(video_urls):
result = download_from_url(video_url[1], item)
if result["downloadStatus"] in [STATUS_CODES.canceled, STATUS_CODES.completed]:
break
# Download error, try the next option
if result["downloadStatus"] == STATUS_CODES.error:
continue
# Return the state
return result
def download_from_best_server(item):
logger.info(
"contentAction: %s | contentChannel: %s | url: %s" % (item.contentAction, item.contentChannel, item.url))
result = {"downloadStatus": STATUS_CODES.error}
progreso = platformtools.dialog_progress("Descargas", "Obteniendo lista de servidores disponibles...")
channel = __import__('channels.%s' % item.contentChannel, None, None, ["channels.%s" % item.contentChannel])
progreso.update(50, "Obteniendo lista de servidores disponibles:", "Conectando con %s..." % item.contentChannel)
if hasattr(channel, item.contentAction):
play_items = getattr(channel, item.contentAction)(
item.clone(action=item.contentAction, channel=item.contentChannel))
else:
play_items = servertools.find_video_items(item.clone(action=item.contentAction, channel=item.contentChannel))
play_items = filter(lambda x: x.action == "play" and not "trailer" in x.title.lower(), play_items)
progreso.update(100, "Obteniendo lista de servidores disponibles", "Servidores disponibles: %s" % len(play_items),
"Identificando servidores...")
if config.get_setting("server_reorder", "downloads") == 1:
play_items.sort(key=sort_method)
if progreso.iscanceled():
return {"downloadStatus": STATUS_CODES.canceled}
progreso.close()
# Iterate over the server list until one works
for play_item in play_items:
play_item = item.clone(**play_item.__dict__)
play_item.contentAction = play_item.action
play_item.infoLabels = item.infoLabels
result = download_from_server(play_item)
if progreso.iscanceled():
result["downloadStatus"] = STATUS_CODES.canceled
# Whether the download was cancelled or completed, stop trying further options
if result["downloadStatus"] in [STATUS_CODES.canceled, STATUS_CODES.completed]:
result["downloadServer"] = {"url": play_item.url, "server": play_item.server}
break
return result
def select_server(item):
logger.info(
"contentAction: %s | contentChannel: %s | url: %s" % (item.contentAction, item.contentChannel, item.url))
progreso = platformtools.dialog_progress("Descargas", "Obteniendo lista de servidores disponibles...")
channel = __import__('channels.%s' % item.contentChannel, None, None, ["channels.%s" % item.contentChannel])
progreso.update(50, "Obteniendo lista de servidores disponibles:", "Conectando con %s..." % item.contentChannel)
if hasattr(channel, item.contentAction):
play_items = getattr(channel, item.contentAction)(
item.clone(action=item.contentAction, channel=item.contentChannel))
else:
play_items = servertools.find_video_items(item.clone(action=item.contentAction, channel=item.contentChannel))
play_items = filter(lambda x: x.action == "play" and not "trailer" in x.title.lower(), play_items)
progreso.update(100, "Obteniendo lista de servidores disponibles", "Servidores disponibles: %s" % len(play_items),
"Identificando servidores...")
for x, i in enumerate(play_items):
if not i.server and hasattr(channel, "play"):
play_items[x] = getattr(channel, "play")(i)
seleccion = platformtools.dialog_select("Selecciona el servidor", ["Auto"] + [s.title for s in play_items])
if seleccion > 0:
update_json(item.path, {
"downloadServer": {"url": play_items[seleccion - 1].url, "server": play_items[seleccion - 1].server}})
elif seleccion == 0:
update_json(item.path, {"downloadServer": {}})
platformtools.itemlist_refresh()
def start_download(item):
logger.info(
"contentAction: %s | contentChannel: %s | url: %s" % (item.contentAction, item.contentChannel, item.url))
# We already have a server; just download
if item.contentAction == "play":
ret = download_from_server(item)
update_json(item.path, ret)
return ret["downloadStatus"]
elif item.downloadServer and item.downloadServer.get("server"):
ret = download_from_server(
item.clone(server=item.downloadServer.get("server"), url=item.downloadServer.get("url"),
contentAction="play"))
update_json(item.path, ret)
return ret["downloadStatus"]
# We have no server; we need to find the best one
else:
ret = download_from_best_server(item)
update_json(item.path, ret)
return ret["downloadStatus"]
def get_episodes(item):
logger.info("contentAction: %s | contentChannel: %s | contentType: %s" % (
item.contentAction, item.contentChannel, item.contentType))
# The item we want to download is ALREADY an episode
if item.contentType == "episode":
episodes = [item.clone()]
# The item is a TV show or a season
elif item.contentType in ["tvshow", "season"]:
# Import the channel
channel = __import__('channels.%s' % item.contentChannel, None, None, ["channels.%s" % item.contentChannel])
# Get the list of episodes
episodes = getattr(channel, item.contentAction)(item)
itemlist = []
# We have the list; now verify each entry
for episode in episodes:
# If we started from an item that was already an episode, this data is already correct; do not modify it
if item.contentType != "episode":
episode.contentAction = episode.action
episode.contentChannel = episode.channel
# If the result is a season it is not usable as-is; we must download the episodes of each season
if episode.contentType == "season":
itemlist.extend(get_episodes(episode))
# If the result is an episode it is exactly what we need; prepare it to be added to the download list
if episode.contentType == "episode":
# Pass the id down to the episode
if not episode.infoLabels["tmdb_id"]:
episode.infoLabels["tmdb_id"] = item.infoLabels["tmdb_id"]
# Episode, season and title
if not episode.contentSeason or not episode.contentEpisodeNumber:
season_and_episode = scrapertools.get_season_and_episode(episode.title)
if season_and_episode:
episode.contentSeason = season_and_episode.split("x")[0]
episode.contentEpisodeNumber = season_and_episode.split("x")[1]
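# Illustrative example (not in the original file): for a scraped title like
# "1x02 - Piloto", get_season_and_episode() is expected to return "1x02",
# which is split into contentSeason=1 and contentEpisodeNumber=02.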
# Look it up on tmdb
if item.infoLabels["tmdb_id"]:
scraper.find_and_set_infoLabels(episode)
# Episode, season and title
if not episode.contentTitle:
episode.contentTitle = re.sub("\[[^\]]+\]|\([^\)]+\)|\d*x\d*\s*-", "", episode.title).strip()
episode.downloadFilename = filetools.validate_path(os.path.join(item.downloadFilename, "%dx%0.2d - %s" % (
episode.contentSeason, episode.contentEpisodeNumber, episode.contentTitle.strip())))
itemlist.append(episode)
# Any other result is not usable; ignore it
else:
logger.info("Omitiendo item no válido: %s" % episode.tostring())
return itemlist
def write_json(item):
logger.info()
item.action = "menu"
item.channel = "downloads"
item.downloadStatus = STATUS_CODES.stoped
item.downloadProgress = 0
item.downloadSize = 0
item.downloadCompleted = 0
if not item.contentThumbnail:
item.contentThumbnail = item.thumbnail
for name in ["text_bold", "text_color", "text_italic", "context", "totalItems", "viewmode", "title", "fulltitle",
"thumbnail"]:
if item.__dict__.has_key(name):
item.__dict__.pop(name)
path = os.path.join(config.get_setting("downloadlistpath"), str(time.time()) + ".json")
filetools.write(path, item.tojson())
item.path = path
time.sleep(0.1)
def save_download(item):
logger.info()
# Context menu
if item.from_action and item.from_channel:
item.channel = item.from_channel
item.action = item.from_action
del item.from_action
del item.from_channel
item.contentChannel = item.channel
item.contentAction = item.action
if item.contentType in ["tvshow", "episode", "season"]:
save_download_tvshow(item)
elif item.contentType == "movie":
save_download_movie(item)
else:
save_download_video(item)
def save_download_video(item):
logger.info("contentAction: %s | contentChannel: %s | contentTitle: %s" % (
item.contentAction, item.contentChannel, item.contentTitle))
set_movie_title(item)
item.downloadFilename = filetools.validate_path("%s [%s]" % (item.contentTitle.strip(), item.contentChannel))
write_json(item)
if not platformtools.dialog_yesno(config.get_localized_string(30101), "¿Iniciar la descarga ahora?"):
platformtools.dialog_ok(config.get_localized_string(30101), item.contentTitle,
config.get_localized_string(30109))
else:
start_download(item)
def save_download_movie(item):
logger.info("contentAction: %s | contentChannel: %s | contentTitle: %s" % (
item.contentAction, item.contentChannel, item.contentTitle))
progreso = platformtools.dialog_progress("Descargas", "Obteniendo datos de la pelicula")
set_movie_title(item)
result = scraper.find_and_set_infoLabels(item)
if not result:
progreso.close()
return save_download_video(item)
progreso.update(0, "Añadiendo pelicula...")
item.downloadFilename = filetools.validate_path("%s [%s]" % (item.contentTitle.strip(), item.contentChannel))
write_json(item)
progreso.close()
if not platformtools.dialog_yesno(config.get_localized_string(30101), "¿Iniciar la descarga ahora?"):
platformtools.dialog_ok(config.get_localized_string(30101), item.contentTitle,
config.get_localized_string(30109))
else:
start_download(item)
def save_download_tvshow(item):
logger.info("contentAction: %s | contentChannel: %s | contentType: %s | contentSerieName: %s" % (
item.contentAction, item.contentChannel, item.contentType, item.contentSerieName))
progreso = platformtools.dialog_progress("Descargas", "Obteniendo datos de la serie")
scraper.find_and_set_infoLabels(item)
item.downloadFilename = filetools.validate_path("%s [%s]" % (item.contentSerieName, item.contentChannel))
progreso.update(0, "Obteniendo episodios...", "conectando con %s..." % item.contentChannel)
episodes = get_episodes(item)
progreso.update(0, "Añadiendo capitulos...", " ")
for x, i in enumerate(episodes):
progreso.update(x * 100 / len(episodes),
"%dx%0.2d: %s" % (i.contentSeason, i.contentEpisodeNumber, i.contentTitle))
write_json(i)
progreso.close()
if not platformtools.dialog_yesno(config.get_localized_string(30101), "¿Iniciar la descarga ahora?"):
platformtools.dialog_ok(config.get_localized_string(30101),
str(len(episodes)) + " capitulos de: " + item.contentSerieName,
config.get_localized_string(30109))
else:
for i in episodes:
res = start_download(i)
if res == STATUS_CODES.canceled:
break
def set_movie_title(item):
if not item.contentTitle:
item.contentTitle = re.sub("\[[^\]]+\]|\([^\)]+\)", "", item.fulltitle).strip()
if not item.contentTitle:
item.contentTitle = re.sub("\[[^\]]+\]|\([^\)]+\)", "", item.title).strip()

23
kodi/channels/ecarteleratrailers.json Executable file
View File

@@ -0,0 +1,23 @@
{
"id": "ecarteleratrailers",
"name": "Trailers ecartelera",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "ecarteleratrailers.png",
"banner": "ecarteleratrailers.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"movie"
]
}

84
kodi/channels/ecarteleratrailers.py Executable file
View File

@@ -0,0 +1,84 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import logger
from core import scrapertools
from core.item import Item
def mainlist(item):
logger.info()
itemlist = []
if item.url == "":
item.url = "http://www.ecartelera.com/videos/"
# ------------------------------------------------------
# Download the page
# ------------------------------------------------------
data = scrapertools.cachePage(item.url)
# logger.info(data)
# ------------------------------------------------------
# Extract the movies
# ------------------------------------------------------
patron = '<div class="viditem"[^<]+'
patron += '<div class="fimg"><a href="([^"]+)"><img alt="([^"]+)" src="([^"]+)"/><p class="length">([^<]+)</p></a></div[^<]+'
patron += '<div class="fcnt"[^<]+'
patron += '<h4><a[^<]+</a></h4[^<]+'
patron += '<p class="desc">([^<]+)</p>'
matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedtitle, scrapedthumbnail, duration, scrapedplot in matches:
title = scrapedtitle + " (" + duration + ")"
url = scrapedurl
thumbnail = scrapedthumbnail
plot = scrapedplot.strip()
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="play", title=title, url=url, thumbnail=thumbnail, fanart=thumbnail,
plot=plot, server="directo", folder=False))
# ------------------------------------------------------
# Extract the next page
# ------------------------------------------------------
patron = '<a href="([^"]+)">Siguiente</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
for match in matches:
scrapedtitle = "Pagina siguiente"
scrapedurl = match
scrapedthumbnail = ""
scrapeddescription = ""
# Add to the XBMC listing
itemlist.append(Item(channel=item.channel, action="mainlist", title=scrapedtitle, url=scrapedurl,
thumbnail=scrapedthumbnail, plot=scrapeddescription, server="directo", folder=True,
viewmode="movie_with_plot"))
return itemlist
# Play a video
def play(item):
logger.info()
itemlist = []
# Download the page
data = scrapertools.cachePage(item.url)
logger.info(data)
# Extract the movies
patron = '<source src="([^"]+)"'
matches = re.compile(patron, re.DOTALL).findall(data)
if len(matches) > 0:
url = urlparse.urljoin(item.url, matches[0])
logger.info("url=" + url)
itemlist.append(Item(channel=item.channel, action="play", title=item.title, url=url, thumbnail=item.thumbnail,
plot=item.plot, server="directo", folder=False))
return itemlist

24
kodi/channels/elsenordelanillo.json Executable file
View File

@@ -0,0 +1,24 @@
{
"id": "elsenordelanillo",
"name": "El señor del anillo",
"active": true,
"adult": false,
"language": "es",
"thumbnail": "elsenordelanillo.png",
"banner": "elsenordelanillo.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "01/07/2016",
"description": "Eliminado código innecesario."
}
],
"categories": [
"latino",
"movie"
]
}

216
kodi/channels/elsenordelanillo.py Executable file
View File

@@ -0,0 +1,216 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import logger
from core import scrapertools
from core import servertools
from core.item import Item
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, action="peliculas", title="Novedades",
url="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/", viewmode="movie"))
itemlist.append(Item(channel=item.channel, action="generos", title="Por género",
url="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/"))
itemlist.append(Item(channel=item.channel, action="letras", title="Por letra",
url="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/"))
itemlist.append(Item(channel=item.channel, action="anyos", title="Por año",
url="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/"))
return itemlist
def anyos(item):
logger.info()
    # Download the page
data = scrapertools.cache_page(item.url)
# logger.info("data="+data)
data = scrapertools.find_single_match(data, 'scalas por a(.*?)</ul>')
logger.info("data=" + data)
    # Extract the entries (folders)
patron = '<li><a target="[^"]+" title="[^"]+" href="([^"]+)"><strong>([^<]+)</strong>'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for scrapedurl, scrapedtitle in matches:
title = scrapedtitle.strip()
thumbnail = ""
plot = ""
url = urlparse.urljoin(item.url, scrapedurl)
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="peliculas", title=title, url=url, thumbnail=thumbnail, plot=plot,
fulltitle=title, viewmode="movie"))
return itemlist
def letras(item):
logger.info()
    # Download the page
data = scrapertools.cache_page(item.url)
# logger.info("data="+data)
data = scrapertools.find_single_match(data, '<div class="bkpelsalf_ul(.*?)</ul>')
logger.info("data=" + data)
    # Extract the entries (folders)
# <li><a target="_top" href="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/letra/a.html" title="Películas que comienzan con A">A</a>
patron = '<li><a target="[^"]+" href="([^"]+)" title="[^"]+">([^<]+)</a>'
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for scrapedurl, scrapedtitle in matches:
title = scrapedtitle.strip()
thumbnail = ""
plot = ""
url = urlparse.urljoin(item.url, scrapedurl)
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="peliculas", title=title, url=url, thumbnail=thumbnail, plot=plot,
fulltitle=title, viewmode="movie"))
return itemlist
def generos(item):
logger.info()
    # Download the page
data = scrapertools.cache_page(item.url)
# logger.info("data="+data)
    # Extract the entries (folders)
# <a class='generos' target="_top" href='/pelisdelanillo/categoria/accion/' title='Las Mejores Películas de Acción De Todos Los Años'> Acción </a>
patron = "<a class='generos' target=\"_top\" href='([^']+)' title='[^']+'>([^<]+)</a>"
matches = re.compile(patron, re.DOTALL).findall(data)
itemlist = []
for scrapedurl, scrapedtitle in matches:
title = unicode(scrapedtitle, "iso-8859-1", errors="replace").encode("utf-8").strip()
thumbnail = ""
plot = ""
url = urlparse.urljoin(item.url, scrapedurl)
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="peliculas", title=title, url=url, thumbnail=thumbnail, plot=plot,
fulltitle=title, viewmode="movie"))
return itemlist
def peliculas(item):
logger.info()
itemlist = []
    # Download the page
data = scrapertools.cache_page(item.url)
# logger.info("data="+data)
    # Extract the entries
'''
<!--<pelicula>-->
<li class="peli_bx br1px brdr10px ico_a">
<h2 class="titpeli bold ico_b"><a target="_top" href="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/pelicula/1077/el-jardinero-fiel.html" title="El Jardinero Fiel">El Jardinero Fiel</a></h2>
<div class="peli_img p_relative">
<div class="peli_img_img">
<a target="_top" href="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/pelicula/1077/el-jardinero-fiel.html" title="El Jardinero Fiel">
<img src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/files/uploads/1077.jpg" alt="El Jardinero Fiel" /></a>
</div>
<div>
<center><table border="5" bordercolor="#000000"><tr><td>
<img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/lat.png">
</td><td>
<img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/sub.png">
</td><td>
<img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/no-cam.png">
</td><td>
<img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/dvd.png">
</td><td>
<img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/no-hd.png">
</td></tr></table></center>
</div>
<div class="peli_txt bgdeg8 brdr10px bxshd2 ico_b p_absolute pd15px white">
<div class="plt_tit bold fs14px mgbot10px"><h2 class="bold d_inline fs14px"><font color="black"><b>El Jardinero Fiel</b></font></h2></div>
<div class="plt_ft clf mgtop10px">
<div class="stars f_left pdtop10px"><strong>Genero</strong>: Suspenso, Drama, 2005</div>
<br><br>
<div class="stars f_left pdtop10px"><table><tr><td><strong>Idioma</strong>:</td><td><img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/lat.png"></td><td><img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/sub.png"></td></tr></table></div>
<br /><br />
<div class="stars f_left pdtop10px"><table><tr><td><strong>Calidad</strong>:</td><td><img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/no-cam.png"></td><td><img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/dvd.png"></td><td><img width="26" heigth="17" src="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/Temas/default/img/idioma/no-hd.png"></td></tr></table></div>
<br /><br>
<div class="stars f_left pdtop10px"><strong>Visualizada</strong>: 629 Veces</div>
<a target="_top" class="vrfich bold ico f_right" href="http://www.xn--elseordelanillo-1qb.com/pelisdelanillo/pelicula/1077/el-jardinero-fiel.html" title=""></a>
</div>
</div>
</div>
</li>
<!--</pelicula>-->
'''
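    # Each movie card sits between <!--<pelicula>--> markers (see the sample
    # above); capture each <li> block, then pull url, title and thumbnail out.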
patronbloque = "<!--<pelicula>--[^<]+<li(.*?)</li>"
bloques = re.compile(patronbloque, re.DOTALL).findall(data)
for bloque in bloques:
scrapedurl = scrapertools.find_single_match(bloque, '<a.*?href="([^"]+)"')
scrapedtitle = scrapertools.find_single_match(bloque, '<a.*?title="([^"]+)"')
scrapedthumbnail = scrapertools.find_single_match(bloque, '<img src="([^"]+)"')
title = unicode(scrapedtitle, "iso-8859-1", errors="replace").encode("utf-8")
title = title.strip()
title = scrapertools.htmlclean(title)
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
plot = ""
url = urlparse.urljoin(item.url, scrapedurl)
logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + thumbnail + "]")
itemlist.append(
Item(channel=item.channel, action="findvideos", title=title, url=url, thumbnail=thumbnail, plot=plot,
fulltitle=title))
# </b></span></a></li[^<]+<li><a href="?page=2">
next_page = scrapertools.find_single_match(data, '</b></span></a></li[^<]+<li><a target="_top" href="([^"]+)">')
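    # The captured href is relative (e.g. "?page=2" in the sample above), so
    # it must be resolved against the current URL rather than simply appended.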
if next_page != "":
itemlist.append(
Item(channel=item.channel, action="peliculas", title=">> Página siguiente", url=item.url + next_page,
folder=True, viewmode="movie"))
return itemlist
def findvideos(item):
logger.info()
    # Download the page
data = scrapertools.cache_page(item.url)
# logger.info("data="+data)
    player_url = scrapertools.find_single_match(data, "function cargamos.*?window.open.'([^']+)'")
    data = scrapertools.cache_page(player_url)
itemlist = servertools.find_video_items(data=data)
for videoitem in itemlist:
videoitem.channel = item.channel
videoitem.folder = False
return itemlist
def play(item):
logger.info("url=" + item.url)
itemlist = servertools.find_video_items(data=item.url)
for videoitem in itemlist:
videoitem.title = item.title
videoitem.fulltitle = item.fulltitle
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
return itemlist

27
kodi/channels/eporner.json Executable file

@@ -0,0 +1,27 @@
{
"id": "eporner",
"name": "Eporner",
"active": true,
"adult": true,
"language": "es",
"thumbnail": "eporner.png",
"banner": "eporner.png",
"version": 1,
"changes": [
{
"date": "03/06/2017",
"description": "reparada seccion categorias"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "28/12/16",
"description": "First version"
}
],
"categories": [
"adult"
]
}

145
kodi/channels/eporner.py Executable file

@@ -0,0 +1,145 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import httptools
from core import jsontools
from core import logger
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(item.clone(title="Últimos videos", action="videos", url="http://www.eporner.com/0/"))
itemlist.append(item.clone(title="Categorias", action="categorias", url="http://www.eporner.com/categories/"))
itemlist.append(item.clone(title="Pornstars", action="pornstars_list", url="http://www.eporner.com/pornstars/"))
itemlist.append(item.clone(title="Buscar", action="search", url="http://www.eporner.com/search/%s/"))
return itemlist
def search(item, texto):
logger.info()
item.url = item.url % texto
item.action = "videos"
try:
return videos(item)
except:
import traceback
logger.error(traceback.format_exc())
return []
def pornstars_list(item):
logger.info()
itemlist = []
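    # The performer index is split into alphabetical pages, one per letter.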
for letra in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
itemlist.append(item.clone(title=letra, url=urlparse.urljoin(item.url, letra), action="pornstars"))
return itemlist
def pornstars(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="mbtit" itemprop="name"><a href="([^"]+)" title="([^"]+)">[^<]+</a></div> '
patron += '<a href="[^"]+" title="[^"]+"> <img src="([^"]+)" alt="[^"]+" style="width:190px;height:152px;" /> </a> '
patron += '<div class="mbtim"><span>Videos: </span>([^<]+)</div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, count in matches:
itemlist.append(
item.clone(title="%s (%s videos)" % (title, count), url=urlparse.urljoin(item.url, url), action="videos",
thumbnail=thumbnail))
    # Pagination
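    # The current page number is wrapped in a highlighted (#FFCC00) span; the
    # <a> that follows it links to the next page.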
patron = "<span style='color:#FFCC00;'>[^<]+</span></a> <a href='([^']+)' title='[^']+'><span>[^<]+</span></a>"
matches = re.compile(patron, re.DOTALL).findall(data)
if matches:
itemlist.append(item.clone(title="Pagina siguiente", url=urlparse.urljoin(item.url, matches[0])))
return itemlist
def categorias(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<div class="categoriesbox" id="[^"]+"> <div class="ctbinner"> <a href="([^"]+)" title="[^"]+"> <img src="([^"]+)" alt="[^"]+"> <h2>([^"]+)</h2> </a> </div> </div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, thumbnail, title in matches:
itemlist.append(
item.clone(title=title, url=urlparse.urljoin(item.url, url), action="videos", thumbnail=thumbnail))
return sorted(itemlist, key=lambda i: i.title)
def videos(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = '<a href="([^"]+)" title="([^"]+)" id="[^"]+">.*?<img id="[^"]+" src="([^"]+)"[^>]+>.*?<div class="mbtim">([^<]+)</div>'
matches = re.compile(patron, re.DOTALL).findall(data)
for url, title, thumbnail, duration in matches:
itemlist.append(item.clone(title="%s (%s)" % (title, duration), url=urlparse.urljoin(item.url, url),
action="play", thumbnail=thumbnail, contentThumbnail=thumbnail,
contentType="movie", contentTitle=title))
    # Pagination
patron = "<span style='color:#FFCC00;'>[^<]+</span></a> <a href='([^']+)' title='[^']+'><span>[^<]+</span></a>"
matches = re.compile(patron, re.DOTALL).findall(data)
if matches:
itemlist.append(item.clone(title="Página siguiente", url=urlparse.urljoin(item.url, matches[0])))
return itemlist
def int_to_base36(num):
"""Converts a positive integer into a base36 string."""
assert num >= 0
digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.lower()
res = ''
while not res or num > 0:
num, i = divmod(num, 36)
res = digits[i] + res
return res
def play(item):
logger.info()
itemlist = []
data = httptools.downloadpage(item.url).data
patron = "EP: { vid: '([^']+)', hash: '([^']+)'"
    vid, vhash = re.compile(patron, re.DOTALL).findall(data)[0]
    vhash = "".join(int_to_base36(int(vhash[i:i + 8], 16)) for i in range(0, 32, 8))
    url = "https://www.eporner.com/xhr/video/%s?hash=%s" % (vid, vhash)
data = httptools.downloadpage(url).data
jsondata = jsontools.load(data)
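    # sources["mp4"] maps quality labels (apparently of the form "720p ...") to
    # dicts whose "src" field holds the direct stream URL.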
for source in jsondata["sources"]["mp4"]:
url = jsondata["sources"]["mp4"][source]["src"]
title = source.split(" ")[0]
itemlist.append(["%s %s [directo]" % (title, url[-4:]), url])
return sorted(itemlist, key=lambda i: int(i[0].split("p")[0]))

33
kodi/channels/erotik.json Executable file

@@ -0,0 +1,33 @@
{
"id": "erotik",
"name": "Erotik",
"active": true,
"adult": true,
"language": "es",
"thumbnail": "http://www.youfreeporntube.com/uploads/custom-logo.png",
"banner": "http://www.youfreeporntube.com/uploads/custom-logo.png",
"version": 1,
"changes": [
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "26/12/2016",
"description": "Release."
}
],
"categories": [
"adult"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": true,
"enabled": true,
"visible": true
}
]
}

136
kodi/channels/erotik.py Executable file

@@ -0,0 +1,136 @@
# -*- coding: utf-8 -*-
import re
import urlparse
from core import logger
from core import scrapertools
from core.item import Item
def mainlist(item):
logger.info()
itemlist = []
itemlist.append(Item(channel=item.channel, action="lista", title="Útimos videos",
url="http://www.ero-tik.com/newvideos.html?&page=1"))
itemlist.append(
Item(channel=item.channel, action="categorias", title="Categorias", url="http://www.ero-tik.com/browse.html"))
itemlist.append(Item(channel=item.channel, action="lista", title="Top ultima semana",
url="http://www.ero-tik.com/topvideos.html?do=recent"))
itemlist.append(Item(channel=item.channel, action="search", title="Buscar",
url="http://www.ero-tik.com/search.php?keywords="))
return itemlist
def search(item, texto):
logger.info()
texto = texto.replace(" ", "+")
item.url = "{0}{1}".format(item.url, texto)
try:
return lista(item)
    # Catch the exception so a failing channel does not interrupt the global search
except:
import sys
for line in sys.exc_info():
logger.error("{0}".format(line))
return []
def categorias(item):
logger.info()
itemlist = []
data = scrapertools.cache_page(item.url)
data = re.sub(r"\n|\r|\t|\s{2}", "", data)
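    # Collapse newlines, tabs and double spaces so the patterns can match the
    # flattened markup.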
patron = '<div class="pm-li-category"><a href="([^"]+)">.*?.<h3>(.*?)</h3></a>'
matches = re.compile(patron, re.DOTALL).findall(data)
    for url, category in matches:
        itemlist.append(Item(channel=item.channel, action="listacategoria", title=category, url=url))
return itemlist
def lista(item):
logger.info()
itemlist = []
    # Download the page
    data = scrapertools.cache_page(item.url)
    data = re.sub(r"\n|\r|\t|\s{2}", "", data)
    # Extract the entries from the selected page
    patron = '<li><div class=".*?<a href="([^"]+)".*?>.*?.img src="([^"]+)".*?alt="([^"]+)".*?>'
    matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
title = scrapedtitle.strip()
        # Add the entry to the listing
itemlist.append(Item(channel=item.channel, action="play", thumbnail=thumbnail, fanart=thumbnail, title=title,
fulltitle=title, url=url,
viewmode="movie", folder=True))
paginacion = scrapertools.find_single_match(data,
'<li class="active"><a href="#" onclick="return false;">\d+</a></li><li class=""><a href="([^"]+)">')
if paginacion:
itemlist.append(Item(channel=item.channel, action="lista", title=">> Página Siguiente",
url="http://ero-tik.com/" + paginacion))
return itemlist
def listacategoria(item):
logger.info()
itemlist = []
    # Download the page
    data = scrapertools.cache_page(item.url)
    data = re.sub(r"\n|\r|\t|\s{2}", "", data)
    # Extract the entries from the selected page
    patron = '<li><div class=".*?<a href="([^"]+)".*?>.*?.img src="([^"]+)".*?alt="([^"]+)".*?>'
    matches = re.compile(patron, re.DOTALL).findall(data)
for scrapedurl, scrapedthumbnail, scrapedtitle in matches:
url = urlparse.urljoin(item.url, scrapedurl)
thumbnail = urlparse.urljoin(item.url, scrapedthumbnail)
title = scrapedtitle.strip()
        # Add the entry to the listing
itemlist.append(
Item(channel=item.channel, action="play", thumbnail=thumbnail, title=title, fulltitle=title, url=url,
viewmode="movie", folder=True))
paginacion = scrapertools.find_single_match(data,
'<li class="active"><a href="#" onclick="return false;">\d+</a></li><li class=""><a href="([^"]+)">')
if paginacion:
itemlist.append(
Item(channel=item.channel, action="listacategoria", title=">> Página Siguiente", url=paginacion))
return itemlist
def play(item):
logger.info()
itemlist = []
    # Download the page
data = scrapertools.cachePage(item.url)
data = scrapertools.unescape(data)
logger.info(data)
from core import servertools
itemlist.extend(servertools.find_video_items(data=data))
for videoitem in itemlist:
videoitem.thumbnail = item.thumbnail
videoitem.channel = item.channel
videoitem.action = "play"
videoitem.folder = False
videoitem.title = item.title
return itemlist

78
kodi/channels/estadepelis.json Executable file

@@ -0,0 +1,78 @@
{
"id": "estadepelis",
"name": "Estadepelis",
"compatible": {
"addon_version": "4.3"
},
"active": true,
"adult": false,
"language": "es",
"thumbnail": "https://s24.postimg.org/nsgit7fhh/estadepelis.png",
"banner": "https://s28.postimg.org/ud0l032ul/estadepelis_banner.png",
"version": 1,
"changes": [
{
"date": "24/06/2017",
"description": "Cambios para autoplay"
},
{
"date": "22/06/2017",
"description": "ajustes para AutoPlay"
},
{
"date": "25/05/2017",
"description": "cambios esteticos"
},
{
"date": "15/03/2017",
"description": "limpieza código"
},
{
"date": "07/02/2017",
"description": "Release"
}
],
"categories": [
"latino",
"movie"
],
"settings": [
{
"id": "include_in_global_search",
"type": "bool",
"label": "Incluir en busqueda global",
"default": false,
"enabled": false,
"visible": false
},
{
"id": "filter_languages",
"type": "list",
"label": "Mostrar enlaces en idioma...",
"default": 0,
"enabled": true,
"visible": true,
"lvalues": [
"No filtrar",
"Latino",
"VOS"
]
},
{
"id": "include_in_newest_peliculas",
"type": "bool",
"label": "Incluir en Novedades - Peliculas",
"default": true,
"enabled": true,
"visible": true
},
{
"id": "include_in_newest_infantiles",
"type": "bool",
"label": "Incluir en Novedades - Infantiles",
"default": true,
"enabled": true,
"visible": true
}
]
}

Some files were not shown because too many files have changed in this diff