From 6e13daf4df78cade24ab422ee82b26ce72be883d Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 18:48:07 +0200 Subject: [PATCH 01/34] new_icon --- .../media/themes/default/thumb_mylink.png | Bin 0 -> 36313 bytes 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 plugin.video.alfa/resources/media/themes/default/thumb_mylink.png diff --git a/plugin.video.alfa/resources/media/themes/default/thumb_mylink.png b/plugin.video.alfa/resources/media/themes/default/thumb_mylink.png new file mode 100644 index 0000000000000000000000000000000000000000..8802eb5c6e727836d3954e4c09383324c56b2d32 GIT binary patch literal 36313 zcma&N1ymi~vMn5(jl+fj!QFzp2lo)%-Q8V-LkRBf1b26LcZWc5cjw>aobSH-kMYjE zj{#%Ty=zsM%vEdF>h92QU&RpNKEZwX@BvXmTv-0Y2e8feKUiqc8L!#)c+dy7qll`b zf{n4Gi=MsF2LVGHeIr5%D?Jk;Nf?*)iba#awOC@GBLB}B|dBEASN_3W?BOVMn*z5HabQYRyH=q&xA}2j7;$3QBU8-$&r^BWa+2?=Jl#C?oSf zi&|Ozm$ZYUywU%$_y4NcLD9|Dh+f{v!N$qn0F*dmlJ`s5atYZR={efiE85st{u9M- zrZ$c?4yHD?ghC2zgp?{~)`m7N4pe{TWn{P{tQ{QntPPALgn5ZU8tBZ-47mgt85u;` zg@pteMVT2HML0ManK?xn*aU?cgxCcI7=`{BD{N!nWMyRS_|I6w|1(zjKaPEef|V`k z%ECtWX3j>2qV_gcgnwP;GW*ZA2>eHV{~l}jpKTHRk7MaU#?Zef_P-_ipHm?8y#M=O zumv6b7x0a&L1?!J!8(Xx{pW`dVKNfJ0*bE7$89i;q)STV_ZuOO6Nz{2#cjb^+vv6Ro(E2(+mh32*%jZ#tc2|opkWWNdCTBoBkNwu#=;363l|WUyux=%{H_O zMp{%V4x3m|?@oEyi7!P&MTJElJ(Kq)!iE#ElToADiBEmr9L9OYDDyjY8L$wb4Js=e z0iaL+E?scr_Y?krq^`f;02x5%7_g)}|9;7klv@*w7RcZ9U~?;dw2Uzj}(+InJ7=!_L@Gg9F|WZuTLm(tJy9r$luLge)1r5 zyqj+*z}q6!*0PH%gWj0^bOGNi%U||%J@a@o^GMRXJwW>fJl-M&$YQe7aF9q>75BR5 zzSc@yt_nk^1O7D~BEo!p$yxX++}maV{9I&=`L0!yIcpc$Wo>1CzQuRqmDBbds`GNY zlin+Fm*(I?mYZL2cpZ2?h=^O-y@qnz)9j=TH_|N2KE>#DTOi|a9scP)1{B5cu^@&6 ztLuXltvhG27tA<0-_Q(o>_mtfe^6#(< z`6iTnyA@+`dH(MG2K@Czlk8j5#v#=+6+cl?f)47f*`nY21Fu(DL~d(z2TpF98fT7c+zm_C%?l5&w*jk!js*)%JyxTw~>@ zDZULjxkF8yo9cX|zWJd8%02$ea?nl`^}7p>Qr%%3K?yJnN{@sejrp?P`1Bf0asWx6 zboaVo7~9U-o8!GjBBkpu=DI=Z!`pplE6CbK_C391L2E1DqJKjZSyBTrUDMzO1uo0L- zfVV2PKK!-w$=br8#v5)NLTugl;P;c98tuiB(YuY}Jt6(S@AOynV1iFw+07$#mMi&D zs7YFW@N>j^m6S^eTrynFixJJ?W-7g#V)^MCl?IuQ2=Fq}psj85^NkS2W0o(oj5?bV z`iBv&a-Yu3{Y2^Z_i!$1W|>JO+!$0IJ5`kJ>9W0agudF?lB}w9&*X-p+ zNFiN%j;`7DBw1RzugWx2AZ4a+;VbrWw~M{G#Jg-bRKOw>KlaAiyU;h_{eC8wy{UEl z&iZR!lF7&ZX~2{by$ z>f5}>mu$JId_2bVhu8+eU2C}Zl#NRvls;lHKIZp&ne(L%)$)dp$Kh2)u859xUXo)K z20SAZaA0GBdECvN^mocL+7X@t_30zdo(gSeZCsj@**rUkb@nxWYg~^vT&Vq>5xw2x zKJ>!cox{29wcn577BdhbD<_4YzuN9IzZ*l2o#4(Gb}3LM-LW{d073!5OB_YZ9c~9q zh0ooO&Dq9__u9hPG?@6A=LUJhPu6Sm#|@Lj#+8Q`;ik9BW6z_DOi`Evsv4kK`98DvE8^DJZlkqyNQ3L)Gb2Ve{*q=dJnxnDw&>BLs9G`CV2hq)UDR&U(2Nf;Mx(bJ#>b? 
[... remainder of base85-encoded PNG binary patch data omitted (thumb_mylink.png, 36,313 bytes) ...]
literal 0 HcmV?d00001 From 487c24c4661bf863f9c02f0ba8fcc31626a9ea40 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 19:40:25 +0200 Subject: [PATCH 02/34] added localized string --- plugin.video.alfa/channelselector.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/plugin.video.alfa/channelselector.py b/plugin.video.alfa/channelselector.py index ceb35d29..db7306a3 100644 --- a/plugin.video.alfa/channelselector.py +++ b/plugin.video.alfa/channelselector.py @@ -24,9 +24,9 @@ def getmainlist(view="thumb_"): thumbnail=get_thumb("channels.png", view), view=view, category=config.get_localized_string(30119), viewmode="thumbnails")) - itemlist.append(Item(title='Mis enlaces', channel="alfavorites", action="mainlist", - thumbnail=get_thumb("favorites.png", view), view=view, - category='Mis enlaces', viewmode="thumbnails")) + itemlist.append(Item(title=config.get_localized_string(70527), channel="alfavorites", action="mainlist", + thumbnail=get_thumb("mylink.png", view), view=view, + category=config.get_localized_string(70527), viewmode="thumbnails")) itemlist.append(Item(title=config.get_localized_string(30103), channel="search", action="mainlist", thumbnail=get_thumb("search.png", view), @@ -267,4 +267,4 @@ def set_channel_info(parameters): content = config.get_localized_category(cat) info = '[COLOR yellow]Tipo de contenido:[/COLOR] %s\n\n[COLOR yellow]Idiomas:[/COLOR] %s' % (content, language) - return info \ No newline at end of file + return info From cb36622cbae906f744bb136e960f3ae38b269cc2 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 19:42:42 +0200 Subject: [PATCH 03/34] Update strings.po --- .../resources/language/Spanish/strings.po | 144 +++++++++++++++++- 1 file changed, 140 insertions(+), 4 deletions(-) diff --git a/plugin.video.alfa/resources/language/Spanish/strings.po b/plugin.video.alfa/resources/language/Spanish/strings.po index 39798fd8..9d7814b1 100644 --- a/plugin.video.alfa/resources/language/Spanish/strings.po +++ b/plugin.video.alfa/resources/language/Spanish/strings.po @@ -4792,13 +4792,149 @@ msgid "Verification of counters of videos seen / not seen (uncheck to verify)" msgstr "Verificación de los contadores de vídeos vistos/no vistos (desmarcar para verificar)" msgctxt "#70527" +msgid "My links" +msgstr 'Mis enlaces' + +msgctxt "#70528" +msgid "Default folder" +msgstr "Carpeta por defecto" + +msgctxt "#70529" +msgid "Repeated link" +msgstr "Enlace repetido" + +msgctxt "#70530" +msgid "You already have this link in the folder" +msgstr "Ya tienes este enlace en la carpeta" + +msgctxt "#70531" +msgid "Saved link" +msgstr "Guardado enlace" + +msgctxt "#70532" +msgid "Folder: %s" +msgstr "Carpeta: %s" + +msgctxt "#70533" +msgid "Rename folder" +msgstr "Cambiar nombre de la carpeta" + +msgctxt "#70534" +msgid "Delete folder" +msgstr "Eliminar la carpeta" + +msgctxt "#70535" +msgid "Move up all" +msgstr "Mover arriba del todo" + +msgctxt "#70536" +msgid "Move up" +msgstr "Mover hacia arriba" + +msgctxt "#70537" +msgid "Move down" +msgstr "Mover hacia abajo" + +msgctxt "#70538" +msgid "Move down all" +msgstr "Mover abajo del todo" + +msgctxt "#70539" +msgid "* Create different folders to store your favorite links within Icarus. 
[CR]" +msgstr "* Crea diferentes carpetas para guardar tus enlaces favoritos dentro de Icarus.[CR]]" + +msgctxt "#70540" +msgid "* To add links to folders, access the context menu from any point in Icarus.[CR]" +msgstr "* Para añadir enlaces a las carpetas accede al menú contextual desde cualquier punto de Icarus.[CR]" + +msgctxt "#70541" +msgid "* The links can be channels, sections within the channels, searches, and even movies and series although for the latter it is preferable to use the video library." +msgstr "* Los enlaces pueden ser canales, secciones dentro de los canales, búsquedas, e incluso películas y series aunque para esto último es preferible utilizar la videoteca." + +msgctxt "#70542" +msgid "Create new folder ..." +msgstr "Crear nueva carpeta ..." + +msgctxt "#70543" +msgid "Move to another folder" +msgstr "Mover a otra carpeta" + +msgctxt "#70544" +msgid "Change title" +msgstr "Cambiar título" + +msgctxt "#70545" +msgid "Change color" +msgstr "Cambiar color" + +msgctxt "#70546" +msgid "Save link in:" +msgstr "Guardar enlace en:" + +msgctxt "#70547" +msgid "Change thumbnail" +msgstr "Cambiar thumbnail" + +msgctxt "#70548" +msgid "Delete link" +msgstr "Eliminar enlace" + +msgctxt "#70549" +msgid "Select folder" +msgstr "Seleccionar carpeta" + +msgctxt "#70550" +msgid "Create new folder" +msgstr "Crear nueva carpeta" + +msgctxt "#70551" +msgid "Folder name" +msgstr "Nombre de la carpeta" + +msgctxt "#70552" +msgid "Delete the folder and links it contains?" +msgstr "¿Borrar la carpeta y los enlaces que contiene?" + +msgctxt "#70553" +msgid "Change link title" +msgstr "Cambiar título del enlace" + +msgctxt "#70554" +msgid "Select thumbnail:" +msgstr "Seleccionar thumbnail:" + +msgctxt "#70555" +msgid "Move link to:" +msgstr "Mover enlace a:" + +msgctxt "#70556" +msgid "%d links in folder" +msgstr "%d enlaces en la carpeta" + +msgctxt "#70557" +msgid "Save link" +msgstr "Guardar enlace" + +msgctxt "#70558" +msgid "Select color:" +msgstr "Seleccionar color:" + +msgctxt "#70559" msgid "Now in Theatres " msgstr "Ahora en cines" -msgctxt "#70528" +msgctxt "#70560" msgid "Movies by Genre" msgstr "Por generos" -msgctxt "#70529" -msgid "tv show" -msgstr "serie" +msgctxt "#70561" +msgid "Search Similar +msgstr "Buscar Similares" + + + + + + + + From 44d70f9dd2d5178ed26414c089b348412ad87c4e Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 19:43:30 +0200 Subject: [PATCH 04/34] Update strings.po --- .../language/Spanish (Mexico)/strings.po | 145 +++++++++++++++++- 1 file changed, 141 insertions(+), 4 deletions(-) diff --git a/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po b/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po index 39798fd8..cfe95a5c 100644 --- a/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po +++ b/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po @@ -4792,13 +4792,150 @@ msgid "Verification of counters of videos seen / not seen (uncheck to verify)" msgstr "Verificación de los contadores de vídeos vistos/no vistos (desmarcar para verificar)" msgctxt "#70527" +msgid "My links" +msgstr 'Mis enlaces' + +msgctxt "#70528" +msgid "Default folder" +msgstr "Carpeta por defecto" + +msgctxt "#70529" +msgid "Repeated link" +msgstr "Enlace repetido" + +msgctxt "#70530" +msgid "You already have this link in the folder" +msgstr "Ya tienes este enlace en la carpeta" + +msgctxt "#70531" +msgid "Saved link" +msgstr "Guardado enlace" + +msgctxt "#70532" +msgid 
"Folder: %s" +msgstr "Carpeta: %s" + +msgctxt "#70533" +msgid "Rename folder" +msgstr "Cambiar nombre de la carpeta" + +msgctxt "#70534" +msgid "Delete folder" +msgstr "Eliminar la carpeta" + +msgctxt "#70535" +msgid "Move up all" +msgstr "Mover arriba del todo" + +msgctxt "#70536" +msgid "Move up" +msgstr "Mover hacia arriba" + +msgctxt "#70537" +msgid "Move down" +msgstr "Mover hacia abajo" + +msgctxt "#70538" +msgid "Move down all" +msgstr "Mover abajo del todo" + +msgctxt "#70539" +msgid "* Create different folders to store your favorite links within Icarus. [CR]" +msgstr "* Crea diferentes carpetas para guardar tus enlaces favoritos dentro de Icarus.[CR]]" + +msgctxt "#70540" +msgid "* To add links to folders, access the context menu from any point in Icarus.[CR]" +msgstr "* Para añadir enlaces a las carpetas accede al menú contextual desde cualquier punto de Icarus.[CR]" + +msgctxt "#70541" +msgid "* The links can be channels, sections within the channels, searches, and even movies and series although for the latter it is preferable to use the video library." +msgstr "* Los enlaces pueden ser canales, secciones dentro de los canales, búsquedas, e incluso películas y series aunque para esto último es preferible utilizar la videoteca." + +msgctxt "#70542" +msgid "Create new folder ..." +msgstr "Crear nueva carpeta ..." + +msgctxt "#70543" +msgid "Move to another folder" +msgstr "Mover a otra carpeta" + +msgctxt "#70544" +msgid "Change title" +msgstr "Cambiar título" + +msgctxt "#70545" +msgid "Change color" +msgstr "Cambiar color" + +msgctxt "#70546" +msgid "Save link in:" +msgstr "Guardar enlace en:" + +msgctxt "#70547" +msgid "Change thumbnail" +msgstr "Cambiar thumbnail" + +msgctxt "#70548" +msgid "Delete link" +msgstr "Eliminar enlace" + +msgctxt "#70549" +msgid "Select folder" +msgstr "Seleccionar carpeta" + +msgctxt "#70550" +msgid "Create new folder" +msgstr "Crear nueva carpeta" + +msgctxt "#70551" +msgid "Folder name" +msgstr "Nombre de la carpeta" + +msgctxt "#70552" +msgid "Delete the folder and links it contains?" +msgstr "¿Borrar la carpeta y los enlaces que contiene?" 
+ +msgctxt "#70553" +msgid "Change link title" +msgstr "Cambiar título del enlace" + +msgctxt "#70554" +msgid "Select thumbnail:" +msgstr "Seleccionar thumbnail:" + +msgctxt "#70555" +msgid "Move link to:" +msgstr "Mover enlace a:" + +msgctxt "#70556" +msgid "%d links in folder" +msgstr "%d enlaces en la carpeta" + +msgctxt "#70557" +msgid "Save link" +msgstr "Guardar enlace" + +msgctxt "#70558" +msgid "Select color:" +msgstr "Seleccionar color:" + +msgctxt "#70559" msgid "Now in Theatres " msgstr "Ahora en cines" -msgctxt "#70528" +msgctxt "#70560" msgid "Movies by Genre" msgstr "Por generos" -msgctxt "#70529" -msgid "tv show" -msgstr "serie" +msgctxt "#70561" +msgid "Search Similar +msgstr "Buscar Similares" + + + + + + + + + From 1f02b33ce7929941206cdae3994c1b805fc02baf Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 19:44:46 +0200 Subject: [PATCH 05/34] Update strings.po --- .../language/Spanish (Argentina)/strings.po | 145 +++++++++++++++++- 1 file changed, 141 insertions(+), 4 deletions(-) diff --git a/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po b/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po index 39798fd8..cfe95a5c 100644 --- a/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po +++ b/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po @@ -4792,13 +4792,150 @@ msgid "Verification of counters of videos seen / not seen (uncheck to verify)" msgstr "Verificación de los contadores de vídeos vistos/no vistos (desmarcar para verificar)" msgctxt "#70527" +msgid "My links" +msgstr 'Mis enlaces' + +msgctxt "#70528" +msgid "Default folder" +msgstr "Carpeta por defecto" + +msgctxt "#70529" +msgid "Repeated link" +msgstr "Enlace repetido" + +msgctxt "#70530" +msgid "You already have this link in the folder" +msgstr "Ya tienes este enlace en la carpeta" + +msgctxt "#70531" +msgid "Saved link" +msgstr "Guardado enlace" + +msgctxt "#70532" +msgid "Folder: %s" +msgstr "Carpeta: %s" + +msgctxt "#70533" +msgid "Rename folder" +msgstr "Cambiar nombre de la carpeta" + +msgctxt "#70534" +msgid "Delete folder" +msgstr "Eliminar la carpeta" + +msgctxt "#70535" +msgid "Move up all" +msgstr "Mover arriba del todo" + +msgctxt "#70536" +msgid "Move up" +msgstr "Mover hacia arriba" + +msgctxt "#70537" +msgid "Move down" +msgstr "Mover hacia abajo" + +msgctxt "#70538" +msgid "Move down all" +msgstr "Mover abajo del todo" + +msgctxt "#70539" +msgid "* Create different folders to store your favorite links within Icarus. [CR]" +msgstr "* Crea diferentes carpetas para guardar tus enlaces favoritos dentro de Icarus.[CR]]" + +msgctxt "#70540" +msgid "* To add links to folders, access the context menu from any point in Icarus.[CR]" +msgstr "* Para añadir enlaces a las carpetas accede al menú contextual desde cualquier punto de Icarus.[CR]" + +msgctxt "#70541" +msgid "* The links can be channels, sections within the channels, searches, and even movies and series although for the latter it is preferable to use the video library." +msgstr "* Los enlaces pueden ser canales, secciones dentro de los canales, búsquedas, e incluso películas y series aunque para esto último es preferible utilizar la videoteca." + +msgctxt "#70542" +msgid "Create new folder ..." +msgstr "Crear nueva carpeta ..." 
+ +msgctxt "#70543" +msgid "Move to another folder" +msgstr "Mover a otra carpeta" + +msgctxt "#70544" +msgid "Change title" +msgstr "Cambiar título" + +msgctxt "#70545" +msgid "Change color" +msgstr "Cambiar color" + +msgctxt "#70546" +msgid "Save link in:" +msgstr "Guardar enlace en:" + +msgctxt "#70547" +msgid "Change thumbnail" +msgstr "Cambiar thumbnail" + +msgctxt "#70548" +msgid "Delete link" +msgstr "Eliminar enlace" + +msgctxt "#70549" +msgid "Select folder" +msgstr "Seleccionar carpeta" + +msgctxt "#70550" +msgid "Create new folder" +msgstr "Crear nueva carpeta" + +msgctxt "#70551" +msgid "Folder name" +msgstr "Nombre de la carpeta" + +msgctxt "#70552" +msgid "Delete the folder and links it contains?" +msgstr "¿Borrar la carpeta y los enlaces que contiene?" + +msgctxt "#70553" +msgid "Change link title" +msgstr "Cambiar título del enlace" + +msgctxt "#70554" +msgid "Select thumbnail:" +msgstr "Seleccionar thumbnail:" + +msgctxt "#70555" +msgid "Move link to:" +msgstr "Mover enlace a:" + +msgctxt "#70556" +msgid "%d links in folder" +msgstr "%d enlaces en la carpeta" + +msgctxt "#70557" +msgid "Save link" +msgstr "Guardar enlace" + +msgctxt "#70558" +msgid "Select color:" +msgstr "Seleccionar color:" + +msgctxt "#70559" msgid "Now in Theatres " msgstr "Ahora en cines" -msgctxt "#70528" +msgctxt "#70560" msgid "Movies by Genre" msgstr "Por generos" -msgctxt "#70529" -msgid "tv show" -msgstr "serie" +msgctxt "#70561" +msgid "Search Similar +msgstr "Buscar Similares" + + + + + + + + + From 3181eef6a4a98a88f02522e7a8973649d115c923 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 19:47:11 +0200 Subject: [PATCH 06/34] Update strings.po --- .../resources/language/Italian/strings.po | 137 +++++++++++++++++- 1 file changed, 133 insertions(+), 4 deletions(-) diff --git a/plugin.video.alfa/resources/language/Italian/strings.po b/plugin.video.alfa/resources/language/Italian/strings.po index 9fa6c505..21484f69 100644 --- a/plugin.video.alfa/resources/language/Italian/strings.po +++ b/plugin.video.alfa/resources/language/Italian/strings.po @@ -4792,14 +4792,143 @@ msgid "Verification of counters of videos seen / not seen (uncheck to verify)" msgstr "Verifica dei contatori di video visti/non visti (deselezionare per verificare)" msgctxt "#70527" +msgid "My links" +msgstr "I Miei Link" + +msgctxt "#70528" +msgid "Default folder" +msgstr "Cartella di Default" + +msgctxt "#70529" +msgid "Repeated link" +msgstr "Link ripetuto" + +msgctxt "#70530" +msgid "You already have this link in the folder" +msgstr "C'è già un link nella cartella" + +msgctxt "#70531" +msgid "Saved link" +msgstr "Link salvato" + +msgctxt "#70532" +msgid "Folder: %s" +msgstr "Cartella: %s" + +msgctxt "#70533" +msgid "Rename folder" +msgstr "Cambia nome alla cartella" + +msgctxt "#70534" +msgid "Delete folder" +msgstr "Elimina la cartella" + +msgctxt "#70535" +msgid "Move up all" +msgstr "Sposta tutto in alto" + +msgctxt "#70536" +msgid "Move up" +msgstr "Sposta in su" + +msgctxt "#70537" +msgid "Move down" +msgstr "Sposta in giù" + +msgctxt "#70538" +msgid "Move down all" +msgstr "Sposta tutto in basso" + +msgctxt "#70539" +msgid "* Create different folders to store your favorite links within Icarus. [CR]" +msgstr "* Crea diverse cartelle per memorizzare i tuoi collegamenti preferiti all'interno di Icarus." 
+ +msgctxt "#70540" +msgid "* To add links to folders, access the context menu from any point in Icarus.[CR]" +msgstr "* Per aggiungere collegamenti alle cartelle accedi al menu contestuale da qualsiasi punto di Icarus." + +msgctxt "#70541" +msgid "* The links can be channels, sections within the channels, searches, and even movies and series although for the latter it is preferable to use the video library." +msgstr "* I collegamenti possono essere canali, sezioni all'interno dei canali, ricerche e persino film e serie, sebbene per quest'ultimo sia preferibile utilizzare la videoteca." + +msgctxt "#70542" +msgid "Create new folder ..." +msgstr "Crea nuova cartella ..." + +msgctxt "#70543" +msgid "Move to another folder" +msgstr "Sposta in altra cartella" + +msgctxt "#70544" +msgid "Change title" +msgstr "Cambia titolo" + +msgctxt "#70545" +msgid "Change color" +msgstr "Cambia colore" + +msgctxt "#70546" +msgid "Save link in:" +msgstr "Salva link in:" + +msgctxt "#70547" +msgid "Change thumbnail" +msgstr "Cambia thumbnail" + +msgctxt "#70548" +msgid "Delete link" +msgstr "Elimina link" + +msgctxt "#70549" +msgid "Select folder" +msgstr "Seleziona cartella" + +msgctxt "#70550" +msgid "Create new folder" +msgstr "Crea nuova cartella" + +msgctxt "#70551" +msgid "Folder name" +msgstr "Nome della cartella" + +msgctxt "#70552" +msgid "Delete the folder and links it contains?" +msgstr "Eliminare la cartella con tutti i link?" + +msgctxt "#70553" +msgid "Change link title" +msgstr "Cambia titolo del link" + +msgctxt "#70554" +msgid "Select thumbnail:" +msgstr "Seleziona thumbnail:" + +msgctxt "#70555" +msgid "Move link to:" +msgstr "Sposta link in:" + +msgctxt "#70556" +msgid "%d links in folder" +msgstr "%d link nella cartella" + +msgctxt "#70557" +msgid "Save link" +msgstr "Salva link" + +msgctxt "#70558" +msgid "Select color:" +msgstr "Seleziona colore:" + +msgctxt "#70559" msgid "Now in Theatres " msgstr "Oggi in Sala" -msgctxt "#70528" +msgctxt "#70560" msgid "Movies by Genre" msgstr "Per genere" -msgctxt "#70529" -msgid "tv show" -msgstr "serie" +msgctxt "#70561" +msgid "Search Similar +msgstr "Cerca Simili" + From 3c98be46bf36f9daa1c606f186d383fe21a46194 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 19:48:23 +0200 Subject: [PATCH 07/34] Update strings.po --- .../resources/language/English/strings.po | 134 +++++++++++++++++- 1 file changed, 131 insertions(+), 3 deletions(-) diff --git a/plugin.video.alfa/resources/language/English/strings.po b/plugin.video.alfa/resources/language/English/strings.po index ee9d9859..8cac0538 100644 --- a/plugin.video.alfa/resources/language/English/strings.po +++ b/plugin.video.alfa/resources/language/English/strings.po @@ -4804,14 +4804,142 @@ msgid "Verification of counters of videos seen / not seen (uncheck to verify)" msgstr "" msgctxt "#70527" -msgid "Now in Theatres " +msgid "My links" msgstr "" msgctxt "#70528" -msgid "Movies by Genre" +msgid "Default folder" msgstr "" msgctxt "#70529" -msgid "tv show" +msgid "Repeated link" +msgstr "" + +msgctxt "#70530" +msgid "You already have this link in the folder" +msgstr "" + +msgctxt "#70531" +msgid "Saved link" +msgstr "" + +msgctxt "#70532" +msgid "Folder: %s" +msgstr "" + +msgctxt "#70533" +msgid "Rename folder" +msgstr "" + +msgctxt "#70534" +msgid "Delete folder" +msgstr "" + +msgctxt "#70535" +msgid "Move up all" +msgstr "" + +msgctxt "#70536" +msgid "Move up" +msgstr "" + +msgctxt "#70537" +msgid "Move down" +msgstr "" + +msgctxt "#70538" 
+msgid "Move down all" +msgstr "" + +msgctxt "#70539" +msgid "* Create different folders to store your favorite links within Icarus. [CR]" +msgstr "" + +msgctxt "#70540" +msgid "* To add links to folders, access the context menu from any point in Icarus.[CR]" +msgstr "" + +msgctxt "#70541" +msgid "* The links can be channels, sections within the channels, searches, and even movies and series although for the latter it is preferable to use the video library." +msgstr "" + +msgctxt "#70542" +msgid "Create new folder ..." +msgstr "Creaa nuova cartella ..." + +msgctxt "#70543" +msgid "Move to another folder" +msgstr "" + +msgctxt "#70544" +msgid "Change title" +msgstr "" + +msgctxt "#70545" +msgid "Change color" +msgstr "" + +msgctxt "#70546" +msgid "Save link in:" +msgstr "" + +msgctxt "#70547" +msgid "Change thumbnail" +msgstr "" + +msgctxt "#70548" +msgid "Delete link" +msgstr "" + +msgctxt "#70549" +msgid "Select folder" +msgstr "" + +msgctxt "#70550" +msgid "Create new folder" +msgstr "" + +msgctxt "#70551" +msgid "Folder name" +msgstr "" + +msgctxt "#70552" +msgid "Delete the folder and links it contains?" +msgstr "" + +msgctxt "#70553" +msgid "Change link title" +msgstr "" + +msgctxt "#70554" +msgid "Select thumbnail:" +msgstr "" + +msgctxt "#70555" +msgid "Move link to:" +msgstr "" + +msgctxt "#70556" +msgid "%d links in folder" +msgstr "" + +msgctxt "#70557" +msgid "Save link" +msgstr "" + +msgctxt "#70558" +msgid "Select color:" +msgstr "" + +msgctxt "#70559" +msgid "Now in Theatres " +msgstr "" + +msgctxt "#70560" +msgid "Movies by Genre" +msgstr " + +msgctxt "#70561" +msgid "Search Similar msgstr "" From 839876993c19ba19b0502aab9f6a4314cb3fda60 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 19:53:38 +0200 Subject: [PATCH 08/34] update localized strings --- plugin.video.alfa/channelselector.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/channelselector.py b/plugin.video.alfa/channelselector.py index db7306a3..da98f9ee 100644 --- a/plugin.video.alfa/channelselector.py +++ b/plugin.video.alfa/channelselector.py @@ -197,7 +197,7 @@ def filterchannels(category, view="thumb_"): thumbnail=channel_parameters["thumbnail"], type="generic", viewmode="list")) if category in ['movie', 'tvshow']: - titles = [config.get_localized_string(70028), config.get_localized_string(30985), config.get_localized_string(70527), config.get_localized_string(60264), config.get_localized_string(70528)] + titles = [config.get_localized_string(70028), config.get_localized_string(30985), config.get_localized_string(70559), config.get_localized_string(60264), config.get_localized_string(70560)] ids = ['popular', 'top_rated', 'now_playing', 'on_the_air'] for x in range(0,3): if x == 2 and category != 'movie': From b8b1b5317c81597ff8698a4b7df8ba5a80324de2 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 21:49:47 +0200 Subject: [PATCH 09/34] Update strings.po --- plugin.video.alfa/resources/language/Spanish/strings.po | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/resources/language/Spanish/strings.po b/plugin.video.alfa/resources/language/Spanish/strings.po index 9d7814b1..ab09763a 100644 --- a/plugin.video.alfa/resources/language/Spanish/strings.po +++ b/plugin.video.alfa/resources/language/Spanish/strings.po @@ -1734,7 +1734,7 @@ msgid "[COLOR %s]Filter configuration for TV series...[/COLOR]" msgstr "[COLOR %s]Configurar 
filtro para series...[/COLOR]" msgctxt "#60430" -msgid "FILTRO: Delete '%s'" +msgid "FILTER: Delete '%s'" msgstr "FILTRO: Borrar '%s'" msgctxt "#60431" From 995cd26bfaae6983d206668fb2bc81fdef6e754c Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 21:51:41 +0200 Subject: [PATCH 10/34] Update strings.po --- .../resources/language/Spanish (Mexico)/strings.po | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po b/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po index cfe95a5c..677d30a4 100644 --- a/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po +++ b/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po @@ -1734,7 +1734,7 @@ msgid "[COLOR %s]Filter configuration for TV series...[/COLOR]" msgstr "[COLOR %s]Configurar filtro para series...[/COLOR]" msgctxt "#60430" -msgid "FILTRO: Delete '%s'" +msgid "FILTER: Delete '%s'" msgstr "FILTRO: Borrar '%s'" msgctxt "#60431" From 14a5125910e451378fcc37bcedb9143b9a507ae6 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 21:52:47 +0200 Subject: [PATCH 11/34] Update strings.po --- .../resources/language/Spanish (Argentina)/strings.po | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po b/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po index cfe95a5c..677d30a4 100644 --- a/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po +++ b/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po @@ -1734,7 +1734,7 @@ msgid "[COLOR %s]Filter configuration for TV series...[/COLOR]" msgstr "[COLOR %s]Configurar filtro para series...[/COLOR]" msgctxt "#60430" -msgid "FILTRO: Delete '%s'" +msgid "FILTER: Delete '%s'" msgstr "FILTRO: Borrar '%s'" msgctxt "#60431" From e65b4b0cee46175b8e7bacd6eb0ee0e19f93de53 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Thu, 6 Sep 2018 21:55:09 +0200 Subject: [PATCH 12/34] Update strings.po --- plugin.video.alfa/resources/language/English/strings.po | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/resources/language/English/strings.po b/plugin.video.alfa/resources/language/English/strings.po index 8cac0538..35faa2f0 100644 --- a/plugin.video.alfa/resources/language/English/strings.po +++ b/plugin.video.alfa/resources/language/English/strings.po @@ -1734,7 +1734,7 @@ msgid "[COLOR %s]Filter configuration for TV series...[/COLOR]" msgstr "" msgctxt "#60430" -msgid "FILTRO: Delete '%s'" +msgid "FILTER: Delete '%s'" msgstr "" msgctxt "#60431" From 306dc80a5ce5d60f3d0b7f0691b83549b43a6011 Mon Sep 17 00:00:00 2001 From: angedam <37449358+thedoctor66@users.noreply.github.com> Date: Fri, 7 Sep 2018 16:35:08 +0200 Subject: [PATCH 13/34] Added localized strings --- plugin.video.alfa/platformcode/platformtools.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/plugin.video.alfa/platformcode/platformtools.py b/plugin.video.alfa/platformcode/platformtools.py index f0713eaa..f566eed6 100644 --- a/plugin.video.alfa/platformcode/platformtools.py +++ b/plugin.video.alfa/platformcode/platformtools.py @@ -516,7 +516,7 @@ def set_context_commands(item, parent_item): from_action=item.action).tourl()))) # Añadir a Alfavoritos (Mis enlaces) if item.channel not in ["favorites", "videolibrary", 
"help", ""] and parent_item.channel != "favorites": - context_commands.append(('[COLOR blue]Guardar enlace[/COLOR]', "XBMC.RunPlugin(%s?%s)" % + context_commands.append(('[COLOR blue]%s[/COLOR]' % config.get_localized_string(70557), "XBMC.RunPlugin(%s?%s)" % (sys.argv[0], item.clone(channel="alfavorites", action="addFavourite", from_channel=item.channel, from_action=item.action).tourl()))) @@ -538,7 +538,7 @@ def set_context_commands(item, parent_item): mediatype = 'tv' else: mediatype = item.contentType - context_commands.append(("[COLOR yellow]Buscar Similares[/COLOR]", "XBMC.Container.Update (%s?%s)" % ( + context_commands.append(("[COLOR yellow]%s[/COLOR]" % config.get_localized_string(70561), "XBMC.Container.Update (%s?%s)" % ( sys.argv[0], item.clone(channel='search', action='discover_list', search_type='list', page='1', list_type='%s/%s/similar' % (mediatype,item.infoLabels['tmdb_id'])).tourl()))) From 8ff7249ed45ff972622243b571398c5908ce7e49 Mon Sep 17 00:00:00 2001 From: Intel1 Date: Fri, 7 Sep 2018 11:15:10 -0500 Subject: [PATCH 14/34] Varios 1 cineasiaenlinea: web no existe repelis: updated zentorrents: eliminado web no estable clipwatchings: fix test_video_url thevid: nuevo server vivio: nuevo server thevideome: pattern updated --- .../channels/cineasiaenlinea.json | 61 - plugin.video.alfa/channels/cineasiaenlinea.py | 177 -- plugin.video.alfa/channels/repelis.py | 34 +- plugin.video.alfa/channels/zentorrents.json | 24 - plugin.video.alfa/channels/zentorrents.py | 1419 ----------------- plugin.video.alfa/servers/clipwatching.py | 2 +- plugin.video.alfa/servers/thevid.json | 42 + plugin.video.alfa/servers/thevid.py | 30 + plugin.video.alfa/servers/thevideome.json | 2 +- plugin.video.alfa/servers/vevio.json | 42 + plugin.video.alfa/servers/vevio.py | 29 + 11 files changed, 168 insertions(+), 1694 deletions(-) delete mode 100755 plugin.video.alfa/channels/cineasiaenlinea.json delete mode 100755 plugin.video.alfa/channels/cineasiaenlinea.py delete mode 100755 plugin.video.alfa/channels/zentorrents.json delete mode 100755 plugin.video.alfa/channels/zentorrents.py create mode 100644 plugin.video.alfa/servers/thevid.json create mode 100644 plugin.video.alfa/servers/thevid.py create mode 100644 plugin.video.alfa/servers/vevio.json create mode 100644 plugin.video.alfa/servers/vevio.py diff --git a/plugin.video.alfa/channels/cineasiaenlinea.json b/plugin.video.alfa/channels/cineasiaenlinea.json deleted file mode 100755 index 68ea28e2..00000000 --- a/plugin.video.alfa/channels/cineasiaenlinea.json +++ /dev/null @@ -1,61 +0,0 @@ -{ - "id": "cineasiaenlinea", - "name": "CineAsiaEnLinea", - "active": true, - "adult": false, - "language": ["cast", "lat"], - "thumbnail": "http://i.imgur.com/5KOU8uy.png?3", - "banner": "cineasiaenlinea.png", - "categories": [ - "movie", - "vos" - ], - "settings": [ - { - "id": "modo_grafico", - "type": "bool", - "label": "Buscar información extra", - "default": true, - "enabled": true, - "visible": true - }, - { - "id": "include_in_global_search", - "type": "bool", - "label": "Incluir en búsqueda global", - "default": true, - "enabled": true, - "visible": true - }, - { - "id": "include_in_newest_peliculas", - "type": "bool", - "label": "Incluir en Novedades - Películas", - "default": true, - "enabled": true, - "visible": true - }, - { - "id": "include_in_newest_terror", - "type": "bool", - "label": "Incluir en Novedades - terror", - "default": true, - "enabled": true, - "visible": true - }, - { - "id": "perfil", - "type": "list", - "label": "Perfil de color", - 
"default": 3, - "enabled": true, - "visible": true, - "lvalues": [ - "Sin color", - "Perfil 3", - "Perfil 2", - "Perfil 1" - ] - } - ] -} \ No newline at end of file diff --git a/plugin.video.alfa/channels/cineasiaenlinea.py b/plugin.video.alfa/channels/cineasiaenlinea.py deleted file mode 100755 index 968b9095..00000000 --- a/plugin.video.alfa/channels/cineasiaenlinea.py +++ /dev/null @@ -1,177 +0,0 @@ -# -*- coding: utf-8 -*- - -import re - -from core import httptools -from core import scrapertools -from core import servertools -from core import tmdb -from core.item import Item -from platformcode import config, logger -from channelselector import get_thumb - -host = "http://www.cineasiaenlinea.com/" -__channel__='cineasiaenlinea' - -try: - __modo_grafico__ = config.get_setting('modo_grafico', __channel__) -except: - __modo_grafico__ = True - -# Configuracion del canal -__perfil__ = int(config.get_setting('perfil', 'cineasiaenlinea')) - -# Fijar perfil de color -perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'], - ['0xFFA5F6AF', '0xFF5FDA6D', '0xFF11811E'], - ['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']] - -if __perfil__ - 1 >= 0: - color1, color2, color3 = perfil[__perfil__ - 1] -else: - color1 = color2 = color3 = "" - - -def mainlist(item): - logger.info() - itemlist = [] - - itemlist.append(item.clone(action="peliculas", title="Novedades", url=host + "archivos/peliculas", - thumbnail=get_thumb('newest', auto=True), text_color=color1,)) - itemlist.append(item.clone(action="peliculas", title="Estrenos", url=host + "archivos/estrenos", - thumbnail=get_thumb('premieres', auto=True), text_color=color1)) - itemlist.append(item.clone(action="indices", title="Por géneros", url=host, - thumbnail=get_thumb('genres', auto=True), text_color=color1)) - itemlist.append(item.clone(action="indices", title="Por país", url=host, text_color=color1, - thumbnail=get_thumb('country', auto=True))) - itemlist.append(item.clone(action="indices", title="Por año", url=host, text_color=color1, - thumbnail=get_thumb('year', auto=True))) - - itemlist.append(item.clone(title="", action="")) - itemlist.append(item.clone(action="search", title="Buscar...", text_color=color3, - thumbnail=get_thumb('search', auto=True))) - itemlist.append(item.clone(action="configuracion", title="Configurar canal...", text_color="gold", folder=False)) - - return itemlist - - -def configuracion(item): - from platformcode import platformtools - ret = platformtools.show_channel_settings() - platformtools.itemlist_refresh() - return ret - - -def search(item, texto): - logger.info() - - item.url = "%s?s=%s" % (host, texto.replace(" ", "+")) - - try: - return peliculas(item) - # Se captura la excepción, para no interrumpir al buscador global si un canal falla - except: - import sys - for line in sys.exc_info(): - logger.error("%s" % line) - return [] - - -def newest(categoria): - logger.info() - itemlist = [] - item = Item() - try: - if categoria == 'peliculas': - item.url = host + "archivos/peliculas" - elif categoria == 'terror': - item.url = host + "genero/terror" - item.action = "peliculas" - itemlist = peliculas(item) - - if itemlist[-1].action == "peliculas": - itemlist.pop() - - # Se captura la excepción, para no interrumpir al canal novedades si un canal falla - except: - import sys - for line in sys.exc_info(): - logger.error("{0}".format(line)) - return [] - - return itemlist - - -def peliculas(item): - logger.info() - itemlist = [] - item.text_color = color2 - - # Descarga la página - data = httptools.downloadpage(item.url).data - 
- patron = '

([^<]+)<.*?src="([^"]+)".*?

([^<]+)<') - elif "año" in item.title: - bloque = scrapertools.find_single_match(data, '(?i)

Peliculas por Año

(.*?)') - matches = scrapertools.find_multiple_matches(bloque, '
([^<]+)<') - - for scrapedurl, scrapedtitle in matches: - if "año" in item.title: - scrapedurl = "%sfecha-estreno/%s" % (host, scrapedurl) - itemlist.append(Item(channel=item.channel, action="peliculas", title=scrapedtitle, url=scrapedurl, - thumbnail=item.thumbnail, text_color=color3)) - - return itemlist - - -def findvideos(item): - logger.info() - data = httptools.downloadpage(item.url).data - item.infoLabels["plot"] = scrapertools.find_single_match(data, '(?i)

SINOPSIS.*?

(.*?)

') - item.infoLabels["trailer"] = scrapertools.find_single_match(data, 'src="(http://www.youtube.com/embed/[^"]+)"') - - itemlist = servertools.find_video_items(item=item, data=data) - for it in itemlist: - it.thumbnail = item.thumbnail - it.text_color = color2 - - itemlist.append(item.clone(action="add_pelicula_to_library", title="Añadir película a la videoteca")) - if item.infoLabels["trailer"]: - folder = True - if config.is_xbmc(): - folder = False - itemlist.append(item.clone(channel="trailertools", action="buscartrailer", title="Ver Trailer", folder=folder, - contextual=not folder)) - - return itemlist diff --git a/plugin.video.alfa/channels/repelis.py b/plugin.video.alfa/channels/repelis.py index ca9118b4..2a6e756d 100644 --- a/plugin.video.alfa/channels/repelis.py +++ b/plugin.video.alfa/channels/repelis.py @@ -14,11 +14,12 @@ from core import scrapertools from core import servertools from core import tmdb from core.item import Item -from platformcode import config, logger +from lib import jsunpack +from platformcode import config, logger, platformtools idio = {'es-mx': 'LAT','es-es': 'ESP','en': 'VO'} -cali = {'poor': 'SD','low': 'SD','high': 'HD'} +cali = {'poor': 'SD','low': 'SD','medium': 'HD','high': 'HD'} list_language = idio.values() list_quality = ["SD","HD"] @@ -44,9 +45,17 @@ def mainlist(item): itemlist.append(Item(channel = item.channel, title = "Por género", action = "generos", url = host, extra = "Genero", thumbnail = get_thumb("genres", auto = True) )) itemlist.append(Item(channel = item.channel, title = "")) itemlist.append(Item(channel = item.channel, title = "Buscar", action = "search", url = host + "/search?term=", thumbnail = get_thumb("search", auto = True))) + itemlist.append(item.clone(title="Configurar canal...", text_color="gold", action="configuracion", folder=False)) autoplay.show_option(item.channel, itemlist) return itemlist + +def configuracion(item): + ret = platformtools.show_channel_settings() + platformtools.itemlist_refresh() + return ret + + def destacadas(item): logger.info() itemlist = [] @@ -178,12 +187,10 @@ def findvideos(item): dict = jsontools.load(bloque) urlx = httptools.downloadpage(host + dict[0]["url"]) #Para que pueda saltar el cloudflare, se tiene que descargar la página completa for datos in dict: - url1 = httptools.downloadpage(host + datos["url"], follow_redirects=False, only_headers=True).headers.get("location", "") - titulo = "Ver en: %s (" + cali[datos["quality"]] + ") (" + idio[datos["audio"]] + ")" - text_color = "white" - if "youtube" in url1: - titulo = "Ver trailer: %s" - text_color = "yellow" + url1 = datos["url"] + hostname = scrapertools.find_single_match(datos["hostname"].replace("www.",""), "(.*?)\.") + if hostname == "my": hostname = "mailru" + titulo = "Ver en: " + hostname.capitalize() + " (" + cali[datos["quality"]] + ") (" + idio[datos["audio"]] + ")" itemlist.append( item.clone(channel = item.channel, action = "play", @@ -192,7 +199,6 @@ def findvideos(item): title = titulo, url = url1 )) - itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize()) itemlist.sort(key=lambda it: (it.language, it.server)) tmdb.set_infoLabels(itemlist, __modo_grafico__) # Requerido para FilterTools @@ -217,5 +223,11 @@ def findvideos(item): def play(item): - item.thumbnail = item.contentThumbnail - return [item] + itemlist = [] + url1 = httptools.downloadpage(host + item.url, follow_redirects=False, only_headers=True).headers.get("location", "") + if "storage" in url1: + url1 = 
scrapertools.find_single_match(url1, "src=(.*mp4)").replace("%3A",":").replace("%2F","/") + itemlist.append(item.clone(url=url1)) + itemlist = servertools.get_servers_itemlist(itemlist) + itemlist[0].thumbnail = item.contentThumbnail + return itemlist diff --git a/plugin.video.alfa/channels/zentorrents.json b/plugin.video.alfa/channels/zentorrents.json deleted file mode 100755 index fa567d66..00000000 --- a/plugin.video.alfa/channels/zentorrents.json +++ /dev/null @@ -1,24 +0,0 @@ -{ - "id": "zentorrents", - "name": "Zentorrent", - "active": false, - "adult": false, - "language": ["cast"], - "banner": "zentorrents.png", - "thumbnail": "http://s6.postimg.cc/9zv90yjip/zentorrentlogo.jpg", - "categories": [ - "torrent", - "movie", - "tvshow" - ], - "settings": [ - { - "id": "include_in_global_search", - "type": "bool", - "label": "Incluir en busqueda global", - "default": true, - "enabled": true, - "visible": true - } - ] -} \ No newline at end of file diff --git a/plugin.video.alfa/channels/zentorrents.py b/plugin.video.alfa/channels/zentorrents.py deleted file mode 100755 index c633b14a..00000000 --- a/plugin.video.alfa/channels/zentorrents.py +++ /dev/null @@ -1,1419 +0,0 @@ -# -*- coding: utf-8 -*- - -import os -import re -import unicodedata -import urllib -import urlparse - -import xbmc -import xbmcgui -from core import httptools -from core import scrapertools -from core.item import Item -from core.scrapertools import decodeHtmlentities as dhe -from platformcode import config, logger - -ACTION_SHOW_FULLSCREEN = 36 -ACTION_GESTURE_SWIPE_LEFT = 511 -ACTION_SELECT_ITEM = 7 -ACTION_PREVIOUS_MENU = 10 -ACTION_MOVE_LEFT = 1 -ACTION_MOVE_RIGHT = 2 -ACTION_MOVE_DOWN = 4 -ACTION_MOVE_UP = 3 -OPTION_PANEL = 6 -OPTIONS_OK = 5 - -host = "http://www.zentorrents.com/" - -api_key = "2e2160006592024ba87ccdf78c28f49f" -api_fankey = "dffe90fba4d02c199ae7a9e71330c987" - - -def mainlist(item): - logger.info() - - itemlist = [] - itemlist.append( - Item(channel=item.channel, title="Películas", action="peliculas", url="http://www.zentorrents.com/peliculas", - thumbnail="http://www.navymwr.org/assets/movies/images/img-popcorn.png", - fanart="http://s18.postimg.cc/u9wyvm809/zen_peliculas.jpg")) - itemlist.append( - Item(channel=item.channel, title="MicroHD", action="peliculas", url="http://www.zentorrents.com/tags/microhd", - thumbnail="http://s11.postimg.cc/5s67cden7/microhdzt.jpg", - fanart="http://s9.postimg.cc/i5qhadsjj/zen_1080.jpg")) - itemlist.append( - Item(channel=item.channel, title="HDrip", action="peliculas", url="http://www.zentorrents.com/tags/hdrip", - thumbnail="http://s10.postimg.cc/pft9z4c5l/hdripzent.jpg", - fanart="http://s15.postimg.cc/5kqx9ln7v/zen_720.jpg")) - itemlist.append( - Item(channel=item.channel, title="Series", action="peliculas", url="http://www.zentorrents.com/series", - thumbnail="http://imgur.com/HbM2dt5.png", fanart="http://s10.postimg.cc/t0xz1t661/zen_series.jpg")) - itemlist.append(Item(channel=item.channel, title="Buscar...", action="search", url="", - thumbnail="http://newmedia-art.pl/product_picture/full_size/bed9a8589ad98470258899475cf56cca.jpg", - fanart="http://s23.postimg.cc/jdutugvrf/zen_buscar.jpg")) - - return itemlist - - -def search(item, texto): - logger.info() - - texto = texto.replace(" ", "+") - item.url = "http://www.zentorrents.com//buscar?searchword=%s&ordering=&searchphrase=all&limit=\d+" % (texto) - # item.url = item.url % texto - # itemlist.extend(buscador(item, texto.replace("+", " "))) - item.extra = str(texto) - - try: - return buscador(item) - 
except: - import sys - for line in sys.exc_info(): - logger.error("%s" % line) - return [] - - -def buscador(item): - logger.info() - itemlist = [] - # Descarga la página - data = httptools.downloadpage(item.url).data - data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) - pepe = item.extra - pepe = pepe.replace("+", " ") - if "highlight" in data: - searchword = scrapertools.get_match(data, '([^<]+)') - data = re.sub(r'[^<]+', searchword, data) - - patron = '
' # Empezamos el patrón por aquí para que no se cuele nada raro - patron += '|

|&|amp;", "", data) - - #

En Un Patio De Paris [DVD Rip]
21/01/2015
[DVD Rip][AC3 5.1 Español Castellano][2014] Antoine es un músico de 40 años que de pronto decide abandonar su carrera.
- - patron = '
En Un Patio De Paris [DVD Rip]
21/01/2015
[DVD Rip][AC3 5.1 Español Castellano][2014] Antoine es un músico de 40 años que de pronto decide abandonar su carrera.
- - patron = '
0: - scrapedurl = urlparse.urljoin(item.url, matches[0]) - title = "[COLOR chocolate]siguiente>>[/COLOR]" - itemlist.append(Item(channel=item.channel, action="peliculas", title=title, url=scrapedurl, - thumbnail="http://s6.postimg.cc/9iwpso8k1/ztarrow2.png", - fanart="http://s6.postimg.cc/4j8vdzy6p/zenwallbasic.jpg", folder=True)) - - return itemlist - - -def fanart(item): - logger.info() - itemlist = [] - url = item.url - data = httptools.downloadpage(url).data - data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) - title_fan = item.extra - title = re.sub(r'Serie Completa|3D|Temporada.*?Completa', '', title_fan) - title = title.replace(' ', '%20') - title = ''.join((c for c in unicodedata.normalize('NFD', unicode(title.decode('utf-8'))) if - unicodedata.category(c) != 'Mn')).encode("ascii", "ignore") - item.title = re.sub(r'\(.*?\)|\[.*?\]', '', item.title) - item.title = '[COLOR floralwhite]' + item.title + '[/COLOR]' - try: - sinopsis = scrapertools.get_match(data, 'onload="imgLoaded.*?

(.*?)

') - sinopsis = re.sub(r"<\p>

", "", sinopsis) - except: - sinopsis = "" - if not "series" in item.url: - - # filmafinity - title = re.sub(r"cerdas", "cuerdas", title) - url_bing = "http://www.bing.com/search?q=%s+site:filmaffinity.com" % (title.replace(' ', '+')) - data = browser(url_bing) - data = re.sub(r"\n|\r|\t|\s{2}| |", "", data) - - try: - if "myaddrproxy.php" in data: - subdata_bing = scrapertools.get_match(data, - 'li class="b_algo">

(

(Año.*?>(.*?)') - except: - year = "" - if sinopsis == " ": - try: - sinopsis = scrapertools.find_single_match(data, '
(.*?)
') - sinopsis = sinopsis.replace("

", "\n") - sinopsis = re.sub(r"\(FILMAFFINITY\)
", "", sinopsis) - except: - pass - try: - rating_filma = scrapertools.get_match(data, 'itemprop="ratingValue" content="(.*?)">') - except: - rating_filma = "Sin puntuacion" - - critica = "" - patron = '
(.*?)
.*?itemprop="author">(.*?)\s*

(

(Año.*?>(.*?)') - except: - year = "" - if sinopsis == " ": - try: - sinopsis = scrapertools.find_single_match(data, '
(.*?)
') - sinopsis = sinopsis.replace("

", "\n") - sinopsis = re.sub(r"\(FILMAFFINITY\)
", "", sinopsis) - except: - pass - try: - rating_filma = scrapertools.get_match(data, 'itemprop="ratingValue" content="(.*?)">') - except: - rating_filma = "Sin puntuacion" - print "lobeznito" - print rating_filma - - critica = "" - patron = '
(.*?)
.*?itemprop="author">(.*?)\s*(.*?)h="ID.*?.*?TV Series') - except: - pass - - try: - imdb_id = scrapertools.get_match(subdata_imdb, '
(.*?)<') - except: - ratintg_tvdb = "" - try: - rating = scrapertools.get_match(data, '"vote_average":(.*?),') - except: - - rating = "Sin puntuación" - - id_scraper = id_tmdb + "|" + "serie" + "|" + rating_filma + "|" + critica + "|" + rating + "|" + status # +"|"+emision - posterdb = scrapertools.find_single_match(data_tmdb, '"poster_path":(.*?)","popularity"') - - if "null" in posterdb: - posterdb = item.thumbnail - else: - posterdb = re.sub(r'\\|"', '', posterdb) - posterdb = "https://image.tmdb.org/t/p/original" + posterdb - - if "null" in fan: - fanart = item.fanart - else: - fanart = "https://image.tmdb.org/t/p/original" + fan - - item.extra = fanart - - url = "http://api.themoviedb.org/3/tv/" + id_tmdb + "/images?api_key=" + api_key + "" - data = httptools.downloadpage(url).data - data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) - - patron = '"backdrops".*?"file_path":".*?",.*?"file_path":"(.*?)",.*?"file_path":"(.*?)",.*?"file_path":"(.*?)"' - matches = re.compile(patron, re.DOTALL).findall(data) - - if len(matches) == 0: - patron = '"backdrops".*?"file_path":"(.*?)",.*?"file_path":"(.*?)",.*?"file_path":"(.*?)"' - matches = re.compile(patron, re.DOTALL).findall(data) - if len(matches) == 0: - fanart_info = item.extra - fanart_3 = "" - fanart_2 = item.extra - for fanart_info, fanart_3, fanart_2 in matches: - if fanart == item.fanart: - fanart = "https://image.tmdb.org/t/p/original" + fanart_info - fanart_info = "https://image.tmdb.org/t/p/original" + fanart_info - fanart_3 = "https://image.tmdb.org/t/p/original" + fanart_3 - fanart_2 = "https://image.tmdb.org/t/p/original" + fanart_2 - url = "http://webservice.fanart.tv/v3/tv/" + id + "?api_key=" + api_fankey - data = httptools.downloadpage(url).data - data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) - patron = '"clearlogo":.*?"url": "([^"]+)"' - matches = re.compile(patron, re.DOTALL).findall(data) - if '"tvbanner"' in data: - tvbanner = scrapertools.get_match(data, '"tvbanner":.*?"url": "([^"]+)"') - tfv = tvbanner - elif '"tvposter"' in data: - tvposter = scrapertools.get_match(data, '"tvposter":.*?"url": "([^"]+)"') - tfv = tvposter - else: - tfv = posterdb - if '"tvthumb"' in data: - tvthumb = scrapertools.get_match(data, '"tvthumb":.*?"url": "([^"]+)"') - if '"hdtvlogo"' in data: - hdtvlogo = scrapertools.get_match(data, '"hdtvlogo":.*?"url": "([^"]+)"') - if '"hdclearart"' in data: - hdtvclear = scrapertools.get_match(data, '"hdclearart":.*?"url": "([^"]+)"') - if len(matches) == 0: - if '"hdtvlogo"' in data: - if "showbackground" in data: - - if '"hdclearart"' in data: - thumbnail = hdtvlogo - extra = hdtvclear + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - else: - thumbnail = hdtvlogo - extra = thumbnail + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - itemlist.append(Item(channel=item.channel, title=item.title, action="findvideos", url=item.url, - server="torrent", thumbnail=thumbnail, fanart=item.extra, - category=category, extra=extra, show=show, folder=True)) - - - else: - if '"hdclearart"' in data: - thumbnail = hdtvlogo - extra = hdtvclear + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - else: - thumbnail = hdtvlogo - extra = thumbnail + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - itemlist.append(Item(channel=item.channel, title=item.title, action="findvideos", 
url=item.url, - server="torrent", thumbnail=thumbnail, fanart=item.extra, extra=extra, - show=show, category=category, folder=True)) - else: - extra = "" + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - itemlist.append(Item(channel=item.channel, title=item.title, action="findvideos", url=item.url, - server="torrent", thumbnail=posterdb, fanart=fanart, extra=extra, show=show, - category=category, folder=True)) - - for logo in matches: - if '"hdtvlogo"' in data: - thumbnail = hdtvlogo - elif not '"hdtvlogo"' in data: - if '"clearlogo"' in data: - thumbnail = logo - else: - thumbnail = item.thumbnail - if '"clearart"' in data: - clear = scrapertools.get_match(data, '"clearart":.*?"url": "([^"]+)"') - if "showbackground" in data: - - extra = clear + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - itemlist.append(Item(channel=item.channel, title=item.title, action="findvideos", url=item.url, - server="torrent", thumbnail=thumbnail, fanart=item.extra, extra=extra, - show=show, category=category, folder=True)) - else: - extra = clear + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - itemlist.append(Item(channel=item.channel, title=item.title, action="findvideos", url=item.url, - server="torrent", thumbnail=thumbnail, fanart=item.extra, extra=extra, - show=show, category=category, folder=True)) - - if "showbackground" in data: - - if '"clearart"' in data: - clear = scrapertools.get_match(data, '"clearart":.*?"url": "([^"]+)"') - extra = clear + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - else: - extra = logo + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - itemlist.append(Item(channel=item.channel, title=item.title, action="findvideos", url=item.url, - server="torrent", thumbnail=thumbnail, fanart=item.extra, extra=extra, - show=show, category=category, folder=True)) - - if not '"clearart"' in data and not '"showbackground"' in data: - if '"hdclearart"' in data: - extra = hdtvclear + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - else: - extra = thumbnail + "|" + year - show = fanart_2 + "|" + fanart_3 + "|" + sinopsis + "|" + title_fan + "|" + tfv + "|" + id_tmdb - itemlist.append(Item(channel=item.channel, title=item.title, action="findvideos", url=item.url, - server="torrent", thumbnail=thumbnail, fanart=item.extra, extra=extra, - show=show, category=category, folder=True)) - - title_info = "Info" - title_info = "[COLOR skyblue]" + title_info + "[/COLOR]" - if not "series" in item.url: - thumbnail = posterdb - - if "series" in item.url: - - if '"tvposter"' in data: - thumbnail = scrapertools.get_match(data, '"tvposter":.*?"url": "([^"]+)"') - else: - thumbnail = posterdb - - if "tvbanner" in data: - category = tvbanner - else: - category = show - if '"tvthumb"' in data: - plot = item.plot + "|" + tvthumb - else: - plot = item.plot + "|" + item.thumbnail - if '"tvbanner"' in data: - plot = plot + "|" + tvbanner - elif '"tvthumb"' in data: - plot = plot + "|" + tvthumb - else: - plot = plot + "|" + item.thumbnail - else: - if '"moviethumb"' in data: - plot = item.plot + "|" + thumb - else: - plot = item.plot + "|" + posterdb - - if '"moviebanner"' in data: - plot = plot + "|" + banner - else: - if '"hdmovieclearart"' 
in data: - plot = plot + "|" + clear - - else: - plot = plot + "|" + posterdb - id = id_scraper - - extra = extra + "|" + id + "|" + title.encode('utf8') - - itemlist.append( - Item(channel=item.channel, action="info", title=title_info, plo=plot, url=item.url, thumbnail=thumbnail, - fanart=fanart_info, extra=extra, category=category, show=show, folder=False)) - - return itemlist - - -def findvideos(item): - logger.info() - - if not "serie" in item.url: - thumbnail = item.category - else: - thumbnail = item.show.split("|")[4] - itemlist = [] - - # Descarga la página - data = httptools.downloadpage(item.url).data - data = re.sub(r"\n|\r|\t|\s{2}| |&|amp;", "", data) - - patron = '

(.*?)

.*?src="([^"]+)".*?

= 5 and int(check_rat_tmdba) < 8: - rating = "[COLOR springgreen][B]" + rating_tmdba_tvdb + "[/B][/COLOR]" - elif int(check_rat_tmdba) >= 8 or rating_tmdba_tvdb == 10: - rating = "[COLOR yellow][B]" + rating_tmdba_tvdb + "[/B][/COLOR]" - else: - rating = "[COLOR crimson][B]" + rating_tmdba_tvdb + "[/B][/COLOR]" - print "lolaymaue" - except: - rating = "[COLOR crimson][B]" + rating_tmdba_tvdb + "[/B][/COLOR]" - if "10." in rating: - rating = re.sub(r'10\.\d+', '10', rating) - try: - check_rat_filma = scrapertools.get_match(rating_filma, '(\d)') - print "paco" - print check_rat_filma - if int(check_rat_filma) >= 5 and int(check_rat_filma) < 8: - print "dios" - print check_rat_filma - rating_filma = "[COLOR springgreen][B]" + rating_filma + "[/B][/COLOR]" - elif int(check_rat_filma) >= 8: - - print check_rat_filma - rating_filma = "[COLOR yellow][B]" + rating_filma + "[/B][/COLOR]" - else: - rating_filma = "[COLOR crimson][B]" + rating_filma + "[/B][/COLOR]" - print "rojo??" - print check_rat_filma - except: - rating_filma = "[COLOR crimson][B]" + rating_filma + "[/B][/COLOR]" - - try: - if not "serie" in item.url: - url_plot = "http://api.themoviedb.org/3/movie/" + item.extra.split("|")[ - 1] + "?api_key=" + api_key + "&append_to_response=credits&language=es" - data_plot = httptools.downloadpage(url_plot).data - plot, tagline = scrapertools.find_single_match(data_plot, '"overview":"(.*?)",.*?"tagline":(".*?")') - if plot == "": - plot = item.show.split("|")[2] - - plot = "[COLOR moccasin][B]" + plot + "[/B][/COLOR]" - plot = re.sub(r"\\", "", plot) - - else: - plot = item.show.split("|")[2] - plot = "[COLOR moccasin][B]" + plot + "[/B][/COLOR]" - plot = re.sub(r"\\|

|

", "", plot) - - if item.extra.split("|")[7] != "": - tagline = item.extra.split("|")[7] - # tagline= re.sub(r',','.',tagline) - else: - tagline = "" - except: - title = "[COLOR red][B]LO SENTIMOS...[/B][/COLOR]" - plot = "Esta pelicula no tiene informacion..." - plot = plot.replace(plot, "[COLOR yellow][B]" + plot + "[/B][/COLOR]") - photo = "http://s6.postimg.cc/nm3gk1xox/noinfosup2.png" - foto = "http://s6.postimg.cc/ub7pb76c1/noinfo.png" - info = "" - - if "serie" in item.url: - check2 = "serie" - icon = "http://s6.postimg.cc/hzcjag975/tvdb.png" - foto = item.show.split("|")[1] - if item.extra.split("|")[5] != "": - critica = item.extra.split("|")[5] - else: - critica = "Esta serie no tiene críticas..." - - photo = item.extra.split("|")[0].replace(" ", "%20") - try: - tagline = "[COLOR aquamarine][B]" + tagline + "[/B][/COLOR]" - except: - tagline = "" - - else: - critica = item.extra.split("|")[5] - if "%20" in critica: - critica = "No hay críticas" - icon = "http://imgur.com/SenkyxF.png" - - photo = item.extra.split("|")[0].replace(" ", "%20") - foto = item.show.split("|")[1] - - try: - if tagline == "\"\"": - tagline = " " - except: - tagline = " " - tagline = "[COLOR aquamarine][B]" + tagline + "[/B][/COLOR]" - check2 = "pelicula" - # Tambien te puede interesar - peliculas = [] - if "serie" in item.url: - - url_tpi = "http://api.themoviedb.org/3/tv/" + item.show.split("|")[ - 5] + "/recommendations?api_key=" + api_key + "&language=es" - data_tpi = httptools.downloadpage(url_tpi).data - tpi = scrapertools.find_multiple_matches(data_tpi, - 'id":(.*?),.*?"original_name":"(.*?)",.*?"poster_path":(.*?),"popularity"') - - else: - url_tpi = "http://api.themoviedb.org/3/movie/" + item.extra.split("|")[ - 1] + "/recommendations?api_key=" + api_key + "&language=es" - data_tpi = httptools.downloadpage(url_tpi).data - tpi = scrapertools.find_multiple_matches(data_tpi, - 'id":(.*?),.*?"original_title":"(.*?)",.*?"poster_path":(.*?),"popularity"') - - for idp, peli, thumb in tpi: - - thumb = re.sub(r'"|}', '', thumb) - if "null" in thumb: - thumb = "http://s6.postimg.cc/tw1vhymj5/noposter.png" - else: - thumb = "https://image.tmdb.org/t/p/original" + thumb - peliculas.append([idp, peli, thumb]) - - check2 = check2.replace("pelicula", "movie").replace("serie", "tvshow") - infoLabels = {'title': title, 'plot': plot, 'thumbnail': photo, 'fanart': foto, 'tagline': tagline, - 'rating': rating} - item_info = item.clone(info=infoLabels, icon=icon, extra=id, rating=rating, rating_filma=rating_filma, - critica=critica, contentType=check2, thumb_busqueda="http://imgur.com/OZ1Vg3D.png") - from channels import infoplus - infoplus.start(item_info, peliculas) - - -def info_capitulos(item): - logger.info() - - url = "https://api.themoviedb.org/3/tv/" + item.show.split("|")[5] + "/season/" + item.extra.split("|")[ - 2] + "/episode/" + item.extra.split("|")[3] + "?api_key=" + api_key + "&language=es" - - if "/0" in url: - url = url.replace("/0", "/") - - data = httptools.downloadpage(url).data - data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) - - patron = '],"name":"(.*?)","overview":"(.*?)".*?"still_path":(.*?),"vote_average":(\d+\.\d).*?,"' - matches = re.compile(patron, re.DOTALL).findall(data) - - if len(matches) == 0: - - url = "http://thetvdb.com/api/1D62F2F90030C444/series/" + item.category + "/default/" + item.extra.split("|")[ - 2] + "/" + item.extra.split("|")[3] + "/es.xml" - if "/0" in url: - url = url.replace("/0", "/") - data = httptools.downloadpage(url).data - data = re.sub(r"\n|\r|\t|\s{2}| ", "", 
data) - - patron = '.*?([^<]+).*?(.*?).*?(.*?)' - - matches = re.compile(patron, re.DOTALL).findall(data) - - if len(matches) == 0: - - title = "[COLOR red][B]LO SENTIMOS...[/B][/COLOR]" - plot = "Este capitulo no tiene informacion..." - plot = "[COLOR yellow][B]" + plot + "[/B][/COLOR]" - image = "http://s6.postimg.cc/ub7pb76c1/noinfo.png" - foto = "http://s6.postimg.cc/nm3gk1xox/noinfosup2.png" - rating = "" - - - else: - - for name_epi, info, rating in matches: - if "episodes" in data: - foto = scrapertools.get_match(data, '.*?(.*?)') - fanart = "http://thetvdb.com/banners/" + foto - else: - fanart = item.extra.split("|")[1] - plot = info - plot = "[COLOR peachpuff][B]" + plot + "[/B][/COLOR]" - title = name_epi.upper() - title = "[COLOR bisque][B]" + title + "[/B][/COLOR]" - image = fanart - foto = item.extra.split("|")[0] - if not ".png" in foto: - foto = "http://imgur.com/IqYaDrC.png" - foto = re.sub(r'\(.*?\)|" "|" "', '', foto) - foto = re.sub(r' ', '', foto) - try: - - check_rating = scrapertools.get_match(rating, '(\d+).') - - if int(check_rating) >= 5 and int(check_rating) < 8: - rating = "Puntuación " + "[COLOR springgreen][B]" + rating + "[/B][/COLOR]" - elif int(check_rating) >= 8 and int(check_rating) < 10: - rating = "Puntuación " + "[COLOR yellow][B]" + rating + "[/B][/COLOR]" - elif int(check_rating) == 10: - rating = "Puntuación " + "[COLOR orangered][B]" + rating + "[/B][/COLOR]" - else: - rating = "Puntuación " + "[COLOR crimson][B]" + rating + "[/B][/COLOR]" - - except: - rating = "Puntuación " + "[COLOR crimson][B]" + rating + "[/B][/COLOR]" - if "10." in rating: - rating = re.sub(r'10\.\d+', '10', rating) - else: - for name_epi, info, fanart, rating in matches: - if info == "" or info == "\\": - info = "Sin informacion del capítulo aún..." - plot = info - plot = re.sub(r'/n', '', plot) - plot = "[COLOR peachpuff][B]" + plot + "[/B][/COLOR]" - title = name_epi.upper() - title = "[COLOR bisque][B]" + title + "[/B][/COLOR]" - image = fanart - image = re.sub(r'"|}', '', image) - if "null" in image: - image = "http://imgur.com/ZiEAVOD.png" - else: - image = "https://image.tmdb.org/t/p/original" + image - foto = item.extra.split("|")[0] - if not ".png" in foto: - foto = "http://imgur.com/IqYaDrC.png" - foto = re.sub(r'\(.*?\)|" "|" "', '', foto) - foto = re.sub(r' ', '', foto) - try: - - check_rating = scrapertools.get_match(rating, '(\d+).') - - if int(check_rating) >= 5 and int(check_rating) < 8: - rating = "Puntuación " + "[COLOR springgreen][B]" + rating + "[/B][/COLOR]" - elif int(check_rating) >= 8 and int(check_rating) < 10: - rating = "Puntuación " + "[COLOR yellow][B]" + rating + "[/B][/COLOR]" - elif int(check_rating) == 10: - rating = "Puntuación " + "[COLOR orangered][B]" + rating + "[/B][/COLOR]" - else: - rating = "Puntuación " + "[COLOR crimson][B]" + rating + "[/B][/COLOR]" - - except: - rating = "Puntuación " + "[COLOR crimson][B]" + rating + "[/B][/COLOR]" - if "10." 
in rating: - rating = re.sub(r'10\.\d+', '10', rating) - ventana = TextBox2(title=title, plot=plot, thumbnail=image, fanart=foto, rating=rating) - ventana.doModal() - - -class TextBox2(xbmcgui.WindowDialog): - """ Create a skinned textbox window """ - - def __init__(self, *args, **kwargs): - self.getTitle = kwargs.get('title') - self.getPlot = kwargs.get('plot') - self.getThumbnail = kwargs.get('thumbnail') - self.getFanart = kwargs.get('fanart') - self.getRating = kwargs.get('rating') - - self.background = xbmcgui.ControlImage(70, 20, 1150, 630, 'http://imgur.com/133aoMw.jpg') - self.title = xbmcgui.ControlTextBox(120, 60, 430, 50) - self.rating = xbmcgui.ControlTextBox(145, 112, 1030, 45) - self.plot = xbmcgui.ControlTextBox(120, 150, 1056, 100) - self.thumbnail = xbmcgui.ControlImage(120, 300, 1056, 300, self.getThumbnail) - self.fanart = xbmcgui.ControlImage(780, 43, 390, 100, self.getFanart) - - self.addControl(self.background) - self.background.setAnimations( - [('conditional', 'effect=slide start=1000% end=0% time=1500 condition=true tween=bounce',), - ('WindowClose', 'effect=slide delay=800 start=0% end=1000% time=800 condition=true',)]) - self.addControl(self.thumbnail) - self.thumbnail.setAnimations([('conditional', - 'effect=zoom start=0% end=100% delay=2700 time=1500 condition=true tween=elastic easing=inout',), - ('WindowClose', 'effect=slide end=0,700% time=300 condition=true',)]) - self.addControl(self.plot) - self.plot.setAnimations( - [('conditional', 'effect=zoom delay=2000 center=auto start=0 end=100 time=800 condition=true ',), ( - 'conditional', - 'effect=rotate delay=2000 center=auto aceleration=6000 start=0% end=360% time=800 condition=true',), - ('WindowClose', 'effect=zoom center=auto start=100% end=-0% time=600 condition=true',)]) - self.addControl(self.fanart) - self.fanart.setAnimations( - [('WindowOpen', 'effect=slide start=0,-700 delay=1000 time=2500 tween=bounce condition=true',), ( - 'conditional', - 'effect=rotate center=auto start=0% end=360% delay=3000 time=2500 tween=bounce condition=true',), - ('WindowClose', 'effect=slide end=0,-700% time=1000 condition=true',)]) - self.addControl(self.title) - self.title.setText(self.getTitle) - self.title.setAnimations( - [('conditional', 'effect=slide start=-1500% end=0% delay=1000 time=2000 condition=true tween=elastic',), - ('WindowClose', 'effect=slide start=0% end=-1500% time=800 condition=true',)]) - self.addControl(self.rating) - self.rating.setText(self.getRating) - self.rating.setAnimations( - [('conditional', 'effect=fade start=0% end=100% delay=3000 time=1500 condition=true',), - ('WindowClose', 'effect=slide end=0,-700% time=600 condition=true',)]) - xbmc.sleep(200) - - try: - self.plot.autoScroll(7000, 6000, 30000) - except: - - xbmc.executebuiltin( - 'Notification([COLOR red][B]Actualiza Kodi a su última versión[/B][/COLOR], [COLOR skyblue]para mejor info[/COLOR],8000,"https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/kodi-icon.png")') - self.plot.setText(self.getPlot) - - def get(self): - self.show() - - def onAction(self, action): - if action == ACTION_PREVIOUS_MENU or action.getId() == ACTION_GESTURE_SWIPE_LEFT or action == 110 or action == 92: - self.close() - - -def test(): - return True - - -def browser(url): - import mechanize - - # Utilizamos Browser mechanize para saltar problemas con la busqueda en bing - br = mechanize.Browser() - # Browser options - br.set_handle_equiv(False) - br.set_handle_gzip(True) - br.set_handle_redirect(True) - 
br.set_handle_referer(False) - br.set_handle_robots(False) - # Follows refresh 0 but not hangs on refresh > 0 - br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1) - # Want debugging messages? - # br.set_debug_http(True) - # br.set_debug_redirects(True) - # br.set_debug_responses(True) - - # User-Agent (this is cheating, ok?) - br.addheaders = [('User-agent', - 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/600.7.12 (KHTML, like Gecko) Version/7.1.7 Safari/537.85.16')] - # br.addheaders =[('Cookie','SRCHD=AF=QBRE; domain=.bing.com; expires=25 de febrero de 2018 13:00:28 GMT+1; MUIDB=3B942052D204686335322894D3086911; domain=www.bing.com;expires=24 de febrero de 2018 13:00:28 GMT+1')] - # Open some site, let's pick a random one, the first that pops in mind - r = br.open(url) - response = r.read() - print response - # if not ".ftrH,.ftrHd,.ftrD>" in response: - if "img,divreturn" in response: - r = br.open("http://ssl-proxy.my-addr.org/myaddrproxy.php/" + url) - response = r.read() - - return response - - -def tokenize(text, match=re.compile("([idel])|(\d+):|(-?\d+)").match): - i = 0 - while i < len(text): - m = match(text, i) - s = m.group(m.lastindex) - i = m.end() - if m.lastindex == 2: - yield "s" - yield text[i:i + int(s)] - i = i + int(s) - else: - yield s - - -def decode_item(next, token): - if token == "i": - # integer: "i" value "e" - data = int(next()) - if next() != "e": - raise ValueError - elif token == "s": - # string: "s" value (virtual tokens) - data = next() - elif token == "l" or token == "d": - # container: "l" (or "d") values "e" - data = [] - tok = next() - while tok != "e": - data.append(decode_item(next, tok)) - tok = next() - if token == "d": - data = dict(zip(data[0::2], data[1::2])) - else: - raise ValueError - return data - - -def decode(text): - try: - src = tokenize(text) - data = decode_item(src.next, src.next()) - for token in src: # look for more tokens - raise SyntaxError("trailing junk") - except (AttributeError, ValueError, StopIteration): - try: - data = data - except: - data = src - - return data - - -def convert_size(size): - import math - if (size == 0): - return '0B' - size_name = ("B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB") - i = int(math.floor(math.log(size, 1024))) - p = math.pow(1024, i) - s = round(size / p, 2) - return '%s %s' % (s, size_name[i]) diff --git a/plugin.video.alfa/servers/clipwatching.py b/plugin.video.alfa/servers/clipwatching.py index 839c7290..2362fc4b 100644 --- a/plugin.video.alfa/servers/clipwatching.py +++ b/plugin.video.alfa/servers/clipwatching.py @@ -9,7 +9,7 @@ from platformcode import logger, config def test_video_exists(page_url): logger.info("(page_url='%s')" % page_url) data = httptools.downloadpage(page_url).data - if "File Not Found" in data: + if "File Not Found" in data or "File was deleted" in data: return False, config.get_localized_string(70292) % "ClipWatching" return True, "" diff --git a/plugin.video.alfa/servers/thevid.json b/plugin.video.alfa/servers/thevid.json new file mode 100644 index 00000000..e90af13e --- /dev/null +++ b/plugin.video.alfa/servers/thevid.json @@ -0,0 +1,42 @@ +{ + "active": true, + "find_videos": { + "ignore_urls": [], + "patterns": [ + { + "pattern": "(thevid.net/e/\\w+)", + "url": "https://\\1" + } + ] + }, + "free": true, + "id": "thevid", + "name": "thevid", + "settings": [ + { + "default": false, + "enabled": true, + "id": "black_list", + "label": "@60654", + "type": "bool", + "visible": true + }, + { + "default": 0, + "enabled": true, + 
"id": "favorites_servers_list", + "label": "@60655", + "lvalues": [ + "No", + "1", + "2", + "3", + "4", + "5" + ], + "type": "list", + "visible": false + } + ], + "thumbnail": "" +} diff --git a/plugin.video.alfa/servers/thevid.py b/plugin.video.alfa/servers/thevid.py new file mode 100644 index 00000000..8d9320bc --- /dev/null +++ b/plugin.video.alfa/servers/thevid.py @@ -0,0 +1,30 @@ +# -*- coding: utf-8 -*- + +from core import httptools +from core import scrapertools +from lib import jsunpack +from platformcode import logger, config + + +def test_video_exists(page_url): + logger.info("(page_url='%s')" % page_url) + data = httptools.downloadpage(page_url).data + if "Video not found..." in data: + return False, config.get_localized_string(70292) % "Thevid" + return True, "" + + +def get_video_url(page_url, user="", password="", video_password=""): + logger.info("(page_url='%s')" % page_url) + data = httptools.downloadpage(page_url).data + packed = scrapertools.find_multiple_matches(data, "(?s)") + for pack in packed: + unpacked = jsunpack.unpack(pack) + if "file" in unpacked: + videos = scrapertools.find_multiple_matches(unpacked, 'file.="(//[^"]+)') + video_urls = [] + for video in videos: + video = "https:" + video + video_urls.append(["mp4 [Thevid]", video]) + logger.info("Url: %s" % videos) + return video_urls diff --git a/plugin.video.alfa/servers/thevideome.json b/plugin.video.alfa/servers/thevideome.json index 568f0c90..4fb0f381 100755 --- a/plugin.video.alfa/servers/thevideome.json +++ b/plugin.video.alfa/servers/thevideome.json @@ -4,7 +4,7 @@ "ignore_urls": [], "patterns": [ { - "pattern": "(?:thevideo.me|tvad.me|thevid.net|thevideo.ch|thevideo.us)/(?:embed-|)([A-z0-9]+)", + "pattern": "(?:thevideo.me|tvad.me|thevideo.ch|thevideo.us)/(?:embed-|)([A-z0-9]+)", "url": "https://thevideo.me/embed-\\1.html" } ] diff --git a/plugin.video.alfa/servers/vevio.json b/plugin.video.alfa/servers/vevio.json new file mode 100644 index 00000000..d91e95bf --- /dev/null +++ b/plugin.video.alfa/servers/vevio.json @@ -0,0 +1,42 @@ +{ + "active": true, + "find_videos": { + "ignore_urls": [], + "patterns": [ + { + "pattern": "(vev.io/embed/[A-z0-9]+)", + "url": "https://\\1" + } + ] + }, + "free": true, + "id": "vevio", + "name": "vevio", + "settings": [ + { + "default": false, + "enabled": true, + "id": "black_list", + "label": "@60654", + "type": "bool", + "visible": true + }, + { + "default": 0, + "enabled": true, + "id": "favorites_servers_list", + "label": "@60655", + "lvalues": [ + "No", + "1", + "2", + "3", + "4", + "5" + ], + "type": "list", + "visible": false + } + ], + "thumbnail": "https://s8.postimg.cc/opp2c3p6d/vevio1.png" +} diff --git a/plugin.video.alfa/servers/vevio.py b/plugin.video.alfa/servers/vevio.py new file mode 100644 index 00000000..3f74f993 --- /dev/null +++ b/plugin.video.alfa/servers/vevio.py @@ -0,0 +1,29 @@ +# -*- coding: utf-8 -*- + +import urllib +from core import httptools +from core import scrapertools +from platformcode import logger, config + + +def test_video_exists(page_url): + logger.info("(page_url='%s')" % page_url) + data = httptools.downloadpage(page_url).data + if "File was deleted" in data or "Page Cannot Be Found" in data or "Video not found" in data: + return False, "[vevio] El archivo ha sido eliminado o no existe" + return True, "" + + +def get_video_url(page_url, premium=False, user="", password="", video_password=""): + logger.info("url=" + page_url) + video_urls = [] + post = {} + post = urllib.urlencode(post) + url = page_url + data = 
httptools.downloadpage("https://vev.io/api/serve/video/" + scrapertools.find_single_match(url, "embed/([A-z0-9]+)"), post=post).data + bloque = scrapertools.find_single_match(data, 'qualities":\{(.*?)\}') + matches = scrapertools.find_multiple_matches(bloque, '"([^"]+)":"([^"]+)') + for res, media_url in matches: + video_urls.append( + [scrapertools.get_filename_from_url(media_url)[-4:] + " (" + res + ") [vevio.me]", media_url]) + return video_urls From 2fa7e823771bf9b1ce00438ebf3fa32255407104 Mon Sep 17 00:00:00 2001 From: Intel1 <luisriverap@hotmail.com> Date: Fri, 7 Sep 2018 11:15:51 -0500 Subject: [PATCH 15/34] Removed mechanize No longer used by the channels --- plugin.video.alfa/lib/mechanize/__init__.py | 211 -- plugin.video.alfa/lib/mechanize/_auth.py | 68 - .../lib/mechanize/_beautifulsoup.py | 1077 ------ .../lib/mechanize/_clientcookie.py | 1725 --------- plugin.video.alfa/lib/mechanize/_debug.py | 28 - .../lib/mechanize/_firefox3cookiejar.py | 248 -- plugin.video.alfa/lib/mechanize/_form.py | 3280 ----------------- plugin.video.alfa/lib/mechanize/_gzip.py | 105 - .../lib/mechanize/_headersutil.py | 241 -- plugin.video.alfa/lib/mechanize/_html.py | 629 ---- plugin.video.alfa/lib/mechanize/_http.py | 447 --- .../lib/mechanize/_lwpcookiejar.py | 185 - .../lib/mechanize/_markupbase.py | 393 -- plugin.video.alfa/lib/mechanize/_mechanize.py | 669 ---- .../lib/mechanize/_mozillacookiejar.py | 161 - .../lib/mechanize/_msiecookiejar.py | 388 -- plugin.video.alfa/lib/mechanize/_opener.py | 442 --- .../lib/mechanize/_pullparser.py | 391 -- plugin.video.alfa/lib/mechanize/_request.py | 40 - plugin.video.alfa/lib/mechanize/_response.py | 525 --- plugin.video.alfa/lib/mechanize/_rfc3986.py | 245 -- .../lib/mechanize/_sgmllib_copy.py | 559 --- .../lib/mechanize/_sockettimeout.py | 6 - plugin.video.alfa/lib/mechanize/_testcase.py | 162 - plugin.video.alfa/lib/mechanize/_urllib2.py | 50 - .../lib/mechanize/_urllib2_fork.py | 1414 ------- plugin.video.alfa/lib/mechanize/_useragent.py | 367 -- plugin.video.alfa/lib/mechanize/_util.py | 305 -- plugin.video.alfa/lib/mechanize/_version.py | 2 - 29 files changed, 14363 deletions(-) delete mode 100755 plugin.video.alfa/lib/mechanize/__init__.py delete mode 100755 plugin.video.alfa/lib/mechanize/_auth.py delete mode 100755 plugin.video.alfa/lib/mechanize/_beautifulsoup.py delete mode 100755 plugin.video.alfa/lib/mechanize/_clientcookie.py delete mode 100755 plugin.video.alfa/lib/mechanize/_debug.py delete mode 100755 plugin.video.alfa/lib/mechanize/_firefox3cookiejar.py delete mode 100755 plugin.video.alfa/lib/mechanize/_form.py delete mode 100755 plugin.video.alfa/lib/mechanize/_gzip.py delete mode 100755 plugin.video.alfa/lib/mechanize/_headersutil.py delete mode 100755 plugin.video.alfa/lib/mechanize/_html.py delete mode 100755 plugin.video.alfa/lib/mechanize/_http.py delete mode 100755 plugin.video.alfa/lib/mechanize/_lwpcookiejar.py delete mode 100755 plugin.video.alfa/lib/mechanize/_markupbase.py delete mode 100755 plugin.video.alfa/lib/mechanize/_mechanize.py delete mode 100755 plugin.video.alfa/lib/mechanize/_mozillacookiejar.py delete mode 100755 plugin.video.alfa/lib/mechanize/_msiecookiejar.py delete mode 100755 plugin.video.alfa/lib/mechanize/_opener.py delete mode 100755 plugin.video.alfa/lib/mechanize/_pullparser.py delete mode 100755 plugin.video.alfa/lib/mechanize/_request.py delete mode 100755 plugin.video.alfa/lib/mechanize/_response.py delete mode 100755 plugin.video.alfa/lib/mechanize/_rfc3986.py delete mode 100755
plugin.video.alfa/lib/mechanize/_sgmllib_copy.py delete mode 100755 plugin.video.alfa/lib/mechanize/_sockettimeout.py delete mode 100755 plugin.video.alfa/lib/mechanize/_testcase.py delete mode 100755 plugin.video.alfa/lib/mechanize/_urllib2.py delete mode 100755 plugin.video.alfa/lib/mechanize/_urllib2_fork.py delete mode 100755 plugin.video.alfa/lib/mechanize/_useragent.py delete mode 100755 plugin.video.alfa/lib/mechanize/_util.py delete mode 100755 plugin.video.alfa/lib/mechanize/_version.py diff --git a/plugin.video.alfa/lib/mechanize/__init__.py b/plugin.video.alfa/lib/mechanize/__init__.py deleted file mode 100755 index 43a3324a..00000000 --- a/plugin.video.alfa/lib/mechanize/__init__.py +++ /dev/null @@ -1,211 +0,0 @@ -__all__ = [ - 'AbstractBasicAuthHandler', - 'AbstractDigestAuthHandler', - 'BaseHandler', - 'Browser', - 'BrowserStateError', - 'CacheFTPHandler', - 'ContentTooShortError', - 'Cookie', - 'CookieJar', - 'CookiePolicy', - 'DefaultCookiePolicy', - 'DefaultFactory', - 'FTPHandler', - 'Factory', - 'FileCookieJar', - 'FileHandler', - 'FormNotFoundError', - 'FormsFactory', - 'HTTPBasicAuthHandler', - 'HTTPCookieProcessor', - 'HTTPDefaultErrorHandler', - 'HTTPDigestAuthHandler', - 'HTTPEquivProcessor', - 'HTTPError', - 'HTTPErrorProcessor', - 'HTTPHandler', - 'HTTPPasswordMgr', - 'HTTPPasswordMgrWithDefaultRealm', - 'HTTPProxyPasswordMgr', - 'HTTPRedirectDebugProcessor', - 'HTTPRedirectHandler', - 'HTTPRefererProcessor', - 'HTTPRefreshProcessor', - 'HTTPResponseDebugProcessor', - 'HTTPRobotRulesProcessor', - 'HTTPSClientCertMgr', - 'HeadParser', - 'History', - 'LWPCookieJar', - 'Link', - 'LinkNotFoundError', - 'LinksFactory', - 'LoadError', - 'MSIECookieJar', - 'MozillaCookieJar', - 'OpenerDirector', - 'OpenerFactory', - 'ParseError', - 'ProxyBasicAuthHandler', - 'ProxyDigestAuthHandler', - 'ProxyHandler', - 'Request', - 'RobotExclusionError', - 'RobustFactory', - 'RobustFormsFactory', - 'RobustLinksFactory', - 'RobustTitleFactory', - 'SeekableResponseOpener', - 'TitleFactory', - 'URLError', - 'USE_BARE_EXCEPT', - 'UnknownHandler', - 'UserAgent', - 'UserAgentBase', - 'XHTMLCompatibleHeadParser', - '__version__', - 'build_opener', - 'install_opener', - 'lwp_cookie_str', - 'make_response', - 'request_host', - 'response_seek_wrapper', # XXX deprecate in public interface? 
- 'seek_wrapped_response', # XXX should probably use this internally in place of response_seek_wrapper() - 'str2time', - 'urlopen', - 'urlretrieve', - 'urljoin', - - # ClientForm API - 'AmbiguityError', - 'ControlNotFoundError', - 'FormParser', - 'ItemCountError', - 'ItemNotFoundError', - 'LocateError', - 'Missing', - 'ParseFile', - 'ParseFileEx', - 'ParseResponse', - 'ParseResponseEx', - 'ParseString', - 'XHTMLCompatibleFormParser', - # deprecated - 'CheckboxControl', - 'Control', - 'FileControl', - 'HTMLForm', - 'HiddenControl', - 'IgnoreControl', - 'ImageControl', - 'IsindexControl', - 'Item', - 'Label', - 'ListControl', - 'PasswordControl', - 'RadioControl', - 'ScalarControl', - 'SelectControl', - 'SubmitButtonControl', - 'SubmitControl', - 'TextControl', - 'TextareaControl', - ] - -import logging -import sys - -from _version import __version__ - -# high-level stateful browser-style interface -from _mechanize import \ - Browser, History, \ - BrowserStateError, LinkNotFoundError, FormNotFoundError - -# configurable URL-opener interface -from _useragent import UserAgentBase, UserAgent -from _html import \ - Link, \ - Factory, DefaultFactory, RobustFactory, \ - FormsFactory, LinksFactory, TitleFactory, \ - RobustFormsFactory, RobustLinksFactory, RobustTitleFactory - -# urllib2 work-alike interface. This is a superset of the urllib2 interface. -from _urllib2 import * -import _urllib2 -if hasattr(_urllib2, "HTTPSHandler"): - __all__.append("HTTPSHandler") -del _urllib2 - -# misc -from _http import HeadParser -from _http import XHTMLCompatibleHeadParser -from _opener import ContentTooShortError, OpenerFactory, urlretrieve -from _response import \ - response_seek_wrapper, seek_wrapped_response, make_response -from _rfc3986 import urljoin -from _util import http2time as str2time - -# cookies -from _clientcookie import Cookie, CookiePolicy, DefaultCookiePolicy, \ - CookieJar, FileCookieJar, LoadError, request_host_lc as request_host, \ - effective_request_host -from _lwpcookiejar import LWPCookieJar, lwp_cookie_str -# 2.4 raises SyntaxError due to generator / try/finally use -if sys.version_info[:2] > (2,4): - try: - import sqlite3 - except ImportError: - pass - else: - from _firefox3cookiejar import Firefox3CookieJar -from _mozillacookiejar import MozillaCookieJar -from _msiecookiejar import MSIECookieJar - -# forms -from _form import ( - AmbiguityError, - ControlNotFoundError, - FormParser, - ItemCountError, - ItemNotFoundError, - LocateError, - Missing, - ParseError, - ParseFile, - ParseFileEx, - ParseResponse, - ParseResponseEx, - ParseString, - XHTMLCompatibleFormParser, - # deprecated - CheckboxControl, - Control, - FileControl, - HTMLForm, - HiddenControl, - IgnoreControl, - ImageControl, - IsindexControl, - Item, - Label, - ListControl, - PasswordControl, - RadioControl, - ScalarControl, - SelectControl, - SubmitButtonControl, - SubmitControl, - TextControl, - TextareaControl, - ) - -# If you hate the idea of turning bugs into warnings, do: -# import mechanize; mechanize.USE_BARE_EXCEPT = False -USE_BARE_EXCEPT = True - -logger = logging.getLogger("mechanize") -if logger.level is logging.NOTSET: - logger.setLevel(logging.CRITICAL) -del logger diff --git a/plugin.video.alfa/lib/mechanize/_auth.py b/plugin.video.alfa/lib/mechanize/_auth.py deleted file mode 100755 index 9fa7e8e3..00000000 --- a/plugin.video.alfa/lib/mechanize/_auth.py +++ /dev/null @@ -1,68 +0,0 @@ -"""HTTP Authentication and Proxy support. - - -Copyright 2006 John J. 
Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it under -the terms of the BSD or ZPL 2.1 licenses (see the file COPYING.txt -included with the distribution). - -""" - -from _urllib2_fork import HTTPPasswordMgr - - -# TODO: stop deriving from HTTPPasswordMgr -class HTTPProxyPasswordMgr(HTTPPasswordMgr): - # has default realm and host/port - def add_password(self, realm, uri, user, passwd): - # uri could be a single URI or a sequence - if uri is None or isinstance(uri, basestring): - uris = [uri] - else: - uris = uri - passwd_by_domain = self.passwd.setdefault(realm, {}) - for uri in uris: - for default_port in True, False: - reduced_uri = self.reduce_uri(uri, default_port) - passwd_by_domain[reduced_uri] = (user, passwd) - - def find_user_password(self, realm, authuri): - attempts = [(realm, authuri), (None, authuri)] - # bleh, want default realm to take precedence over default - # URI/authority, hence this outer loop - for default_uri in False, True: - for realm, authuri in attempts: - authinfo_by_domain = self.passwd.get(realm, {}) - for default_port in True, False: - reduced_authuri = self.reduce_uri(authuri, default_port) - for uri, authinfo in authinfo_by_domain.iteritems(): - if uri is None and not default_uri: - continue - if self.is_suburi(uri, reduced_authuri): - return authinfo - user, password = None, None - - if user is not None: - break - return user, password - - def reduce_uri(self, uri, default_port=True): - if uri is None: - return None - return HTTPPasswordMgr.reduce_uri(self, uri, default_port) - - def is_suburi(self, base, test): - if base is None: - # default to the proxy's host/port - hostport, path = test - base = (hostport, "/") - return HTTPPasswordMgr.is_suburi(self, base, test) - - -class HTTPSClientCertMgr(HTTPPasswordMgr): - # implementation inheritance: this is not a proper subclass - def add_key_cert(self, uri, key_file, cert_file): - self.add_password(None, uri, key_file, cert_file) - def find_key_cert(self, authuri): - return HTTPPasswordMgr.find_user_password(self, None, authuri) diff --git a/plugin.video.alfa/lib/mechanize/_beautifulsoup.py b/plugin.video.alfa/lib/mechanize/_beautifulsoup.py deleted file mode 100755 index 5ec6755a..00000000 --- a/plugin.video.alfa/lib/mechanize/_beautifulsoup.py +++ /dev/null @@ -1,1077 +0,0 @@ -"""Beautiful Soup -Elixir and Tonic -"The Screen-Scraper's Friend" -v2.1.1 -http://www.crummy.com/software/BeautifulSoup/ - -Beautiful Soup parses arbitrarily invalid XML- or HTML-like substance -into a tree representation. It provides methods and Pythonic idioms -that make it easy to search and modify the tree. - -A well-formed XML/HTML document will yield a well-formed data -structure. An ill-formed XML/HTML document will yield a -correspondingly ill-formed data structure. If your document is only -locally well-formed, you can use this library to find and process the -well-formed part of it. The BeautifulSoup class has heuristics for -obtaining a sensible parse tree in the face of common HTML errors. - -Beautiful Soup has no external dependencies. It works with Python 2.2 -and up. - -Beautiful Soup defines classes for four different parsing strategies: - - * BeautifulStoneSoup, for parsing XML, SGML, or your domain-specific - language that kind of looks like XML. - - * BeautifulSoup, for parsing run-of-the-mill HTML code, be it valid - or invalid. - - * ICantBelieveItsBeautifulSoup, for parsing valid but bizarre HTML - that trips up BeautifulSoup. 
- - * BeautifulSOAP, for making it easier to parse XML documents that use - lots of subelements containing a single string, where you'd prefer - they put that string into an attribute (such as SOAP messages). - -You can subclass BeautifulStoneSoup or BeautifulSoup to create a -parsing strategy specific to an XML schema or a particular bizarre -HTML document. Typically your subclass would just override -SELF_CLOSING_TAGS and/or NESTABLE_TAGS. -""" #" -from __future__ import generators - -__author__ = "Leonard Richardson (leonardr@segfault.org)" -__version__ = "2.1.1" -__date__ = "$Date: 2004/10/18 00:14:20 $" -__copyright__ = "Copyright (c) 2004-2005 Leonard Richardson" -__license__ = "PSF" - -from _sgmllib_copy import SGMLParser, SGMLParseError -import types -import re -import _sgmllib_copy as sgmllib - -class NullType(object): - - """Similar to NoneType with a corresponding singleton instance - 'Null' that, unlike None, accepts any message and returns itself. - - Examples: - >>> Null("send", "a", "message")("and one more", - ... "and what you get still") is Null - True - """ - - def __new__(cls): return Null - def __call__(self, *args, **kwargs): return Null -## def __getstate__(self, *args): return Null - def __getattr__(self, attr): return Null - def __getitem__(self, item): return Null - def __setattr__(self, attr, value): pass - def __setitem__(self, item, value): pass - def __len__(self): return 0 - # FIXME: is this a python bug? otherwise ``for x in Null: pass`` - # never terminates... - def __iter__(self): return iter([]) - def __contains__(self, item): return False - def __repr__(self): return "Null" -Null = object.__new__(NullType) - -class PageElement: - """Contains the navigational information for some part of the page - (either a tag or a piece of text)""" - - def setup(self, parent=Null, previous=Null): - """Sets up the initial relations between this element and - other elements.""" - self.parent = parent - self.previous = previous - self.next = Null - self.previousSibling = Null - self.nextSibling = Null - if self.parent and self.parent.contents: - self.previousSibling = self.parent.contents[-1] - self.previousSibling.nextSibling = self - - def findNext(self, name=None, attrs={}, text=None): - """Returns the first item that matches the given criteria and - appears after this Tag in the document.""" - return self._first(self.fetchNext, name, attrs, text) - firstNext = findNext - - def fetchNext(self, name=None, attrs={}, text=None, limit=None): - """Returns all items that match the given criteria and appear - before after Tag in the document.""" - return self._fetch(name, attrs, text, limit, self.nextGenerator) - - def findNextSibling(self, name=None, attrs={}, text=None): - """Returns the closest sibling to this Tag that matches the - given criteria and appears after this Tag in the document.""" - return self._first(self.fetchNextSiblings, name, attrs, text) - firstNextSibling = findNextSibling - - def fetchNextSiblings(self, name=None, attrs={}, text=None, limit=None): - """Returns the siblings of this Tag that match the given - criteria and appear after this Tag in the document.""" - return self._fetch(name, attrs, text, limit, self.nextSiblingGenerator) - - def findPrevious(self, name=None, attrs={}, text=None): - """Returns the first item that matches the given criteria and - appears before this Tag in the document.""" - return self._first(self.fetchPrevious, name, attrs, text) - - def fetchPrevious(self, name=None, attrs={}, text=None, limit=None): - """Returns all 
items that match the given criteria and appear - before this Tag in the document.""" - return self._fetch(name, attrs, text, limit, self.previousGenerator) - firstPrevious = findPrevious - - def findPreviousSibling(self, name=None, attrs={}, text=None): - """Returns the closest sibling to this Tag that matches the - given criteria and appears before this Tag in the document.""" - return self._first(self.fetchPreviousSiblings, name, attrs, text) - firstPreviousSibling = findPreviousSibling - - def fetchPreviousSiblings(self, name=None, attrs={}, text=None, - limit=None): - """Returns the siblings of this Tag that match the given - criteria and appear before this Tag in the document.""" - return self._fetch(name, attrs, text, limit, - self.previousSiblingGenerator) - - def findParent(self, name=None, attrs={}): - """Returns the closest parent of this Tag that matches the given - criteria.""" - r = Null - l = self.fetchParents(name, attrs, 1) - if l: - r = l[0] - return r - firstParent = findParent - - def fetchParents(self, name=None, attrs={}, limit=None): - """Returns the parents of this Tag that match the given - criteria.""" - return self._fetch(name, attrs, None, limit, self.parentGenerator) - - #These methods do the real heavy lifting. - - def _first(self, method, name, attrs, text): - r = Null - l = method(name, attrs, text, 1) - if l: - r = l[0] - return r - - def _fetch(self, name, attrs, text, limit, generator): - "Iterates over a generator looking for things that match." - if not hasattr(attrs, 'items'): - attrs = {'class' : attrs} - - results = [] - g = generator() - while True: - try: - i = g.next() - except StopIteration: - break - found = None - if isinstance(i, Tag): - if not text: - if not name or self._matches(i, name): - match = True - for attr, matchAgainst in attrs.items(): - check = i.get(attr) - if not self._matches(check, matchAgainst): - match = False - break - if match: - found = i - elif text: - if self._matches(i, text): - found = i - if found: - results.append(found) - if limit and len(results) >= limit: - break - return results - - #Generators that can be used to navigate starting from both - #NavigableTexts and Tags. - def nextGenerator(self): - i = self - while i: - i = i.next - yield i - - def nextSiblingGenerator(self): - i = self - while i: - i = i.nextSibling - yield i - - def previousGenerator(self): - i = self - while i: - i = i.previous - yield i - - def previousSiblingGenerator(self): - i = self - while i: - i = i.previousSibling - yield i - - def parentGenerator(self): - i = self - while i: - i = i.parent - yield i - - def _matches(self, chunk, howToMatch): - #print 'looking for %s in %s' % (howToMatch, chunk) - # - # If given a list of items, return true if the list contains a - # text element that matches. - if isList(chunk) and not isinstance(chunk, Tag): - for tag in chunk: - if isinstance(tag, NavigableText) and self._matches(tag, howToMatch): - return True - return False - if callable(howToMatch): - return howToMatch(chunk) - if isinstance(chunk, Tag): - #Custom match methods take the tag as an argument, but all other - #ways of matching match the tag name as a string - chunk = chunk.name - #Now we know that chunk is a string - if not isinstance(chunk, basestring): - chunk = str(chunk) - if hasattr(howToMatch, 'match'): - # It's a regexp object. 
- return howToMatch.search(chunk) - if isList(howToMatch): - return chunk in howToMatch - if hasattr(howToMatch, 'items'): - return howToMatch.has_key(chunk) - #It's just a string - return str(howToMatch) == chunk - -class NavigableText(PageElement): - - def __getattr__(self, attr): - "For backwards compatibility, text.string gives you text" - if attr == 'string': - return self - else: - raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__.__name__, attr) - -class NavigableString(str, NavigableText): - pass - -class NavigableUnicodeString(unicode, NavigableText): - pass - -class Tag(PageElement): - - """Represents a found HTML tag with its attributes and contents.""" - - def __init__(self, name, attrs=None, parent=Null, previous=Null): - "Basic constructor." - self.name = name - if attrs == None: - attrs = [] - self.attrs = attrs - self.contents = [] - self.setup(parent, previous) - self.hidden = False - - def get(self, key, default=None): - """Returns the value of the 'key' attribute for the tag, or - the value given for 'default' if it doesn't have that - attribute.""" - return self._getAttrMap().get(key, default) - - def __getitem__(self, key): - """tag[key] returns the value of the 'key' attribute for the tag, - and throws an exception if it's not there.""" - return self._getAttrMap()[key] - - def __iter__(self): - "Iterating over a tag iterates over its contents." - return iter(self.contents) - - def __len__(self): - "The length of a tag is the length of its list of contents." - return len(self.contents) - - def __contains__(self, x): - return x in self.contents - - def __nonzero__(self): - "A tag is non-None even if it has no contents." - return True - - def __setitem__(self, key, value): - """Setting tag[key] sets the value of the 'key' attribute for the - tag.""" - self._getAttrMap() - self.attrMap[key] = value - found = False - for i in range(0, len(self.attrs)): - if self.attrs[i][0] == key: - self.attrs[i] = (key, value) - found = True - if not found: - self.attrs.append((key, value)) - self._getAttrMap()[key] = value - - def __delitem__(self, key): - "Deleting tag[key] deletes all 'key' attributes for the tag." - for item in self.attrs: - if item[0] == key: - self.attrs.remove(item) - #We don't break because bad HTML can define the same - #attribute multiple times. - self._getAttrMap() - if self.attrMap.has_key(key): - del self.attrMap[key] - - def __call__(self, *args, **kwargs): - """Calling a tag like a function is the same as calling its - fetch() method. Eg. tag('a') returns a list of all the A tags - found within this tag.""" - return apply(self.fetch, args, kwargs) - - def __getattr__(self, tag): - if len(tag) > 3 and tag.rfind('Tag') == len(tag)-3: - return self.first(tag[:-3]) - elif tag.find('__') != 0: - return self.first(tag) - - def __eq__(self, other): - """Returns true iff this tag has the same name, the same attributes, - and the same contents (recursively) as the given tag. - - NOTE: right now this will return false if two tags have the - same attributes in a different order. 
Should this be fixed?""" - if not hasattr(other, 'name') or not hasattr(other, 'attrs') or not hasattr(other, 'contents') or self.name != other.name or self.attrs != other.attrs or len(self) != len(other): - return False - for i in range(0, len(self.contents)): - if self.contents[i] != other.contents[i]: - return False - return True - - def __ne__(self, other): - """Returns true iff this tag is not identical to the other tag, - as defined in __eq__.""" - return not self == other - - def __repr__(self): - """Renders this tag as a string.""" - return str(self) - - def __unicode__(self): - return self.__str__(1) - - def __str__(self, needUnicode=None, showStructureIndent=None): - """Returns a string or Unicode representation of this tag and - its contents. - - NOTE: since Python's HTML parser consumes whitespace, this - method is not certain to reproduce the whitespace present in - the original string.""" - - attrs = [] - if self.attrs: - for key, val in self.attrs: - attrs.append('%s="%s"' % (key, val)) - close = '' - closeTag = '' - if self.isSelfClosing(): - close = ' /' - else: - closeTag = '</%s>' % self.name - indentIncrement = None - if showStructureIndent != None: - indentIncrement = showStructureIndent - if not self.hidden: - indentIncrement += 1 - contents = self.renderContents(indentIncrement, needUnicode=needUnicode) - if showStructureIndent: - space = '\n%s' % (' ' * showStructureIndent) - if self.hidden: - s = contents - else: - s = [] - attributeString = '' - if attrs: - attributeString = ' ' + ' '.join(attrs) - if showStructureIndent: - s.append(space) - s.append('<%s%s%s>' % (self.name, attributeString, close)) - s.append(contents) - if closeTag and showStructureIndent != None: - s.append(space) - s.append(closeTag) - s = ''.join(s) - isUnicode = type(s) == types.UnicodeType - if needUnicode and not isUnicode: - s = unicode(s) - elif isUnicode and needUnicode==False: - s = str(s) - return s - - def prettify(self, needUnicode=None): - return self.__str__(needUnicode, showStructureIndent=True) - - def renderContents(self, showStructureIndent=None, needUnicode=None): - """Renders the contents of this tag as a (possibly Unicode) - string.""" - s=[] - for c in self: - text = None - if isinstance(c, NavigableUnicodeString) or type(c) == types.UnicodeType: - text = unicode(c) - elif isinstance(c, Tag): - s.append(c.__str__(needUnicode, showStructureIndent)) - elif needUnicode: - text = unicode(c) - else: - text = str(c) - if text: - if showStructureIndent != None: - if text[-1] == '\n': - text = text[:-1] - s.append(text) - return ''.join(s) - - #Soup methods - - def firstText(self, text, recursive=True): - """Convenience method to retrieve the first piece of text matching the - given criteria. 'text' can be a string, a regular expression object, - a callable that takes a string and returns whether or not the - string 'matches', etc.""" - return self.first(recursive=recursive, text=text) - - def fetchText(self, text, recursive=True, limit=None): - """Convenience method to retrieve all pieces of text matching the - given criteria. 
'text' can be a string, a regular expression object, - a callable that takes a string and returns whether or not the - string 'matches', etc.""" - return self.fetch(recursive=recursive, text=text, limit=limit) - - def first(self, name=None, attrs={}, recursive=True, text=None): - """Return only the first child of this - Tag matching the given criteria.""" - r = Null - l = self.fetch(name, attrs, recursive, text, 1) - if l: - r = l[0] - return r - findChild = first - - def fetch(self, name=None, attrs={}, recursive=True, text=None, - limit=None): - """Extracts a list of Tag objects that match the given - criteria. You can specify the name of the Tag and any - attributes you want the Tag to have. - - The value of a key-value pair in the 'attrs' map can be a - string, a list of strings, a regular expression object, or a - callable that takes a string and returns whether or not the - string matches for some custom definition of 'matches'. The - same is true of the tag name.""" - generator = self.recursiveChildGenerator - if not recursive: - generator = self.childGenerator - return self._fetch(name, attrs, text, limit, generator) - fetchChildren = fetch - - #Utility methods - - def isSelfClosing(self): - """Returns true iff this is a self-closing tag as defined in the HTML - standard. - - TODO: This is specific to BeautifulSoup and its subclasses, but it's - used by __str__""" - return self.name in BeautifulSoup.SELF_CLOSING_TAGS - - def append(self, tag): - """Appends the given tag to the contents of this tag.""" - self.contents.append(tag) - - #Private methods - - def _getAttrMap(self): - """Initializes a map representation of this tag's attributes, - if not already initialized.""" - if not getattr(self, 'attrMap'): - self.attrMap = {} - for (key, value) in self.attrs: - self.attrMap[key] = value - return self.attrMap - - #Generator methods - def childGenerator(self): - for i in range(0, len(self.contents)): - yield self.contents[i] - raise StopIteration - - def recursiveChildGenerator(self): - stack = [(self, 0)] - while stack: - tag, start = stack.pop() - if isinstance(tag, Tag): - for i in range(start, len(tag.contents)): - a = tag.contents[i] - yield a - if isinstance(a, Tag) and tag.contents: - if i < len(tag.contents) - 1: - stack.append((tag, i+1)) - stack.append((a, 0)) - break - raise StopIteration - - -def isList(l): - """Convenience method that works with all 2.x versions of Python - to determine whether or not something is listlike.""" - return hasattr(l, '__iter__') \ - or (type(l) in (types.ListType, types.TupleType)) - -def buildTagMap(default, *args): - """Turns a list of maps, lists, or scalars into a single map. - Used to build the SELF_CLOSING_TAGS and NESTABLE_TAGS maps out - of lists and partial maps.""" - built = {} - for portion in args: - if hasattr(portion, 'items'): - #It's a map. Merge it. - for k,v in portion.items(): - built[k] = v - elif isList(portion): - #It's a list. Map each item to the default. - for k in portion: - built[k] = default - else: - #It's a scalar. Map it to the default. - built[portion] = default - return built - -class BeautifulStoneSoup(Tag, SGMLParser): - - """This class contains the basic parser and fetch code. It defines - a parser that knows nothing about tag behavior except for the - following: - - You can't close a tag without closing all the tags it encloses. - That is, "<foo><bar></foo>" actually means - "<foo><bar></bar></foo>". 
- - [Another possible explanation is "<foo><bar /></foo>", but since - this class defines no SELF_CLOSING_TAGS, it will never use that - explanation.] - - This class is useful for parsing XML or made-up markup languages, - or when BeautifulSoup makes an assumption counter to what you were - expecting.""" - - SELF_CLOSING_TAGS = {} - NESTABLE_TAGS = {} - RESET_NESTING_TAGS = {} - QUOTE_TAGS = {} - - #As a public service we will by default silently replace MS smart quotes - #and similar characters with their HTML or ASCII equivalents. - MS_CHARS = { '\x80' : '€', - '\x81' : ' ', - '\x82' : '‚', - '\x83' : 'ƒ', - '\x84' : '„', - '\x85' : '…', - '\x86' : '†', - '\x87' : '‡', - '\x88' : '⁁', - '\x89' : '%', - '\x8A' : 'Š', - '\x8B' : '<', - '\x8C' : 'Œ', - '\x8D' : '?', - '\x8E' : 'Z', - '\x8F' : '?', - '\x90' : '?', - '\x91' : '‘', - '\x92' : '’', - '\x93' : '“', - '\x94' : '”', - '\x95' : '•', - '\x96' : '–', - '\x97' : '—', - '\x98' : '˜', - '\x99' : '™', - '\x9a' : 'š', - '\x9b' : '>', - '\x9c' : 'œ', - '\x9d' : '?', - '\x9e' : 'z', - '\x9f' : 'Ÿ',} - - PARSER_MASSAGE = [(re.compile('(<[^<>]*)/>'), - lambda(x):x.group(1) + ' />'), - (re.compile('<!\s+([^<>]*)>'), - lambda(x):'<!' + x.group(1) + '>'), - (re.compile("([\x80-\x9f])"), - lambda(x): BeautifulStoneSoup.MS_CHARS.get(x.group(1))) - ] - - ROOT_TAG_NAME = '[document]' - - def __init__(self, text=None, avoidParserProblems=True, - initialTextIsEverything=True): - """Initialize this as the 'root tag' and feed in any text to - the parser. - - NOTE about avoidParserProblems: sgmllib will process most bad - HTML, and BeautifulSoup has tricks for dealing with some HTML - that kills sgmllib, but Beautiful Soup can nonetheless choke - or lose data if your data uses self-closing tags or - declarations incorrectly. By default, Beautiful Soup sanitizes - its input to avoid the vast majority of these problems. The - problems are relatively rare, even in bad HTML, so feel free - to pass in False to avoidParserProblems if they don't apply to - you, and you'll get better performance. The only reason I have - this turned on by default is so I don't get so many tech - support questions. - - The two most common instances of invalid HTML that will choke - sgmllib are fixed by the default parser massage techniques: - - <br/> (No space between name of closing tag and tag close) - <! --Comment--> (Extraneous whitespace in declaration) - - You can pass in a custom list of (RE object, replace method) - tuples to get Beautiful Soup to scrub your input the way you - want.""" - Tag.__init__(self, self.ROOT_TAG_NAME) - if avoidParserProblems \ - and not isList(avoidParserProblems): - avoidParserProblems = self.PARSER_MASSAGE - self.avoidParserProblems = avoidParserProblems - SGMLParser.__init__(self) - self.quoteStack = [] - self.hidden = 1 - self.reset() - if hasattr(text, 'read'): - #It's a file-type object. 
- text = text.read() - if text: - self.feed(text) - if initialTextIsEverything: - self.done() - - def __getattr__(self, methodName): - """This method routes method call requests to either the SGMLParser - superclass or the Tag superclass, depending on the method name.""" - if methodName.find('start_') == 0 or methodName.find('end_') == 0 \ - or methodName.find('do_') == 0: - return SGMLParser.__getattr__(self, methodName) - elif methodName.find('__') != 0: - return Tag.__getattr__(self, methodName) - else: - raise AttributeError - - def feed(self, text): - if self.avoidParserProblems: - for fix, m in self.avoidParserProblems: - text = fix.sub(m, text) - SGMLParser.feed(self, text) - - def done(self): - """Called when you're done parsing, so that the unclosed tags can be - correctly processed.""" - self.endData() #NEW - while self.currentTag.name != self.ROOT_TAG_NAME: - self.popTag() - - def reset(self): - SGMLParser.reset(self) - self.currentData = [] - self.currentTag = None - self.tagStack = [] - self.pushTag(self) - - def popTag(self): - tag = self.tagStack.pop() - # Tags with just one string-owning child get the child as a - # 'string' property, so that soup.tag.string is shorthand for - # soup.tag.contents[0] - if len(self.currentTag.contents) == 1 and \ - isinstance(self.currentTag.contents[0], NavigableText): - self.currentTag.string = self.currentTag.contents[0] - - #print "Pop", tag.name - if self.tagStack: - self.currentTag = self.tagStack[-1] - return self.currentTag - - def pushTag(self, tag): - #print "Push", tag.name - if self.currentTag: - self.currentTag.append(tag) - self.tagStack.append(tag) - self.currentTag = self.tagStack[-1] - - def endData(self): - currentData = ''.join(self.currentData) - if currentData: - if not currentData.strip(): - if '\n' in currentData: - currentData = '\n' - else: - currentData = ' ' - c = NavigableString - if type(currentData) == types.UnicodeType: - c = NavigableUnicodeString - o = c(currentData) - o.setup(self.currentTag, self.previous) - if self.previous: - self.previous.next = o - self.previous = o - self.currentTag.contents.append(o) - self.currentData = [] - - def _popToTag(self, name, inclusivePop=True): - """Pops the tag stack up to and including the most recent - instance of the given tag. If inclusivePop is false, pops the tag - stack up to but *not* including the most recent instqance of - the given tag.""" - if name == self.ROOT_TAG_NAME: - return - - numPops = 0 - mostRecentTag = None - for i in range(len(self.tagStack)-1, 0, -1): - if name == self.tagStack[i].name: - numPops = len(self.tagStack)-i - break - if not inclusivePop: - numPops = numPops - 1 - - for i in range(0, numPops): - mostRecentTag = self.popTag() - return mostRecentTag - - def _smartPop(self, name): - - """We need to pop up to the previous tag of this type, unless - one of this tag's nesting reset triggers comes between this - tag and the previous tag of this type, OR unless this tag is a - generic nesting trigger and another generic nesting trigger - comes between this tag and the previous tag of this type. - - Examples: - <p>Foo<b>Bar<p> should pop to 'p', not 'b'. - <p>Foo<table>Bar<p> should pop to 'table', not 'p'. - <p>Foo<table><tr>Bar<p> should pop to 'tr', not 'p'. - <p>Foo<b>Bar<p> should pop to 'p', not 'b'. - - <li><ul><li> *<li>* should pop to 'ul', not the first 'li'. 
- <tr><table><tr> *<tr>* should pop to 'table', not the first 'tr' - <td><tr><td> *<td>* should pop to 'tr', not the first 'td' - """ - - nestingResetTriggers = self.NESTABLE_TAGS.get(name) - isNestable = nestingResetTriggers != None - isResetNesting = self.RESET_NESTING_TAGS.has_key(name) - popTo = None - inclusive = True - for i in range(len(self.tagStack)-1, 0, -1): - p = self.tagStack[i] - if (not p or p.name == name) and not isNestable: - #Non-nestable tags get popped to the top or to their - #last occurance. - popTo = name - break - if (nestingResetTriggers != None - and p.name in nestingResetTriggers) \ - or (nestingResetTriggers == None and isResetNesting - and self.RESET_NESTING_TAGS.has_key(p.name)): - - #If we encounter one of the nesting reset triggers - #peculiar to this tag, or we encounter another tag - #that causes nesting to reset, pop up to but not - #including that tag. - - popTo = p.name - inclusive = False - break - p = p.parent - if popTo: - self._popToTag(popTo, inclusive) - - def unknown_starttag(self, name, attrs, selfClosing=0): - #print "Start tag %s" % name - if self.quoteStack: - #This is not a real tag. - #print "<%s> is not real!" % name - attrs = ''.join(map(lambda(x, y): ' %s="%s"' % (x, y), attrs)) - self.handle_data('<%s%s>' % (name, attrs)) - return - self.endData() - if not name in self.SELF_CLOSING_TAGS and not selfClosing: - self._smartPop(name) - tag = Tag(name, attrs, self.currentTag, self.previous) - if self.previous: - self.previous.next = tag - self.previous = tag - self.pushTag(tag) - if selfClosing or name in self.SELF_CLOSING_TAGS: - self.popTag() - if name in self.QUOTE_TAGS: - #print "Beginning quote (%s)" % name - self.quoteStack.append(name) - self.literal = 1 - - def unknown_endtag(self, name): - if self.quoteStack and self.quoteStack[-1] != name: - #This is not a real end tag. - #print "</%s> is not real!" % name - self.handle_data('</%s>' % name) - return - self.endData() - self._popToTag(name) - if self.quoteStack and self.quoteStack[-1] == name: - self.quoteStack.pop() - self.literal = (len(self.quoteStack) > 0) - - def handle_data(self, data): - self.currentData.append(data) - - def handle_pi(self, text): - "Propagate processing instructions right through." - self.handle_data("<?%s>" % text) - - def handle_comment(self, text): - "Propagate comments right through." - self.handle_data("<!--%s-->" % text) - - def handle_charref(self, ref): - "Propagate char refs right through." - self.handle_data('&#%s;' % ref) - - def handle_entityref(self, ref): - "Propagate entity refs right through." - self.handle_data('&%s;' % ref) - - def handle_decl(self, data): - "Propagate DOCTYPEs and the like right through." - self.handle_data('<!%s>' % data) - - def parse_declaration(self, i): - """Treat a bogus SGML declaration as raw data. Treat a CDATA - declaration as regular data.""" - j = None - if self.rawdata[i:i+9] == '<![CDATA[': - k = self.rawdata.find(']]>', i) - if k == -1: - k = len(self.rawdata) - self.handle_data(self.rawdata[i+9:k]) - j = k+3 - else: - try: - j = SGMLParser.parse_declaration(self, i) - except SGMLParseError: - toHandle = self.rawdata[i:] - self.handle_data(toHandle) - j = i + len(toHandle) - return j - -class BeautifulSoup(BeautifulStoneSoup): - - """This parser knows the following facts about HTML: - - * Some tags have no closing tag and should be interpreted as being - closed as soon as they are encountered. - - * The text inside some tags (ie. 
'script') may contain tags which - are not really part of the document and which should be parsed - as text, not tags. If you want to parse the text as tags, you can - always fetch it and parse it explicitly. - - * Tag nesting rules: - - Most tags can't be nested at all. For instance, the occurance of - a <p> tag should implicitly close the previous <p> tag. - - <p>Para1<p>Para2 - should be transformed into: - <p>Para1</p><p>Para2 - - Some tags can be nested arbitrarily. For instance, the occurance - of a <blockquote> tag should _not_ implicitly close the previous - <blockquote> tag. - - Alice said: <blockquote>Bob said: <blockquote>Blah - should NOT be transformed into: - Alice said: <blockquote>Bob said: </blockquote><blockquote>Blah - - Some tags can be nested, but the nesting is reset by the - interposition of other tags. For instance, a <tr> tag should - implicitly close the previous <tr> tag within the same <table>, - but not close a <tr> tag in another table. - - <table><tr>Blah<tr>Blah - should be transformed into: - <table><tr>Blah</tr><tr>Blah - but, - <tr>Blah<table><tr>Blah - should NOT be transformed into - <tr>Blah<table></tr><tr>Blah - - Differing assumptions about tag nesting rules are a major source - of problems with the BeautifulSoup class. If BeautifulSoup is not - treating as nestable a tag your page author treats as nestable, - try ICantBelieveItsBeautifulSoup before writing your own - subclass.""" - - SELF_CLOSING_TAGS = buildTagMap(None, ['br' , 'hr', 'input', 'img', 'meta', - 'spacer', 'link', 'frame', 'base']) - - QUOTE_TAGS = {'script': None} - - #According to the HTML standard, each of these inline tags can - #contain another tag of the same type. Furthermore, it's common - #to actually use these tags this way. - NESTABLE_INLINE_TAGS = ['span', 'font', 'q', 'object', 'bdo', 'sub', 'sup', - 'center'] - - #According to the HTML standard, these block tags can contain - #another tag of the same type. Furthermore, it's common - #to actually use these tags this way. - NESTABLE_BLOCK_TAGS = ['blockquote', 'div', 'fieldset', 'ins', 'del'] - - #Lists can contain other lists, but there are restrictions. - NESTABLE_LIST_TAGS = { 'ol' : [], - 'ul' : [], - 'li' : ['ul', 'ol'], - 'dl' : [], - 'dd' : ['dl'], - 'dt' : ['dl'] } - - #Tables can contain other tables, but there are restrictions. - NESTABLE_TABLE_TAGS = {'table' : [], - 'tr' : ['table', 'tbody', 'tfoot', 'thead'], - 'td' : ['tr'], - 'th' : ['tr'], - } - - NON_NESTABLE_BLOCK_TAGS = ['address', 'form', 'p', 'pre'] - - #If one of these tags is encountered, all tags up to the next tag of - #this type are popped. - RESET_NESTING_TAGS = buildTagMap(None, NESTABLE_BLOCK_TAGS, 'noscript', - NON_NESTABLE_BLOCK_TAGS, - NESTABLE_LIST_TAGS, - NESTABLE_TABLE_TAGS) - - NESTABLE_TAGS = buildTagMap([], NESTABLE_INLINE_TAGS, NESTABLE_BLOCK_TAGS, - NESTABLE_LIST_TAGS, NESTABLE_TABLE_TAGS) - -class ICantBelieveItsBeautifulSoup(BeautifulSoup): - - """The BeautifulSoup class is oriented towards skipping over - common HTML errors like unclosed tags. However, sometimes it makes - errors of its own. For instance, consider this fragment: - - <b>Foo<b>Bar</b></b> - - This is perfectly valid (if bizarre) HTML. However, the - BeautifulSoup class will implicitly close the first b tag when it - encounters the second 'b'. It will think the author wrote - "<b>Foo<b>Bar", and didn't close the first 'b' tag, because - there's no real-world reason to bold something that's already - bold. 
When it encounters '</b></b>' it will close two more 'b' - tags, for a grand total of three tags closed instead of two. This - can throw off the rest of your document structure. The same is - true of a number of other tags, listed below. - - It's much more common for someone to forget to close (eg.) a 'b' - tag than to actually use nested 'b' tags, and the BeautifulSoup - class handles the common case. This class handles the - not-co-common case: where you can't believe someone wrote what - they did, but it's valid HTML and BeautifulSoup screwed up by - assuming it wouldn't be. - - If this doesn't do what you need, try subclassing this class or - BeautifulSoup, and providing your own list of NESTABLE_TAGS.""" - - I_CANT_BELIEVE_THEYRE_NESTABLE_INLINE_TAGS = \ - ['em', 'big', 'i', 'small', 'tt', 'abbr', 'acronym', 'strong', - 'cite', 'code', 'dfn', 'kbd', 'samp', 'strong', 'var', 'b', - 'big'] - - I_CANT_BELIEVE_THEYRE_NESTABLE_BLOCK_TAGS = ['noscript'] - - NESTABLE_TAGS = buildTagMap([], BeautifulSoup.NESTABLE_TAGS, - I_CANT_BELIEVE_THEYRE_NESTABLE_BLOCK_TAGS, - I_CANT_BELIEVE_THEYRE_NESTABLE_INLINE_TAGS) - -class BeautifulSOAP(BeautifulStoneSoup): - """This class will push a tag with only a single string child into - the tag's parent as an attribute. The attribute's name is the tag - name, and the value is the string child. An example should give - the flavor of the change: - - <foo><bar>baz</bar></foo> - => - <foo bar="baz"><bar>baz</bar></foo> - - You can then access fooTag['bar'] instead of fooTag.barTag.string. - - This is, of course, useful for scraping structures that tend to - use subelements instead of attributes, such as SOAP messages. Note - that it modifies its input, so don't print the modified version - out. - - I'm not sure how many people really want to use this class; let me - know if you do. Mainly I like the name.""" - - def popTag(self): - if len(self.tagStack) > 1: - tag = self.tagStack[-1] - parent = self.tagStack[-2] - parent._getAttrMap() - if (isinstance(tag, Tag) and len(tag.contents) == 1 and - isinstance(tag.contents[0], NavigableText) and - not parent.attrMap.has_key(tag.name)): - parent[tag.name] = tag.contents[0] - BeautifulStoneSoup.popTag(self) - -#Enterprise class names! It has come to our attention that some people -#think the names of the Beautiful Soup parser classes are too silly -#and "unprofessional" for use in enterprise screen-scraping. We feel -#your pain! For such-minded folk, the Beautiful Soup Consortium And -#All-Night Kosher Bakery recommends renaming this file to -#"RobustParser.py" (or, in cases of extreme enterprisitude, -#"RobustParserBeanInterface.class") and using the following -#enterprise-friendly class aliases: -class RobustXMLParser(BeautifulStoneSoup): - pass -class RobustHTMLParser(BeautifulSoup): - pass -class RobustWackAssHTMLParser(ICantBelieveItsBeautifulSoup): - pass -class SimplifyingSOAPParser(BeautifulSOAP): - pass - -### - - -#By default, act as an HTML pretty-printer. -if __name__ == '__main__': - import sys - soup = BeautifulStoneSoup(sys.stdin.read()) - print soup.prettify() diff --git a/plugin.video.alfa/lib/mechanize/_clientcookie.py b/plugin.video.alfa/lib/mechanize/_clientcookie.py deleted file mode 100755 index d29feaae..00000000 --- a/plugin.video.alfa/lib/mechanize/_clientcookie.py +++ /dev/null @@ -1,1725 +0,0 @@ -"""HTTP cookie handling for web clients. - -This module originally developed from my port of Gisle Aas' Perl module -HTTP::Cookies, from the libwww-perl library. 
- -Docstrings, comments and debug strings in this code refer to the -attributes of the HTTP cookie system as cookie-attributes, to distinguish -them clearly from Python attributes. - - CookieJar____ - / \ \ - FileCookieJar \ \ - / | \ \ \ - MozillaCookieJar | LWPCookieJar \ \ - | | \ - | ---MSIEBase | \ - | / | | \ - | / MSIEDBCookieJar BSDDBCookieJar - |/ - MSIECookieJar - -Comments to John J Lee <jjl@pobox.com>. - - -Copyright 2002-2006 John J Lee <jjl@pobox.com> -Copyright 1997-1999 Gisle Aas (original libwww-perl code) -Copyright 2002-2003 Johnny Lee (original MSIE Perl code) - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import sys, re, copy, time, urllib, types, logging -try: - import threading - _threading = threading; del threading -except ImportError: - import dummy_threading - _threading = dummy_threading; del dummy_threading - -MISSING_FILENAME_TEXT = ("a filename was not supplied (nor was the CookieJar " - "instance initialised with one)") -DEFAULT_HTTP_PORT = "80" - -from _headersutil import split_header_words, parse_ns_headers -from _util import isstringlike -import _rfc3986 - -debug = logging.getLogger("mechanize.cookies").debug - - -def reraise_unmasked_exceptions(unmasked=()): - # There are a few catch-all except: statements in this module, for - # catching input that's bad in unexpected ways. - # This function re-raises some exceptions we don't want to trap. - import mechanize, warnings - if not mechanize.USE_BARE_EXCEPT: - raise - unmasked = unmasked + (KeyboardInterrupt, SystemExit, MemoryError) - etype = sys.exc_info()[0] - if issubclass(etype, unmasked): - raise - # swallowed an exception - import traceback, StringIO - f = StringIO.StringIO() - traceback.print_exc(None, f) - msg = f.getvalue() - warnings.warn("mechanize bug!\n%s" % msg, stacklevel=2) - - -IPV4_RE = re.compile(r"\.\d+$") -def is_HDN(text): - """Return True if text is a host domain name.""" - # XXX - # This may well be wrong. Which RFC is HDN defined in, if any (for - # the purposes of RFC 2965)? - # For the current implementation, what about IPv6? Remember to look - # at other uses of IPV4_RE also, if change this. - return not (IPV4_RE.search(text) or - text == "" or - text[0] == "." or text[-1] == ".") - -def domain_match(A, B): - """Return True if domain A domain-matches domain B, according to RFC 2965. - - A and B may be host domain names or IP addresses. - - RFC 2965, section 1: - - Host names can be specified either as an IP address or a HDN string. - Sometimes we compare one host name with another. (Such comparisons SHALL - be case-insensitive.) Host A's name domain-matches host B's if - - * their host name strings string-compare equal; or - - * A is a HDN string and has the form NB, where N is a non-empty - name string, B has the form .B', and B' is a HDN string. (So, - x.y.com domain-matches .Y.com but not Y.com.) - - Note that domain-match is not a commutative operation: a.b.c.com - domain-matches .c.com, but not the reverse. - - """ - # Note that, if A or B are IP addresses, the only relevant part of the - # definition of the domain-match algorithm is the direct string-compare. 
- A = A.lower() - B = B.lower() - if A == B: - return True - if not is_HDN(A): - return False - i = A.rfind(B) - has_form_nb = not (i == -1 or i == 0) - return ( - has_form_nb and - B.startswith(".") and - is_HDN(B[1:]) - ) - -def liberal_is_HDN(text): - """Return True if text is a sort-of-like a host domain name. - - For accepting/blocking domains. - - """ - return not IPV4_RE.search(text) - -def user_domain_match(A, B): - """For blocking/accepting domains. - - A and B may be host domain names or IP addresses. - - """ - A = A.lower() - B = B.lower() - if not (liberal_is_HDN(A) and liberal_is_HDN(B)): - if A == B: - # equal IP addresses - return True - return False - initial_dot = B.startswith(".") - if initial_dot and A.endswith(B): - return True - if not initial_dot and A == B: - return True - return False - -cut_port_re = re.compile(r":\d+$") -def request_host(request): - """Return request-host, as defined by RFC 2965. - - Variation from RFC: returned value is lowercased, for convenient - comparison. - - """ - url = request.get_full_url() - host = _rfc3986.urlsplit(url)[1] - if host is None: - host = request.get_header("Host", "") - # remove port, if present - return cut_port_re.sub("", host, 1) - -def request_host_lc(request): - return request_host(request).lower() - -def eff_request_host(request): - """Return a tuple (request-host, effective request-host name).""" - erhn = req_host = request_host(request) - if req_host.find(".") == -1 and not IPV4_RE.search(req_host): - erhn = req_host + ".local" - return req_host, erhn - -def eff_request_host_lc(request): - req_host, erhn = eff_request_host(request) - return req_host.lower(), erhn.lower() - -def effective_request_host(request): - """Return the effective request-host, as defined by RFC 2965.""" - return eff_request_host(request)[1] - -def request_path(request): - """Return path component of request-URI, as defined by RFC 2965.""" - url = request.get_full_url() - path = escape_path(_rfc3986.urlsplit(url)[2]) - if not path.startswith("/"): - path = "/" + path - return path - -def request_port(request): - host = request.get_host() - i = host.find(':') - if i >= 0: - port = host[i+1:] - try: - int(port) - except ValueError: - debug("nonnumeric port: '%s'", port) - return None - else: - port = DEFAULT_HTTP_PORT - return port - -def request_is_unverifiable(request): - try: - return request.is_unverifiable() - except AttributeError: - if hasattr(request, "unverifiable"): - return request.unverifiable - else: - raise - -# Characters in addition to A-Z, a-z, 0-9, '_', '.', and '-' that don't -# need to be escaped to form a valid HTTP URL (RFCs 2396 and 1738). 
-HTTP_PATH_SAFE = "%/;:@&=+$,!~*'()" -ESCAPED_CHAR_RE = re.compile(r"%([0-9a-fA-F][0-9a-fA-F])") -def uppercase_escaped_char(match): - return "%%%s" % match.group(1).upper() -def escape_path(path): - """Escape any invalid characters in HTTP URL, and uppercase all escapes.""" - # There's no knowing what character encoding was used to create URLs - # containing %-escapes, but since we have to pick one to escape invalid - # path characters, we pick UTF-8, as recommended in the HTML 4.0 - # specification: - # http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.2.1 - # And here, kind of: draft-fielding-uri-rfc2396bis-03 - # (And in draft IRI specification: draft-duerst-iri-05) - # (And here, for new URI schemes: RFC 2718) - if isinstance(path, types.UnicodeType): - path = path.encode("utf-8") - path = urllib.quote(path, HTTP_PATH_SAFE) - path = ESCAPED_CHAR_RE.sub(uppercase_escaped_char, path) - return path - -def reach(h): - """Return reach of host h, as defined by RFC 2965, section 1. - - The reach R of a host name H is defined as follows: - - * If - - - H is the host domain name of a host; and, - - - H has the form A.B; and - - - A has no embedded (that is, interior) dots; and - - - B has at least one embedded dot, or B is the string "local". - then the reach of H is .B. - - * Otherwise, the reach of H is H. - - >>> reach("www.acme.com") - '.acme.com' - >>> reach("acme.com") - 'acme.com' - >>> reach("acme.local") - '.local' - - """ - i = h.find(".") - if i >= 0: - #a = h[:i] # this line is only here to show what a is - b = h[i+1:] - i = b.find(".") - if is_HDN(h) and (i >= 0 or b == "local"): - return "."+b - return h - -def is_third_party(request): - """ - - RFC 2965, section 3.3.6: - - An unverifiable transaction is to a third-party host if its request- - host U does not domain-match the reach R of the request-host O in the - origin transaction. - - """ - req_host = request_host_lc(request) - # the origin request's request-host was stuffed into request by - # _urllib2_support.AbstractHTTPHandler - return not domain_match(req_host, reach(request.origin_req_host)) - - -try: - all -except NameError: - # python 2.4 - def all(iterable): - for x in iterable: - if not x: - return False - return True - - -class Cookie: - """HTTP Cookie. - - This class represents both Netscape and RFC 2965 cookies. - - This is deliberately a very simple class. It just holds attributes. It's - possible to construct Cookie instances that don't comply with the cookie - standards. CookieJar.make_cookies is the factory function for Cookie - objects -- it deals with cookie parsing, supplying defaults, and - normalising to the representation used in this class. CookiePolicy is - responsible for checking them to see whether they should be accepted from - and returned to the server. - - version: integer; - name: string; - value: string (may be None); - port: string; None indicates no attribute was supplied (e.g. "Port", rather - than eg. "Port=80"); otherwise, a port string (eg. "80") or a port list - string (e.g. "80,8080") - port_specified: boolean; true if a value was supplied with the Port - cookie-attribute - domain: string; - domain_specified: boolean; true if Domain was explicitly set - domain_initial_dot: boolean; true if Domain as set in HTTP header by server - started with a dot (yes, this really is necessary!) 
- path: string; - path_specified: boolean; true if Path was explicitly set - secure: boolean; true if should only be returned over secure connection - expires: integer; seconds since epoch (RFC 2965 cookies should calculate - this value from the Max-Age attribute) - discard: boolean, true if this is a session cookie; (if no expires value, - this should be true) - comment: string; - comment_url: string; - rfc2109: boolean; true if cookie arrived in a Set-Cookie: (not - Set-Cookie2:) header, but had a version cookie-attribute of 1 - rest: mapping of other cookie-attributes - - Note that the port may be present in the headers, but unspecified ("Port" - rather than"Port=80", for example); if this is the case, port is None. - - """ - - - _attrs = ("version", "name", "value", - "port", "port_specified", - "domain", "domain_specified", "domain_initial_dot", - "path", "path_specified", - "secure", "expires", "discard", "comment", "comment_url", - "rfc2109", "_rest") - - def __init__(self, version, name, value, - port, port_specified, - domain, domain_specified, domain_initial_dot, - path, path_specified, - secure, - expires, - discard, - comment, - comment_url, - rest, - rfc2109=False, - ): - - if version is not None: version = int(version) - if expires is not None: expires = int(expires) - if port is None and port_specified is True: - raise ValueError("if port is None, port_specified must be false") - - self.version = version - self.name = name - self.value = value - self.port = port - self.port_specified = port_specified - # normalise case, as per RFC 2965 section 3.3.3 - self.domain = domain.lower() - self.domain_specified = domain_specified - # Sigh. We need to know whether the domain given in the - # cookie-attribute had an initial dot, in order to follow RFC 2965 - # (as clarified in draft errata). Needed for the returned $Domain - # value. 
- self.domain_initial_dot = domain_initial_dot - self.path = path - self.path_specified = path_specified - self.secure = secure - self.expires = expires - self.discard = discard - self.comment = comment - self.comment_url = comment_url - self.rfc2109 = rfc2109 - - self._rest = copy.copy(rest) - - def has_nonstandard_attr(self, name): - return self._rest.has_key(name) - def get_nonstandard_attr(self, name, default=None): - return self._rest.get(name, default) - def set_nonstandard_attr(self, name, value): - self._rest[name] = value - def nonstandard_attr_keys(self): - return self._rest.keys() - - def is_expired(self, now=None): - if now is None: now = time.time() - return (self.expires is not None) and (self.expires <= now) - - def __eq__(self, other): - return all(getattr(self, a) == getattr(other, a) for a in self._attrs) - - def __ne__(self, other): - return not (self == other) - - def __str__(self): - if self.port is None: p = "" - else: p = ":"+self.port - limit = self.domain + p + self.path - if self.value is not None: - namevalue = "%s=%s" % (self.name, self.value) - else: - namevalue = self.name - return "<Cookie %s for %s>" % (namevalue, limit) - - def __repr__(self): - args = [] - for name in ["version", "name", "value", - "port", "port_specified", - "domain", "domain_specified", "domain_initial_dot", - "path", "path_specified", - "secure", "expires", "discard", "comment", "comment_url", - ]: - attr = getattr(self, name) - args.append("%s=%s" % (name, repr(attr))) - args.append("rest=%s" % repr(self._rest)) - args.append("rfc2109=%s" % repr(self.rfc2109)) - return "Cookie(%s)" % ", ".join(args) - - -class CookiePolicy: - """Defines which cookies get accepted from and returned to server. - - May also modify cookies. - - The subclass DefaultCookiePolicy defines the standard rules for Netscape - and RFC 2965 cookies -- override that if you want a customised policy. - - As well as implementing set_ok and return_ok, implementations of this - interface must also supply the following attributes, indicating which - protocols should be used, and how. These can be read and set at any time, - though whether that makes complete sense from the protocol point of view is - doubtful. - - Public attributes: - - netscape: implement netscape protocol - rfc2965: implement RFC 2965 protocol - rfc2109_as_netscape: - WARNING: This argument will change or go away if is not accepted into - the Python standard library in this form! - If true, treat RFC 2109 cookies as though they were Netscape cookies. The - default is for this attribute to be None, which means treat 2109 cookies - as RFC 2965 cookies unless RFC 2965 handling is switched off (which it is, - by default), and as Netscape cookies otherwise. - hide_cookie2: don't add Cookie2 header to requests (the presence of - this header indicates to the server that we understand RFC 2965 - cookies) - - """ - def set_ok(self, cookie, request): - """Return true if (and only if) cookie should be accepted from server. - - Currently, pre-expired cookies never get this far -- the CookieJar - class deletes such cookies itself. - - cookie: mechanize.Cookie object - request: object implementing the interface defined by - CookieJar.extract_cookies.__doc__ - - """ - raise NotImplementedError() - - def return_ok(self, cookie, request): - """Return true if (and only if) cookie should be returned to server. 
- - cookie: mechanize.Cookie object - request: object implementing the interface defined by - CookieJar.add_cookie_header.__doc__ - - """ - raise NotImplementedError() - - def domain_return_ok(self, domain, request): - """Return false if cookies should not be returned, given cookie domain. - - This is here as an optimization, to remove the need for checking every - cookie with a particular domain (which may involve reading many files). - The default implementations of domain_return_ok and path_return_ok - (return True) leave all the work to return_ok. - - If domain_return_ok returns true for the cookie domain, path_return_ok - is called for the cookie path. Otherwise, path_return_ok and return_ok - are never called for that cookie domain. If path_return_ok returns - true, return_ok is called with the Cookie object itself for a full - check. Otherwise, return_ok is never called for that cookie path. - - Note that domain_return_ok is called for every *cookie* domain, not - just for the *request* domain. For example, the function might be - called with both ".acme.com" and "www.acme.com" if the request domain - is "www.acme.com". The same goes for path_return_ok. - - For argument documentation, see the docstring for return_ok. - - """ - return True - - def path_return_ok(self, path, request): - """Return false if cookies should not be returned, given cookie path. - - See the docstring for domain_return_ok. - - """ - return True - - -class DefaultCookiePolicy(CookiePolicy): - """Implements the standard rules for accepting and returning cookies. - - Both RFC 2965 and Netscape cookies are covered. RFC 2965 handling is - switched off by default. - - The easiest way to provide your own policy is to override this class and - call its methods in your overriden implementations before adding your own - additional checks. - - import mechanize - class MyCookiePolicy(mechanize.DefaultCookiePolicy): - def set_ok(self, cookie, request): - if not mechanize.DefaultCookiePolicy.set_ok( - self, cookie, request): - return False - if i_dont_want_to_store_this_cookie(): - return False - return True - - In addition to the features required to implement the CookiePolicy - interface, this class allows you to block and allow domains from setting - and receiving cookies. There are also some strictness switches that allow - you to tighten up the rather loose Netscape protocol rules a little bit (at - the cost of blocking some benign cookies). - - A domain blacklist and whitelist is provided (both off by default). Only - domains not in the blacklist and present in the whitelist (if the whitelist - is active) participate in cookie setting and returning. Use the - blocked_domains constructor argument, and blocked_domains and - set_blocked_domains methods (and the corresponding argument and methods for - allowed_domains). If you set a whitelist, you can turn it off again by - setting it to None. - - Domains in block or allow lists that do not start with a dot must - string-compare equal. For example, "acme.com" matches a blacklist entry of - "acme.com", but "www.acme.com" does not. Domains that do start with a dot - are matched by more specific domains too. For example, both "www.acme.com" - and "www.munitions.acme.com" match ".acme.com" (but "acme.com" itself does - not). IP addresses are an exception, and must match exactly. For example, - if blocked_domains contains "192.168.1.2" and ".168.1.2" 192.168.1.2 is - blocked, but 193.168.1.2 is not. 
- - Additional Public Attributes: - - General strictness switches - - strict_domain: don't allow sites to set two-component domains with - country-code top-level domains like .co.uk, .gov.uk, .co.nz. etc. - This is far from perfect and isn't guaranteed to work! - - RFC 2965 protocol strictness switches - - strict_rfc2965_unverifiable: follow RFC 2965 rules on unverifiable - transactions (usually, an unverifiable transaction is one resulting from - a redirect or an image hosted on another site); if this is false, cookies - are NEVER blocked on the basis of verifiability - - Netscape protocol strictness switches - - strict_ns_unverifiable: apply RFC 2965 rules on unverifiable transactions - even to Netscape cookies - strict_ns_domain: flags indicating how strict to be with domain-matching - rules for Netscape cookies: - DomainStrictNoDots: when setting cookies, host prefix must not contain a - dot (e.g. www.foo.bar.com can't set a cookie for .bar.com, because - www.foo contains a dot) - DomainStrictNonDomain: cookies that did not explicitly specify a Domain - cookie-attribute can only be returned to a domain that string-compares - equal to the domain that set the cookie (e.g. rockets.acme.com won't - be returned cookies from acme.com that had no Domain cookie-attribute) - DomainRFC2965Match: when setting cookies, require a full RFC 2965 - domain-match - DomainLiberal and DomainStrict are the most useful combinations of the - above flags, for convenience - strict_ns_set_initial_dollar: ignore cookies in Set-Cookie: headers that - have names starting with '$' - strict_ns_set_path: don't allow setting cookies whose path doesn't - path-match request URI - - """ - - DomainStrictNoDots = 1 - DomainStrictNonDomain = 2 - DomainRFC2965Match = 4 - - DomainLiberal = 0 - DomainStrict = DomainStrictNoDots|DomainStrictNonDomain - - def __init__(self, - blocked_domains=None, allowed_domains=None, - netscape=True, rfc2965=False, - # WARNING: this argument will change or go away if is not - # accepted into the Python standard library in this form! - # default, ie. treat 2109 as netscape iff not rfc2965 - rfc2109_as_netscape=None, - hide_cookie2=False, - strict_domain=False, - strict_rfc2965_unverifiable=True, - strict_ns_unverifiable=False, - strict_ns_domain=DomainLiberal, - strict_ns_set_initial_dollar=False, - strict_ns_set_path=False, - ): - """ - Constructor arguments should be used as keyword arguments only. - - blocked_domains: sequence of domain names that we never accept cookies - from, nor return cookies to - allowed_domains: if not None, this is a sequence of the only domains - for which we accept and return cookies - - For other arguments, see CookiePolicy.__doc__ and - DefaultCookiePolicy.__doc__.. 
- - """ - self.netscape = netscape - self.rfc2965 = rfc2965 - self.rfc2109_as_netscape = rfc2109_as_netscape - self.hide_cookie2 = hide_cookie2 - self.strict_domain = strict_domain - self.strict_rfc2965_unverifiable = strict_rfc2965_unverifiable - self.strict_ns_unverifiable = strict_ns_unverifiable - self.strict_ns_domain = strict_ns_domain - self.strict_ns_set_initial_dollar = strict_ns_set_initial_dollar - self.strict_ns_set_path = strict_ns_set_path - - if blocked_domains is not None: - self._blocked_domains = tuple(blocked_domains) - else: - self._blocked_domains = () - - if allowed_domains is not None: - allowed_domains = tuple(allowed_domains) - self._allowed_domains = allowed_domains - - def blocked_domains(self): - """Return the sequence of blocked domains (as a tuple).""" - return self._blocked_domains - def set_blocked_domains(self, blocked_domains): - """Set the sequence of blocked domains.""" - self._blocked_domains = tuple(blocked_domains) - - def is_blocked(self, domain): - for blocked_domain in self._blocked_domains: - if user_domain_match(domain, blocked_domain): - return True - return False - - def allowed_domains(self): - """Return None, or the sequence of allowed domains (as a tuple).""" - return self._allowed_domains - def set_allowed_domains(self, allowed_domains): - """Set the sequence of allowed domains, or None.""" - if allowed_domains is not None: - allowed_domains = tuple(allowed_domains) - self._allowed_domains = allowed_domains - - def is_not_allowed(self, domain): - if self._allowed_domains is None: - return False - for allowed_domain in self._allowed_domains: - if user_domain_match(domain, allowed_domain): - return False - return True - - def set_ok(self, cookie, request): - """ - If you override set_ok, be sure to call this method. If it returns - false, so should your subclass (assuming your subclass wants to be more - strict about which cookies to accept). - - """ - debug(" - checking cookie %s", cookie) - - assert cookie.name is not None - - for n in "version", "verifiability", "name", "path", "domain", "port": - fn_name = "set_ok_"+n - fn = getattr(self, fn_name) - if not fn(cookie, request): - return False - - return True - - def set_ok_version(self, cookie, request): - if cookie.version is None: - # Version is always set to 0 by parse_ns_headers if it's a Netscape - # cookie, so this must be an invalid RFC 2965 cookie. - debug(" Set-Cookie2 without version attribute (%s)", cookie) - return False - if cookie.version > 0 and not self.rfc2965: - debug(" RFC 2965 cookies are switched off") - return False - elif cookie.version == 0 and not self.netscape: - debug(" Netscape cookies are switched off") - return False - return True - - def set_ok_verifiability(self, cookie, request): - if request_is_unverifiable(request) and is_third_party(request): - if cookie.version > 0 and self.strict_rfc2965_unverifiable: - debug(" third-party RFC 2965 cookie during " - "unverifiable transaction") - return False - elif cookie.version == 0 and self.strict_ns_unverifiable: - debug(" third-party Netscape cookie during " - "unverifiable transaction") - return False - return True - - def set_ok_name(self, cookie, request): - # Try and stop servers setting V0 cookies designed to hack other - # servers that know both V0 and V1 protocols. 
- if (cookie.version == 0 and self.strict_ns_set_initial_dollar and - cookie.name.startswith("$")): - debug(" illegal name (starts with '$'): '%s'", cookie.name) - return False - return True - - def set_ok_path(self, cookie, request): - if cookie.path_specified: - req_path = request_path(request) - if ((cookie.version > 0 or - (cookie.version == 0 and self.strict_ns_set_path)) and - not req_path.startswith(cookie.path)): - debug(" path attribute %s is not a prefix of request " - "path %s", cookie.path, req_path) - return False - return True - - def set_ok_countrycode_domain(self, cookie, request): - """Return False if explicit cookie domain is not acceptable. - - Called by set_ok_domain, for convenience of overriding by - subclasses. - - """ - if cookie.domain_specified and self.strict_domain: - domain = cookie.domain - # since domain was specified, we know that: - assert domain.startswith(".") - if domain.count(".") == 2: - # domain like .foo.bar - i = domain.rfind(".") - tld = domain[i+1:] - sld = domain[1:i] - if (sld.lower() in [ - "co", "ac", - "com", "edu", "org", "net", "gov", "mil", "int", - "aero", "biz", "cat", "coop", "info", "jobs", "mobi", - "museum", "name", "pro", "travel", - ] and - len(tld) == 2): - # domain like .co.uk - return False - return True - - def set_ok_domain(self, cookie, request): - if self.is_blocked(cookie.domain): - debug(" domain %s is in user block-list", cookie.domain) - return False - if self.is_not_allowed(cookie.domain): - debug(" domain %s is not in user allow-list", cookie.domain) - return False - if not self.set_ok_countrycode_domain(cookie, request): - debug(" country-code second level domain %s", cookie.domain) - return False - if cookie.domain_specified: - req_host, erhn = eff_request_host_lc(request) - domain = cookie.domain - if domain.startswith("."): - undotted_domain = domain[1:] - else: - undotted_domain = domain - embedded_dots = (undotted_domain.find(".") >= 0) - if not embedded_dots and domain != ".local": - debug(" non-local domain %s contains no embedded dot", - domain) - return False - if cookie.version == 0: - if (not erhn.endswith(domain) and - (not erhn.startswith(".") and - not ("."+erhn).endswith(domain))): - debug(" effective request-host %s (even with added " - "initial dot) does not end end with %s", - erhn, domain) - return False - if (cookie.version > 0 or - (self.strict_ns_domain & self.DomainRFC2965Match)): - if not domain_match(erhn, domain): - debug(" effective request-host %s does not domain-match " - "%s", erhn, domain) - return False - if (cookie.version > 0 or - (self.strict_ns_domain & self.DomainStrictNoDots)): - host_prefix = req_host[:-len(domain)] - if (host_prefix.find(".") >= 0 and - not IPV4_RE.search(req_host)): - debug(" host prefix %s for domain %s contains a dot", - host_prefix, domain) - return False - return True - - def set_ok_port(self, cookie, request): - if cookie.port_specified: - req_port = request_port(request) - if req_port is None: - req_port = "80" - else: - req_port = str(req_port) - for p in cookie.port.split(","): - try: - int(p) - except ValueError: - debug(" bad port %s (not numeric)", p) - return False - if p == req_port: - break - else: - debug(" request port (%s) not found in %s", - req_port, cookie.port) - return False - return True - - def return_ok(self, cookie, request): - """ - If you override return_ok, be sure to call this method. If it returns - false, so should your subclass (assuming your subclass wants to be more - strict about which cookies to return). 
- - """ - # Path has already been checked by path_return_ok, and domain blocking - # done by domain_return_ok. - debug(" - checking cookie %s", cookie) - - for n in ("version", "verifiability", "secure", "expires", "port", - "domain"): - fn_name = "return_ok_"+n - fn = getattr(self, fn_name) - if not fn(cookie, request): - return False - return True - - def return_ok_version(self, cookie, request): - if cookie.version > 0 and not self.rfc2965: - debug(" RFC 2965 cookies are switched off") - return False - elif cookie.version == 0 and not self.netscape: - debug(" Netscape cookies are switched off") - return False - return True - - def return_ok_verifiability(self, cookie, request): - if request_is_unverifiable(request) and is_third_party(request): - if cookie.version > 0 and self.strict_rfc2965_unverifiable: - debug(" third-party RFC 2965 cookie during unverifiable " - "transaction") - return False - elif cookie.version == 0 and self.strict_ns_unverifiable: - debug(" third-party Netscape cookie during unverifiable " - "transaction") - return False - return True - - def return_ok_secure(self, cookie, request): - if cookie.secure and request.get_type() != "https": - debug(" secure cookie with non-secure request") - return False - return True - - def return_ok_expires(self, cookie, request): - if cookie.is_expired(self._now): - debug(" cookie expired") - return False - return True - - def return_ok_port(self, cookie, request): - if cookie.port: - req_port = request_port(request) - if req_port is None: - req_port = "80" - for p in cookie.port.split(","): - if p == req_port: - break - else: - debug(" request port %s does not match cookie port %s", - req_port, cookie.port) - return False - return True - - def return_ok_domain(self, cookie, request): - req_host, erhn = eff_request_host_lc(request) - domain = cookie.domain - - # strict check of non-domain cookies: Mozilla does this, MSIE5 doesn't - if (cookie.version == 0 and - (self.strict_ns_domain & self.DomainStrictNonDomain) and - not cookie.domain_specified and domain != erhn): - debug(" cookie with unspecified domain does not string-compare " - "equal to request domain") - return False - - if cookie.version > 0 and not domain_match(erhn, domain): - debug(" effective request-host name %s does not domain-match " - "RFC 2965 cookie domain %s", erhn, domain) - return False - if cookie.version == 0 and not ("."+erhn).endswith(domain): - debug(" request-host %s does not match Netscape cookie domain " - "%s", req_host, domain) - return False - return True - - def domain_return_ok(self, domain, request): - # Liberal check of domain. This is here as an optimization to avoid - # having to load lots of MSIE cookie files unless necessary. - - # Munge req_host and erhn to always start with a dot, so as to err on - # the side of letting cookies through. 
- dotted_req_host, dotted_erhn = eff_request_host_lc(request) - if not dotted_req_host.startswith("."): - dotted_req_host = "."+dotted_req_host - if not dotted_erhn.startswith("."): - dotted_erhn = "."+dotted_erhn - if not (dotted_req_host.endswith(domain) or - dotted_erhn.endswith(domain)): - #debug(" request domain %s does not match cookie domain %s", - # req_host, domain) - return False - - if self.is_blocked(domain): - debug(" domain %s is in user block-list", domain) - return False - if self.is_not_allowed(domain): - debug(" domain %s is not in user allow-list", domain) - return False - - return True - - def path_return_ok(self, path, request): - debug("- checking cookie path=%s", path) - req_path = request_path(request) - if not req_path.startswith(path): - debug(" %s does not path-match %s", req_path, path) - return False - return True - - -def vals_sorted_by_key(adict): - keys = adict.keys() - keys.sort() - return map(adict.get, keys) - -class MappingIterator: - """Iterates over nested mapping, depth-first, in sorted order by key.""" - def __init__(self, mapping): - self._s = [(vals_sorted_by_key(mapping), 0, None)] # LIFO stack - - def __iter__(self): return self - - def next(self): - # this is hairy because of lack of generators - while 1: - try: - vals, i, prev_item = self._s.pop() - except IndexError: - raise StopIteration() - if i < len(vals): - item = vals[i] - i = i + 1 - self._s.append((vals, i, prev_item)) - try: - item.items - except AttributeError: - # non-mapping - break - else: - # mapping - self._s.append((vals_sorted_by_key(item), 0, item)) - continue - return item - - -# Used as second parameter to dict.get method, to distinguish absent -# dict key from one with a None value. -class Absent: pass - -class CookieJar: - """Collection of HTTP cookies. - - You may not need to know about this class: try mechanize.urlopen(). - - The major methods are extract_cookies and add_cookie_header; these are all - you are likely to need. - - CookieJar supports the iterator protocol: - - for cookie in cookiejar: - # do something with cookie - - Methods: - - add_cookie_header(request) - extract_cookies(response, request) - get_policy() - set_policy(policy) - cookies_for_request(request) - make_cookies(response, request) - set_cookie_if_ok(cookie, request) - set_cookie(cookie) - clear_session_cookies() - clear_expired_cookies() - clear(domain=None, path=None, name=None) - - Public attributes - - policy: CookiePolicy object - - """ - - non_word_re = re.compile(r"\W") - quote_re = re.compile(r"([\"\\])") - strict_domain_re = re.compile(r"\.?[^.]*") - domain_re = re.compile(r"[^.]*") - dots_re = re.compile(r"^\.+") - - def __init__(self, policy=None): - """ - See CookieJar.__doc__ for argument documentation. 
- - """ - if policy is None: - policy = DefaultCookiePolicy() - self._policy = policy - - self._cookies = {} - - # for __getitem__ iteration in pre-2.2 Pythons - self._prev_getitem_index = 0 - - def get_policy(self): - return self._policy - - def set_policy(self, policy): - self._policy = policy - - def _cookies_for_domain(self, domain, request): - cookies = [] - if not self._policy.domain_return_ok(domain, request): - return [] - debug("Checking %s for cookies to return", domain) - cookies_by_path = self._cookies[domain] - for path in cookies_by_path.keys(): - if not self._policy.path_return_ok(path, request): - continue - cookies_by_name = cookies_by_path[path] - for cookie in cookies_by_name.values(): - if not self._policy.return_ok(cookie, request): - debug(" not returning cookie") - continue - debug(" it's a match") - cookies.append(cookie) - return cookies - - def cookies_for_request(self, request): - """Return a list of cookies to be returned to server. - - The returned list of cookie instances is sorted in the order they - should appear in the Cookie: header for return to the server. - - See add_cookie_header.__doc__ for the interface required of the - request argument. - - New in version 0.1.10 - - """ - self._policy._now = self._now = int(time.time()) - cookies = self._cookies_for_request(request) - # add cookies in order of most specific (i.e. longest) path first - def decreasing_size(a, b): return cmp(len(b.path), len(a.path)) - cookies.sort(decreasing_size) - return cookies - - def _cookies_for_request(self, request): - """Return a list of cookies to be returned to server.""" - # this method still exists (alongside cookies_for_request) because it - # is part of an implied protected interface for subclasses of cookiejar - # XXX document that implied interface, or provide another way of - # implementing cookiejars than subclassing - cookies = [] - for domain in self._cookies.keys(): - cookies.extend(self._cookies_for_domain(domain, request)) - return cookies - - def _cookie_attrs(self, cookies): - """Return a list of cookie-attributes to be returned to server. - - The $Version attribute is also added when appropriate (currently only - once per request). - - >>> jar = CookieJar() - >>> ns_cookie = Cookie(0, "foo", '"bar"', None, False, - ... "example.com", False, False, - ... "/", False, False, None, True, - ... None, None, {}) - >>> jar._cookie_attrs([ns_cookie]) - ['foo="bar"'] - >>> rfc2965_cookie = Cookie(1, "foo", "bar", None, False, - ... ".example.com", True, False, - ... "/", False, False, None, True, - ... None, None, {}) - >>> jar._cookie_attrs([rfc2965_cookie]) - ['$Version=1', 'foo=bar', '$Domain="example.com"'] - - """ - version_set = False - - attrs = [] - for cookie in cookies: - # set version of Cookie header - # XXX - # What should it be if multiple matching Set-Cookie headers have - # different versions themselves? - # Answer: there is no answer; was supposed to be settled by - # RFC 2965 errata, but that may never appear... 
- version = cookie.version - if not version_set: - version_set = True - if version > 0: - attrs.append("$Version=%s" % version) - - # quote cookie value if necessary - # (not for Netscape protocol, which already has any quotes - # intact, due to the poorly-specified Netscape Cookie: syntax) - if ((cookie.value is not None) and - self.non_word_re.search(cookie.value) and version > 0): - value = self.quote_re.sub(r"\\\1", cookie.value) - else: - value = cookie.value - - # add cookie-attributes to be returned in Cookie header - if cookie.value is None: - attrs.append(cookie.name) - else: - attrs.append("%s=%s" % (cookie.name, value)) - if version > 0: - if cookie.path_specified: - attrs.append('$Path="%s"' % cookie.path) - if cookie.domain.startswith("."): - domain = cookie.domain - if (not cookie.domain_initial_dot and - domain.startswith(".")): - domain = domain[1:] - attrs.append('$Domain="%s"' % domain) - if cookie.port is not None: - p = "$Port" - if cookie.port_specified: - p = p + ('="%s"' % cookie.port) - attrs.append(p) - - return attrs - - def add_cookie_header(self, request): - """Add correct Cookie: header to request (mechanize.Request object). - - The Cookie2 header is also added unless policy.hide_cookie2 is true. - - The request object (usually a mechanize.Request instance) must support - the methods get_full_url, get_host, is_unverifiable, get_type, - has_header, get_header, header_items and add_unredirected_header, as - documented by urllib2. - """ - debug("add_cookie_header") - cookies = self.cookies_for_request(request) - - attrs = self._cookie_attrs(cookies) - if attrs: - if not request.has_header("Cookie"): - request.add_unredirected_header("Cookie", "; ".join(attrs)) - - # if necessary, advertise that we know RFC 2965 - if self._policy.rfc2965 and not self._policy.hide_cookie2: - for cookie in cookies: - if cookie.version != 1 and not request.has_header("Cookie2"): - request.add_unredirected_header("Cookie2", '$Version="1"') - break - - self.clear_expired_cookies() - - def _normalized_cookie_tuples(self, attrs_set): - """Return list of tuples containing normalised cookie information. - - attrs_set is the list of lists of key,value pairs extracted from - the Set-Cookie or Set-Cookie2 headers. - - Tuples are name, value, standard, rest, where name and value are the - cookie name and value, standard is a dictionary containing the standard - cookie-attributes (discard, secure, version, expires or max-age, - domain, path and port) and rest is a dictionary containing the rest of - the cookie-attributes. - - """ - cookie_tuples = [] - - boolean_attrs = "discard", "secure" - value_attrs = ("version", - "expires", "max-age", - "domain", "path", "port", - "comment", "commenturl") - - for cookie_attrs in attrs_set: - name, value = cookie_attrs[0] - - # Build dictionary of standard cookie-attributes (standard) and - # dictionary of other cookie-attributes (rest). - - # Note: expiry time is normalised to seconds since epoch. V0 - # cookies should have the Expires cookie-attribute, and V1 cookies - # should have Max-Age, but since V1 includes RFC 2109 cookies (and - # since V0 cookies may be a mish-mash of Netscape and RFC 2109), we - # accept either (but prefer Max-Age). 
- max_age_set = False - - bad_cookie = False - - standard = {} - rest = {} - for k, v in cookie_attrs[1:]: - lc = k.lower() - # don't lose case distinction for unknown fields - if lc in value_attrs or lc in boolean_attrs: - k = lc - if k in boolean_attrs and v is None: - # boolean cookie-attribute is present, but has no value - # (like "discard", rather than "port=80") - v = True - if standard.has_key(k): - # only first value is significant - continue - if k == "domain": - if v is None: - debug(" missing value for domain attribute") - bad_cookie = True - break - # RFC 2965 section 3.3.3 - v = v.lower() - if k == "expires": - if max_age_set: - # Prefer max-age to expires (like Mozilla) - continue - if v is None: - debug(" missing or invalid value for expires " - "attribute: treating as session cookie") - continue - if k == "max-age": - max_age_set = True - if v is None: - debug(" missing value for max-age attribute") - bad_cookie = True - break - try: - v = int(v) - except ValueError: - debug(" missing or invalid (non-numeric) value for " - "max-age attribute") - bad_cookie = True - break - # convert RFC 2965 Max-Age to seconds since epoch - # XXX Strictly you're supposed to follow RFC 2616 - # age-calculation rules. Remember that zero Max-Age is a - # is a request to discard (old and new) cookie, though. - k = "expires" - v = self._now + v - if (k in value_attrs) or (k in boolean_attrs): - if (v is None and - k not in ["port", "comment", "commenturl"]): - debug(" missing value for %s attribute" % k) - bad_cookie = True - break - standard[k] = v - else: - rest[k] = v - - if bad_cookie: - continue - - cookie_tuples.append((name, value, standard, rest)) - - return cookie_tuples - - def _cookie_from_cookie_tuple(self, tup, request): - # standard is dict of standard cookie-attributes, rest is dict of the - # rest of them - name, value, standard, rest = tup - - domain = standard.get("domain", Absent) - path = standard.get("path", Absent) - port = standard.get("port", Absent) - expires = standard.get("expires", Absent) - - # set the easy defaults - version = standard.get("version", None) - if version is not None: - try: - version = int(version) - except ValueError: - return None # invalid version, ignore cookie - secure = standard.get("secure", False) - # (discard is also set if expires is Absent) - discard = standard.get("discard", False) - comment = standard.get("comment", None) - comment_url = standard.get("commenturl", None) - - # set default path - if path is not Absent and path != "": - path_specified = True - path = escape_path(path) - else: - path_specified = False - path = request_path(request) - i = path.rfind("/") - if i != -1: - if version == 0: - # Netscape spec parts company from reality here - path = path[:i] - else: - path = path[:i+1] - if len(path) == 0: path = "/" - - # set default domain - domain_specified = domain is not Absent - # but first we have to remember whether it starts with a dot - domain_initial_dot = False - if domain_specified: - domain_initial_dot = bool(domain.startswith(".")) - if domain is Absent: - req_host, erhn = eff_request_host_lc(request) - domain = erhn - elif not domain.startswith("."): - domain = "."+domain - - # set default port - port_specified = False - if port is not Absent: - if port is None: - # Port attr present, but has no value: default to request port. - # Cookie should then only be sent back on that port. - port = request_port(request) - else: - port_specified = True - port = re.sub(r"\s+", "", port) - else: - # No port attr present. 
Cookie can be sent back on any port. - port = None - - # set default expires and discard - if expires is Absent: - expires = None - discard = True - - return Cookie(version, - name, value, - port, port_specified, - domain, domain_specified, domain_initial_dot, - path, path_specified, - secure, - expires, - discard, - comment, - comment_url, - rest) - - def _cookies_from_attrs_set(self, attrs_set, request): - cookie_tuples = self._normalized_cookie_tuples(attrs_set) - - cookies = [] - for tup in cookie_tuples: - cookie = self._cookie_from_cookie_tuple(tup, request) - if cookie: cookies.append(cookie) - return cookies - - def _process_rfc2109_cookies(self, cookies): - if self._policy.rfc2109_as_netscape is None: - rfc2109_as_netscape = not self._policy.rfc2965 - else: - rfc2109_as_netscape = self._policy.rfc2109_as_netscape - for cookie in cookies: - if cookie.version == 1: - cookie.rfc2109 = True - if rfc2109_as_netscape: - # treat 2109 cookies as Netscape cookies rather than - # as RFC2965 cookies - cookie.version = 0 - - def _make_cookies(self, response, request): - # get cookie-attributes for RFC 2965 and Netscape protocols - headers = response.info() - rfc2965_hdrs = headers.getheaders("Set-Cookie2") - ns_hdrs = headers.getheaders("Set-Cookie") - - rfc2965 = self._policy.rfc2965 - netscape = self._policy.netscape - - if ((not rfc2965_hdrs and not ns_hdrs) or - (not ns_hdrs and not rfc2965) or - (not rfc2965_hdrs and not netscape) or - (not netscape and not rfc2965)): - return [] # no relevant cookie headers: quick exit - - try: - cookies = self._cookies_from_attrs_set( - split_header_words(rfc2965_hdrs), request) - except: - reraise_unmasked_exceptions() - cookies = [] - - if ns_hdrs and netscape: - try: - # RFC 2109 and Netscape cookies - ns_cookies = self._cookies_from_attrs_set( - parse_ns_headers(ns_hdrs), request) - except: - reraise_unmasked_exceptions() - ns_cookies = [] - self._process_rfc2109_cookies(ns_cookies) - - # Look for Netscape cookies (from Set-Cookie headers) that match - # corresponding RFC 2965 cookies (from Set-Cookie2 headers). - # For each match, keep the RFC 2965 cookie and ignore the Netscape - # cookie (RFC 2965 section 9.1). Actually, RFC 2109 cookies are - # bundled in with the Netscape cookies for this purpose, which is - # reasonable behaviour. - if rfc2965: - lookup = {} - for cookie in cookies: - lookup[(cookie.domain, cookie.path, cookie.name)] = None - - def no_matching_rfc2965(ns_cookie, lookup=lookup): - key = ns_cookie.domain, ns_cookie.path, ns_cookie.name - return not lookup.has_key(key) - ns_cookies = filter(no_matching_rfc2965, ns_cookies) - - if ns_cookies: - cookies.extend(ns_cookies) - - return cookies - - def make_cookies(self, response, request): - """Return sequence of Cookie objects extracted from response object. - - See extract_cookies.__doc__ for the interface required of the - response and request arguments. - - """ - self._policy._now = self._now = int(time.time()) - return [cookie for cookie in self._make_cookies(response, request) - if cookie.expires is None or not cookie.expires <= self._now] - - def set_cookie_if_ok(self, cookie, request): - """Set a cookie if policy says it's OK to do so. - - cookie: mechanize.Cookie instance - request: see extract_cookies.__doc__ for the required interface - - """ - self._policy._now = self._now = int(time.time()) - - if self._policy.set_ok(cookie, request): - self.set_cookie(cookie) - - def set_cookie(self, cookie): - """Set a cookie, without checking whether or not it should be set. 
- - cookie: mechanize.Cookie instance - """ - c = self._cookies - if not c.has_key(cookie.domain): c[cookie.domain] = {} - c2 = c[cookie.domain] - if not c2.has_key(cookie.path): c2[cookie.path] = {} - c3 = c2[cookie.path] - c3[cookie.name] = cookie - - def extract_cookies(self, response, request): - """Extract cookies from response, where allowable given the request. - - Look for allowable Set-Cookie: and Set-Cookie2: headers in the response - object passed as argument. Any of these headers that are found are - used to update the state of the object (subject to the policy.set_ok - method's approval). - - The response object (usually be the result of a call to - mechanize.urlopen, or similar) should support an info method, which - returns a mimetools.Message object (in fact, the 'mimetools.Message - object' may be any object that provides a getheaders method). - - The request object (usually a mechanize.Request instance) must support - the methods get_full_url, get_type, get_host, and is_unverifiable, as - documented by mechanize, and the port attribute (the port number). The - request is used to set default values for cookie-attributes as well as - for checking that the cookie is OK to be set. - - """ - debug("extract_cookies: %s", response.info()) - self._policy._now = self._now = int(time.time()) - - for cookie in self._make_cookies(response, request): - if cookie.expires is not None and cookie.expires <= self._now: - # Expiry date in past is request to delete cookie. This can't be - # in DefaultCookiePolicy, because can't delete cookies there. - try: - self.clear(cookie.domain, cookie.path, cookie.name) - except KeyError: - pass - debug("Expiring cookie, domain='%s', path='%s', name='%s'", - cookie.domain, cookie.path, cookie.name) - elif self._policy.set_ok(cookie, request): - debug(" setting cookie: %s", cookie) - self.set_cookie(cookie) - - def clear(self, domain=None, path=None, name=None): - """Clear some cookies. - - Invoking this method without arguments will clear all cookies. If - given a single argument, only cookies belonging to that domain will be - removed. If given two arguments, cookies belonging to the specified - path within that domain are removed. If given three arguments, then - the cookie with the specified name, path and domain is removed. - - Raises KeyError if no matching cookie exists. - - """ - if name is not None: - if (domain is None) or (path is None): - raise ValueError( - "domain and path must be given to remove a cookie by name") - del self._cookies[domain][path][name] - elif path is not None: - if domain is None: - raise ValueError( - "domain must be given to remove cookies by path") - del self._cookies[domain][path] - elif domain is not None: - del self._cookies[domain] - else: - self._cookies = {} - - def clear_session_cookies(self): - """Discard all session cookies. - - Discards all cookies held by object which had either no Max-Age or - Expires cookie-attribute or an explicit Discard cookie-attribute, or - which otherwise have ended up with a true discard attribute. For - interactive browsers, the end of a session usually corresponds to - closing the browser window. - - Note that the save method won't save session cookies anyway, unless you - ask otherwise by passing a true ignore_discard argument. - - """ - for cookie in self: - if cookie.discard: - self.clear(cookie.domain, cookie.path, cookie.name) - - def clear_expired_cookies(self): - """Discard all expired cookies. 
- - You probably don't need to call this method: expired cookies are never - sent back to the server (provided you're using DefaultCookiePolicy), - this method is called by CookieJar itself every so often, and the save - method won't save expired cookies anyway (unless you ask otherwise by - passing a true ignore_expires argument). - - """ - now = time.time() - for cookie in self: - if cookie.is_expired(now): - self.clear(cookie.domain, cookie.path, cookie.name) - - def __getitem__(self, i): - if i == 0: - self._getitem_iterator = self.__iter__() - elif self._prev_getitem_index != i-1: raise IndexError( - "CookieJar.__getitem__ only supports sequential iteration") - self._prev_getitem_index = i - try: - return self._getitem_iterator.next() - except StopIteration: - raise IndexError() - - def __iter__(self): - return MappingIterator(self._cookies) - - def __len__(self): - """Return number of contained cookies.""" - i = 0 - for cookie in self: i = i + 1 - return i - - def __repr__(self): - r = [] - for cookie in self: r.append(repr(cookie)) - return "<%s[%s]>" % (self.__class__, ", ".join(r)) - - def __str__(self): - r = [] - for cookie in self: r.append(str(cookie)) - return "<%s[%s]>" % (self.__class__, ", ".join(r)) - - -class LoadError(Exception): pass - -class FileCookieJar(CookieJar): - """CookieJar that can be loaded from and saved to a file. - - Additional methods - - save(filename=None, ignore_discard=False, ignore_expires=False) - load(filename=None, ignore_discard=False, ignore_expires=False) - revert(filename=None, ignore_discard=False, ignore_expires=False) - - Additional public attributes - - filename: filename for loading and saving cookies - - Additional public readable attributes - - delayload: request that cookies are lazily loaded from disk; this is only - a hint since this only affects performance, not behaviour (unless the - cookies on disk are changing); a CookieJar object may ignore it (in fact, - only MSIECookieJar lazily loads cookies at the moment) - - """ - - def __init__(self, filename=None, delayload=False, policy=None): - """ - See FileCookieJar.__doc__ for argument documentation. - - Cookies are NOT loaded from the named file until either the load or - revert method is called. - - """ - CookieJar.__init__(self, policy) - if filename is not None and not isstringlike(filename): - raise ValueError("filename must be string-like") - self.filename = filename - self.delayload = bool(delayload) - - def save(self, filename=None, ignore_discard=False, ignore_expires=False): - """Save cookies to a file. - - filename: name of file in which to save cookies - ignore_discard: save even cookies set to be discarded - ignore_expires: save even cookies that have expired - - The file is overwritten if it already exists, thus wiping all its - cookies. Saved cookies can be restored later using the load or revert - methods. If filename is not specified, self.filename is used; if - self.filename is None, ValueError is raised. - - """ - raise NotImplementedError() - - def load(self, filename=None, ignore_discard=False, ignore_expires=False): - """Load cookies from a file. - - Old cookies are kept unless overwritten by newly loaded ones. - - Arguments are as for .save(). - - If filename is not specified, self.filename is used; if self.filename - is None, ValueError is raised. The named file must be in the format - understood by the class, or LoadError will be raised. 
This format will - be identical to that written by the save method, unless the load format - is not sufficiently well understood (as is the case for MSIECookieJar). - - """ - if filename is None: - if self.filename is not None: filename = self.filename - else: raise ValueError(MISSING_FILENAME_TEXT) - - f = open(filename) - try: - self._really_load(f, filename, ignore_discard, ignore_expires) - finally: - f.close() - - def revert(self, filename=None, - ignore_discard=False, ignore_expires=False): - """Clear all cookies and reload cookies from a saved file. - - Raises LoadError (or IOError) if reversion is not successful; the - object's state will not be altered if this happens. - - """ - if filename is None: - if self.filename is not None: filename = self.filename - else: raise ValueError(MISSING_FILENAME_TEXT) - - old_state = copy.deepcopy(self._cookies) - self._cookies = {} - try: - self.load(filename, ignore_discard, ignore_expires) - except (LoadError, IOError): - self._cookies = old_state - raise diff --git a/plugin.video.alfa/lib/mechanize/_debug.py b/plugin.video.alfa/lib/mechanize/_debug.py deleted file mode 100755 index c17a06ce..00000000 --- a/plugin.video.alfa/lib/mechanize/_debug.py +++ /dev/null @@ -1,28 +0,0 @@ -import logging - -from _response import response_seek_wrapper -from _urllib2_fork import BaseHandler - - -class HTTPResponseDebugProcessor(BaseHandler): - handler_order = 900 # before redirections, after everything else - - def http_response(self, request, response): - if not hasattr(response, "seek"): - response = response_seek_wrapper(response) - info = logging.getLogger("mechanize.http_responses").info - try: - info(response.read()) - finally: - response.seek(0) - info("*****************************************************") - return response - - https_response = http_response - -class HTTPRedirectDebugProcessor(BaseHandler): - def http_request(self, request): - if hasattr(request, "redirect_dict"): - info = logging.getLogger("mechanize.http_redirects").info - info("redirecting to %s", request.get_full_url()) - return request diff --git a/plugin.video.alfa/lib/mechanize/_firefox3cookiejar.py b/plugin.video.alfa/lib/mechanize/_firefox3cookiejar.py deleted file mode 100755 index 83fcd21a..00000000 --- a/plugin.video.alfa/lib/mechanize/_firefox3cookiejar.py +++ /dev/null @@ -1,248 +0,0 @@ -"""Firefox 3 "cookies.sqlite" cookie persistence. - -Copyright 2008 John J Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import logging -import time - -from _clientcookie import CookieJar, Cookie, MappingIterator -from _util import isstringlike, experimental -debug = logging.getLogger("mechanize.cookies").debug - - -class Firefox3CookieJar(CookieJar): - - """Firefox 3 cookie jar. - - The cookies are stored in Firefox 3's "cookies.sqlite" format. - - Constructor arguments: - - filename: filename of cookies.sqlite (typically found at the top level - of a firefox profile directory) - autoconnect: as a convenience, connect to the SQLite cookies database at - Firefox3CookieJar construction time (default True) - policy: an object satisfying the mechanize.CookiePolicy interface - - Note that this is NOT a FileCookieJar, and there are no .load(), - .save() or .restore() methods. The database is in sync with the - cookiejar object's state after each public method call. 
- - Following Firefox's own behaviour, session cookies are never saved to - the database. - - The file is created, and an sqlite database written to it, if it does - not already exist. The moz_cookies database table is created if it does - not already exist. - """ - - # XXX - # handle DatabaseError exceptions - # add a FileCookieJar (explicit .save() / .revert() / .load() methods) - - def __init__(self, filename, autoconnect=True, policy=None): - experimental("Firefox3CookieJar is experimental code") - CookieJar.__init__(self, policy) - if filename is not None and not isstringlike(filename): - raise ValueError("filename must be string-like") - self.filename = filename - self._conn = None - if autoconnect: - self.connect() - - def connect(self): - import sqlite3 # not available in Python 2.4 stdlib - self._conn = sqlite3.connect(self.filename) - self._conn.isolation_level = "DEFERRED" - self._create_table_if_necessary() - - def close(self): - self._conn.close() - - def _transaction(self, func): - try: - cur = self._conn.cursor() - try: - result = func(cur) - finally: - cur.close() - except: - self._conn.rollback() - raise - else: - self._conn.commit() - return result - - def _execute(self, query, params=()): - return self._transaction(lambda cur: cur.execute(query, params)) - - def _query(self, query, params=()): - # XXX should we bother with a transaction? - cur = self._conn.cursor() - try: - cur.execute(query, params) - return cur.fetchall() - finally: - cur.close() - - def _create_table_if_necessary(self): - self._execute("""\ -CREATE TABLE IF NOT EXISTS moz_cookies (id INTEGER PRIMARY KEY, name TEXT, - value TEXT, host TEXT, path TEXT,expiry INTEGER, - lastAccessed INTEGER, isSecure INTEGER, isHttpOnly INTEGER)""") - - def _cookie_from_row(self, row): - (pk, name, value, domain, path, expires, - last_accessed, secure, http_only) = row - - version = 0 - domain = domain.encode("ascii", "ignore") - path = path.encode("ascii", "ignore") - name = name.encode("ascii", "ignore") - value = value.encode("ascii", "ignore") - secure = bool(secure) - - # last_accessed isn't a cookie attribute, so isn't added to rest - rest = {} - if http_only: - rest["HttpOnly"] = None - - if name == "": - name = value - value = None - - initial_dot = domain.startswith(".") - domain_specified = initial_dot - - discard = False - if expires == "": - expires = None - discard = True - - return Cookie(version, name, value, - None, False, - domain, domain_specified, initial_dot, - path, False, - secure, - expires, - discard, - None, - None, - rest) - - def clear(self, domain=None, path=None, name=None): - CookieJar.clear(self, domain, path, name) - where_parts = [] - sql_params = [] - if domain is not None: - where_parts.append("host = ?") - sql_params.append(domain) - if path is not None: - where_parts.append("path = ?") - sql_params.append(path) - if name is not None: - where_parts.append("name = ?") - sql_params.append(name) - where = " AND ".join(where_parts) - if where: - where = " WHERE " + where - def clear(cur): - cur.execute("DELETE FROM moz_cookies%s" % where, - tuple(sql_params)) - self._transaction(clear) - - def _row_from_cookie(self, cookie, cur): - expires = cookie.expires - if cookie.discard: - expires = "" - - domain = unicode(cookie.domain) - path = unicode(cookie.path) - name = unicode(cookie.name) - value = unicode(cookie.value) - secure = bool(int(cookie.secure)) - - if value is None: - value = name - name = "" - - last_accessed = int(time.time()) - http_only = cookie.has_nonstandard_attr("HttpOnly") 
- - query = cur.execute("""SELECT MAX(id) + 1 from moz_cookies""") - pk = query.fetchone()[0] - if pk is None: - pk = 1 - - return (pk, name, value, domain, path, expires, - last_accessed, secure, http_only) - - def set_cookie(self, cookie): - if cookie.discard: - CookieJar.set_cookie(self, cookie) - return - - def set_cookie(cur): - # XXX - # is this RFC 2965-correct? - # could this do an UPDATE instead? - row = self._row_from_cookie(cookie, cur) - name, unused, domain, path = row[1:5] - cur.execute("""\ -DELETE FROM moz_cookies WHERE host = ? AND path = ? AND name = ?""", - (domain, path, name)) - cur.execute("""\ -INSERT INTO moz_cookies VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?) -""", row) - self._transaction(set_cookie) - - def __iter__(self): - # session (non-persistent) cookies - for cookie in MappingIterator(self._cookies): - yield cookie - # persistent cookies - for row in self._query("""\ -SELECT * FROM moz_cookies ORDER BY name, path, host"""): - yield self._cookie_from_row(row) - - def _cookies_for_request(self, request): - session_cookies = CookieJar._cookies_for_request(self, request) - def get_cookies(cur): - query = cur.execute("SELECT host from moz_cookies") - domains = [row[0] for row in query.fetchall()] - cookies = [] - for domain in domains: - cookies += self._persistent_cookies_for_domain(domain, - request, cur) - return cookies - persistent_coookies = self._transaction(get_cookies) - return session_cookies + persistent_coookies - - def _persistent_cookies_for_domain(self, domain, request, cur): - cookies = [] - if not self._policy.domain_return_ok(domain, request): - return [] - debug("Checking %s for cookies to return", domain) - query = cur.execute("""\ -SELECT * from moz_cookies WHERE host = ? ORDER BY path""", - (domain,)) - cookies = [self._cookie_from_row(row) for row in query.fetchall()] - last_path = None - r = [] - for cookie in cookies: - if (cookie.path != last_path and - not self._policy.path_return_ok(cookie.path, request)): - last_path = cookie.path - continue - if not self._policy.return_ok(cookie, request): - debug(" not returning cookie") - continue - debug(" it's a match") - r.append(cookie) - return r diff --git a/plugin.video.alfa/lib/mechanize/_form.py b/plugin.video.alfa/lib/mechanize/_form.py deleted file mode 100755 index ed2b13b4..00000000 --- a/plugin.video.alfa/lib/mechanize/_form.py +++ /dev/null @@ -1,3280 +0,0 @@ -"""HTML form handling for web clients. - -HTML form handling for web clients: useful for parsing HTML forms, filling them -in and returning the completed forms to the server. This code developed from a -port of Gisle Aas' Perl module HTML::Form, from the libwww-perl library, but -the interface is not the same. - -The most useful docstring is the one for HTMLForm. - -RFC 1866: HTML 2.0 -RFC 1867: Form-based File Upload in HTML -RFC 2388: Returning Values from Forms: multipart/form-data -HTML 3.2 Specification, W3C Recommendation 14 January 1997 (for ISINDEX) -HTML 4.01 Specification, W3C Recommendation 24 December 1999 - - -Copyright 2002-2007 John J. Lee <jjl@pobox.com> -Copyright 2005 Gary Poster -Copyright 2005 Zope Corporation -Copyright 1998-2000 Gisle Aas. - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). 
- -""" - -# TODO: -# Clean up post the merge into mechanize -# * Remove code that was duplicated in ClientForm and mechanize -# * Remove weird import stuff -# * Remove pre-Python 2.4 compatibility cruft -# * Clean up tests -# * Later release: Remove the ClientForm 0.1 backwards-compatibility switch -# Remove parser testing hack -# Clean action URI -# Switch to unicode throughout -# See Wichert Akkerman's 2004-01-22 message to c.l.py. -# Apply recommendations from google code project CURLIES -# Apply recommendations from HTML 5 spec -# Add charset parameter to Content-type headers? How to find value?? -# Functional tests to add: -# Single and multiple file upload -# File upload with missing name (check standards) -# mailto: submission & enctype text/plain?? - -# Replace by_label etc. with moniker / selector concept. Allows, e.g., a -# choice between selection by value / id / label / element contents. Or -# choice between matching labels exactly or by substring. etc. - - -__all__ = ['AmbiguityError', 'CheckboxControl', 'Control', - 'ControlNotFoundError', 'FileControl', 'FormParser', 'HTMLForm', - 'HiddenControl', 'IgnoreControl', 'ImageControl', 'IsindexControl', - 'Item', 'ItemCountError', 'ItemNotFoundError', 'Label', - 'ListControl', 'LocateError', 'Missing', 'ParseError', 'ParseFile', - 'ParseFileEx', 'ParseResponse', 'ParseResponseEx','PasswordControl', - 'RadioControl', 'ScalarControl', 'SelectControl', - 'SubmitButtonControl', 'SubmitControl', 'TextControl', - 'TextareaControl', 'XHTMLCompatibleFormParser'] - -import HTMLParser -from cStringIO import StringIO -import inspect -import logging -import random -import re -import sys -import urllib -import urlparse -import warnings - -import _beautifulsoup -import _request - -# from Python itself, for backwards compatibility of raised exceptions -import sgmllib -# bundled copy of sgmllib -import _sgmllib_copy - - -VERSION = "0.2.11" - -CHUNK = 1024 # size of chunks fed to parser, in bytes - -DEFAULT_ENCODING = "latin-1" - -_logger = logging.getLogger("mechanize.forms") -OPTIMIZATION_HACK = True - -def debug(msg, *args, **kwds): - if OPTIMIZATION_HACK: - return - - caller_name = inspect.stack()[1][3] - extended_msg = '%%s %s' % msg - extended_args = (caller_name,)+args - _logger.debug(extended_msg, *extended_args, **kwds) - -def _show_debug_messages(): - global OPTIMIZATION_HACK - OPTIMIZATION_HACK = False - _logger.setLevel(logging.DEBUG) - handler = logging.StreamHandler(sys.stdout) - handler.setLevel(logging.DEBUG) - _logger.addHandler(handler) - - -def deprecation(message, stack_offset=0): - warnings.warn(message, DeprecationWarning, stacklevel=3+stack_offset) - - -class Missing: pass - -_compress_re = re.compile(r"\s+") -def compress_text(text): return _compress_re.sub(" ", text.strip()) - -def normalize_line_endings(text): - return re.sub(r"(?:(?<!\r)\n)|(?:\r(?!\n))", "\r\n", text) - - -def unescape(data, entities, encoding=DEFAULT_ENCODING): - if data is None or "&" not in data: - return data - - def replace_entities(match, entities=entities, encoding=encoding): - ent = match.group() - if ent[1] == "#": - return unescape_charref(ent[2:-1], encoding) - - repl = entities.get(ent) - if repl is not None: - if type(repl) != type(""): - try: - repl = repl.encode(encoding) - except UnicodeError: - repl = ent - else: - repl = ent - - return repl - - return re.sub(r"&#?[A-Za-z0-9]+?;", replace_entities, data) - -def unescape_charref(data, encoding): - name, base = data, 10 - if name.startswith("x"): - name, base= name[1:], 16 - uc = 
unichr(int(name, base)) - if encoding is None: - return uc - else: - try: - repl = uc.encode(encoding) - except UnicodeError: - repl = "&#%s;" % data - return repl - -def get_entitydefs(): - import htmlentitydefs - from codecs import latin_1_decode - entitydefs = {} - try: - htmlentitydefs.name2codepoint - except AttributeError: - entitydefs = {} - for name, char in htmlentitydefs.entitydefs.items(): - uc = latin_1_decode(char)[0] - if uc.startswith("&#") and uc.endswith(";"): - uc = unescape_charref(uc[2:-1], None) - entitydefs["&%s;" % name] = uc - else: - for name, codepoint in htmlentitydefs.name2codepoint.items(): - entitydefs["&%s;" % name] = unichr(codepoint) - return entitydefs - - -def issequence(x): - try: - x[0] - except (TypeError, KeyError): - return False - except IndexError: - pass - return True - -def isstringlike(x): - try: x+"" - except: return False - else: return True - - -def choose_boundary(): - """Return a string usable as a multipart boundary.""" - # follow IE and firefox - nonce = "".join([str(random.randint(0, sys.maxint-1)) for i in 0,1,2]) - return "-"*27 + nonce - -# This cut-n-pasted MimeWriter from standard library is here so can add -# to HTTP headers rather than message body when appropriate. It also uses -# \r\n in place of \n. This is a bit nasty. -class MimeWriter: - - """Generic MIME writer. - - Methods: - - __init__() - addheader() - flushheaders() - startbody() - startmultipartbody() - nextpart() - lastpart() - - A MIME writer is much more primitive than a MIME parser. It - doesn't seek around on the output file, and it doesn't use large - amounts of buffer space, so you have to write the parts in the - order they should occur on the output file. It does buffer the - headers you add, allowing you to rearrange their order. - - General usage is: - - f = <open the output file> - w = MimeWriter(f) - ...call w.addheader(key, value) 0 or more times... - - followed by either: - - f = w.startbody(content_type) - ...call f.write(data) for body data... - - or: - - w.startmultipartbody(subtype) - for each part: - subwriter = w.nextpart() - ...use the subwriter's methods to create the subpart... - w.lastpart() - - The subwriter is another MimeWriter instance, and should be - treated in the same way as the toplevel MimeWriter. This way, - writing recursive body parts is easy. - - Warning: don't forget to call lastpart()! - - XXX There should be more state so calls made in the wrong order - are detected. - - Some special cases: - - - startbody() just returns the file passed to the constructor; - but don't use this knowledge, as it may be changed. - - - startmultipartbody() actually returns a file as well; - this can be used to write the initial 'if you can read this your - mailer is not MIME-aware' message. - - - If you call flushheaders(), the headers accumulated so far are - written out (and forgotten); this is useful if you don't need a - body part at all, e.g. for a subpart of type message/rfc822 - that's (mis)used to store some header-like information. - - - Passing a keyword argument 'prefix=<flag>' to addheader(), - start*body() affects where the header is inserted; 0 means - append at the end, 1 means insert at the start; default is - append for addheader(), but insert for start*body(), which use - it to determine where the Content-type header goes. 
- - """ - - def __init__(self, fp, http_hdrs=None): - self._http_hdrs = http_hdrs - self._fp = fp - self._headers = [] - self._boundary = [] - self._first_part = True - - def addheader(self, key, value, prefix=0, - add_to_http_hdrs=0): - """ - prefix is ignored if add_to_http_hdrs is true. - """ - lines = value.split("\r\n") - while lines and not lines[-1]: del lines[-1] - while lines and not lines[0]: del lines[0] - if add_to_http_hdrs: - value = "".join(lines) - # 2.2 urllib2 doesn't normalize header case - self._http_hdrs.append((key.capitalize(), value)) - else: - for i in range(1, len(lines)): - lines[i] = " " + lines[i].strip() - value = "\r\n".join(lines) + "\r\n" - line = key.title() + ": " + value - if prefix: - self._headers.insert(0, line) - else: - self._headers.append(line) - - def flushheaders(self): - self._fp.writelines(self._headers) - self._headers = [] - - def startbody(self, ctype=None, plist=[], prefix=1, - add_to_http_hdrs=0, content_type=1): - """ - prefix is ignored if add_to_http_hdrs is true. - """ - if content_type and ctype: - for name, value in plist: - ctype = ctype + ';\r\n %s=%s' % (name, value) - self.addheader("Content-Type", ctype, prefix=prefix, - add_to_http_hdrs=add_to_http_hdrs) - self.flushheaders() - if not add_to_http_hdrs: self._fp.write("\r\n") - self._first_part = True - return self._fp - - def startmultipartbody(self, subtype, boundary=None, plist=[], prefix=1, - add_to_http_hdrs=0, content_type=1): - boundary = boundary or choose_boundary() - self._boundary.append(boundary) - return self.startbody("multipart/" + subtype, - [("boundary", boundary)] + plist, - prefix=prefix, - add_to_http_hdrs=add_to_http_hdrs, - content_type=content_type) - - def nextpart(self): - boundary = self._boundary[-1] - if self._first_part: - self._first_part = False - else: - self._fp.write("\r\n") - self._fp.write("--" + boundary + "\r\n") - return self.__class__(self._fp) - - def lastpart(self): - if self._first_part: - self.nextpart() - boundary = self._boundary.pop() - self._fp.write("\r\n--" + boundary + "--\r\n") - - -class LocateError(ValueError): pass -class AmbiguityError(LocateError): pass -class ControlNotFoundError(LocateError): pass -class ItemNotFoundError(LocateError): pass - -class ItemCountError(ValueError): pass - -# for backwards compatibility, ParseError derives from exceptions that were -# raised by versions of ClientForm <= 0.2.5 -# TODO: move to _html -class ParseError(sgmllib.SGMLParseError, - HTMLParser.HTMLParseError): - - def __init__(self, *args, **kwds): - Exception.__init__(self, *args, **kwds) - - def __str__(self): - return Exception.__str__(self) - - -class _AbstractFormParser: - """forms attribute contains HTMLForm instances on completion.""" - # thanks to Moshe Zadka for an example of sgmllib/htmllib usage - def __init__(self, entitydefs=None, encoding=DEFAULT_ENCODING): - if entitydefs is None: - entitydefs = get_entitydefs() - self._entitydefs = entitydefs - self._encoding = encoding - - self.base = None - self.forms = [] - self.labels = [] - self._current_label = None - self._current_form = None - self._select = None - self._optgroup = None - self._option = None - self._textarea = None - - # forms[0] will contain all controls that are outside of any form - # self._global_form is an alias for self.forms[0] - self._global_form = None - self.start_form([]) - self.end_form() - self._current_form = self._global_form = self.forms[0] - - def do_base(self, attrs): - debug("%s", attrs) - for key, value in attrs: - if key == "href": - 
self.base = self.unescape_attr_if_required(value) - - def end_body(self): - debug("") - if self._current_label is not None: - self.end_label() - if self._current_form is not self._global_form: - self.end_form() - - def start_form(self, attrs): - debug("%s", attrs) - if self._current_form is not self._global_form: - raise ParseError("nested FORMs") - name = None - action = None - enctype = "application/x-www-form-urlencoded" - method = "GET" - d = {} - for key, value in attrs: - if key == "name": - name = self.unescape_attr_if_required(value) - elif key == "action": - action = self.unescape_attr_if_required(value) - elif key == "method": - method = self.unescape_attr_if_required(value.upper()) - elif key == "enctype": - enctype = self.unescape_attr_if_required(value.lower()) - d[key] = self.unescape_attr_if_required(value) - controls = [] - self._current_form = (name, action, method, enctype), d, controls - - def end_form(self): - debug("") - if self._current_label is not None: - self.end_label() - if self._current_form is self._global_form: - raise ParseError("end of FORM before start") - self.forms.append(self._current_form) - self._current_form = self._global_form - - def start_select(self, attrs): - debug("%s", attrs) - if self._select is not None: - raise ParseError("nested SELECTs") - if self._textarea is not None: - raise ParseError("SELECT inside TEXTAREA") - d = {} - for key, val in attrs: - d[key] = self.unescape_attr_if_required(val) - - self._select = d - self._add_label(d) - - self._append_select_control({"__select": d}) - - def end_select(self): - debug("") - if self._select is None: - raise ParseError("end of SELECT before start") - - if self._option is not None: - self._end_option() - - self._select = None - - def start_optgroup(self, attrs): - debug("%s", attrs) - if self._select is None: - raise ParseError("OPTGROUP outside of SELECT") - d = {} - for key, val in attrs: - d[key] = self.unescape_attr_if_required(val) - - self._optgroup = d - - def end_optgroup(self): - debug("") - if self._optgroup is None: - raise ParseError("end of OPTGROUP before start") - self._optgroup = None - - def _start_option(self, attrs): - debug("%s", attrs) - if self._select is None: - raise ParseError("OPTION outside of SELECT") - if self._option is not None: - self._end_option() - - d = {} - for key, val in attrs: - d[key] = self.unescape_attr_if_required(val) - - self._option = {} - self._option.update(d) - if (self._optgroup and self._optgroup.has_key("disabled") and - not self._option.has_key("disabled")): - self._option["disabled"] = None - - def _end_option(self): - debug("") - if self._option is None: - raise ParseError("end of OPTION before start") - - contents = self._option.get("contents", "").strip() - self._option["contents"] = contents - if not self._option.has_key("value"): - self._option["value"] = contents - if not self._option.has_key("label"): - self._option["label"] = contents - # stuff dict of SELECT HTML attrs into a special private key - # (gets deleted again later) - self._option["__select"] = self._select - self._append_select_control(self._option) - self._option = None - - def _append_select_control(self, attrs): - debug("%s", attrs) - controls = self._current_form[2] - name = self._select.get("name") - controls.append(("select", name, attrs)) - - def start_textarea(self, attrs): - debug("%s", attrs) - if self._textarea is not None: - raise ParseError("nested TEXTAREAs") - if self._select is not None: - raise ParseError("TEXTAREA inside SELECT") - d = {} - for key, val 
in attrs: - d[key] = self.unescape_attr_if_required(val) - self._add_label(d) - - self._textarea = d - - def end_textarea(self): - debug("") - if self._textarea is None: - raise ParseError("end of TEXTAREA before start") - controls = self._current_form[2] - name = self._textarea.get("name") - controls.append(("textarea", name, self._textarea)) - self._textarea = None - - def start_label(self, attrs): - debug("%s", attrs) - if self._current_label: - self.end_label() - d = {} - for key, val in attrs: - d[key] = self.unescape_attr_if_required(val) - taken = bool(d.get("for")) # empty id is invalid - d["__text"] = "" - d["__taken"] = taken - if taken: - self.labels.append(d) - self._current_label = d - - def end_label(self): - debug("") - label = self._current_label - if label is None: - # something is ugly in the HTML, but we're ignoring it - return - self._current_label = None - # if it is staying around, it is True in all cases - del label["__taken"] - - def _add_label(self, d): - #debug("%s", d) - if self._current_label is not None: - if not self._current_label["__taken"]: - self._current_label["__taken"] = True - d["__label"] = self._current_label - - def handle_data(self, data): - debug("%s", data) - - if self._option is not None: - # self._option is a dictionary of the OPTION element's HTML - # attributes, but it has two special keys, one of which is the - # special "contents" key contains text between OPTION tags (the - # other is the "__select" key: see the end_option method) - map = self._option - key = "contents" - elif self._textarea is not None: - map = self._textarea - key = "value" - data = normalize_line_endings(data) - # not if within option or textarea - elif self._current_label is not None: - map = self._current_label - key = "__text" - else: - return - - if data and not map.has_key(key): - # according to - # http://www.w3.org/TR/html4/appendix/notes.html#h-B.3.1 line break - # immediately after start tags or immediately before end tags must - # be ignored, but real browsers only ignore a line break after a - # start tag, so we'll do that. - if data[0:2] == "\r\n": - data = data[2:] - elif data[0:1] in ["\n", "\r"]: - data = data[1:] - map[key] = data - else: - map[key] = map[key] + data - - def do_button(self, attrs): - debug("%s", attrs) - d = {} - d["type"] = "submit" # default - for key, val in attrs: - d[key] = self.unescape_attr_if_required(val) - controls = self._current_form[2] - - type = d["type"] - name = d.get("name") - # we don't want to lose information, so use a type string that - # doesn't clash with INPUT TYPE={SUBMIT,RESET,BUTTON} - # e.g. 
type for BUTTON/RESET is "resetbutton" - # (type for INPUT/RESET is "reset") - type = type+"button" - self._add_label(d) - controls.append((type, name, d)) - - def do_input(self, attrs): - debug("%s", attrs) - d = {} - d["type"] = "text" # default - for key, val in attrs: - d[key] = self.unescape_attr_if_required(val) - controls = self._current_form[2] - - type = d["type"] - name = d.get("name") - self._add_label(d) - controls.append((type, name, d)) - - def do_isindex(self, attrs): - debug("%s", attrs) - d = {} - for key, val in attrs: - d[key] = self.unescape_attr_if_required(val) - controls = self._current_form[2] - - self._add_label(d) - # isindex doesn't have type or name HTML attributes - controls.append(("isindex", None, d)) - - def handle_entityref(self, name): - #debug("%s", name) - self.handle_data(unescape( - '&%s;' % name, self._entitydefs, self._encoding)) - - def handle_charref(self, name): - #debug("%s", name) - self.handle_data(unescape_charref(name, self._encoding)) - - def unescape_attr(self, name): - #debug("%s", name) - return unescape(name, self._entitydefs, self._encoding) - - def unescape_attrs(self, attrs): - #debug("%s", attrs) - escaped_attrs = {} - for key, val in attrs.items(): - try: - val.items - except AttributeError: - escaped_attrs[key] = self.unescape_attr(val) - else: - # e.g. "__select" -- yuck! - escaped_attrs[key] = self.unescape_attrs(val) - return escaped_attrs - - def unknown_entityref(self, ref): self.handle_data("&%s;" % ref) - def unknown_charref(self, ref): self.handle_data("&#%s;" % ref) - - -class XHTMLCompatibleFormParser(_AbstractFormParser, HTMLParser.HTMLParser): - """Good for XHTML, bad for tolerance of incorrect HTML.""" - # thanks to Michael Howitz for this! - def __init__(self, entitydefs=None, encoding=DEFAULT_ENCODING): - HTMLParser.HTMLParser.__init__(self) - _AbstractFormParser.__init__(self, entitydefs, encoding) - - def feed(self, data): - try: - HTMLParser.HTMLParser.feed(self, data) - except HTMLParser.HTMLParseError, exc: - raise ParseError(exc) - - def start_option(self, attrs): - _AbstractFormParser._start_option(self, attrs) - - def end_option(self): - _AbstractFormParser._end_option(self) - - def handle_starttag(self, tag, attrs): - try: - method = getattr(self, "start_" + tag) - except AttributeError: - try: - method = getattr(self, "do_" + tag) - except AttributeError: - pass # unknown tag - else: - method(attrs) - else: - method(attrs) - - def handle_endtag(self, tag): - try: - method = getattr(self, "end_" + tag) - except AttributeError: - pass # unknown tag - else: - method() - - def unescape(self, name): - # Use the entitydefs passed into constructor, not - # HTMLParser.HTMLParser's entitydefs. 
- return self.unescape_attr(name) - - def unescape_attr_if_required(self, name): - return name # HTMLParser.HTMLParser already did it - def unescape_attrs_if_required(self, attrs): - return attrs # ditto - - def close(self): - HTMLParser.HTMLParser.close(self) - self.end_body() - - -class _AbstractSgmllibParser(_AbstractFormParser): - - def do_option(self, attrs): - _AbstractFormParser._start_option(self, attrs) - - # we override this attr to decode hex charrefs - entity_or_charref = re.compile( - '&(?:([a-zA-Z][-.a-zA-Z0-9]*)|#(x?[0-9a-fA-F]+))(;?)') - def convert_entityref(self, name): - return unescape("&%s;" % name, self._entitydefs, self._encoding) - def convert_charref(self, name): - return unescape_charref("%s" % name, self._encoding) - def unescape_attr_if_required(self, name): - return name # sgmllib already did it - def unescape_attrs_if_required(self, attrs): - return attrs # ditto - - -class FormParser(_AbstractSgmllibParser, _sgmllib_copy.SGMLParser): - """Good for tolerance of incorrect HTML, bad for XHTML.""" - def __init__(self, entitydefs=None, encoding=DEFAULT_ENCODING): - _sgmllib_copy.SGMLParser.__init__(self) - _AbstractFormParser.__init__(self, entitydefs, encoding) - - def feed(self, data): - try: - _sgmllib_copy.SGMLParser.feed(self, data) - except _sgmllib_copy.SGMLParseError, exc: - raise ParseError(exc) - - def close(self): - _sgmllib_copy.SGMLParser.close(self) - self.end_body() - - -class _AbstractBSFormParser(_AbstractSgmllibParser): - - bs_base_class = None - - def __init__(self, entitydefs=None, encoding=DEFAULT_ENCODING): - _AbstractFormParser.__init__(self, entitydefs, encoding) - self.bs_base_class.__init__(self) - - def handle_data(self, data): - _AbstractFormParser.handle_data(self, data) - self.bs_base_class.handle_data(self, data) - - def feed(self, data): - try: - self.bs_base_class.feed(self, data) - except _sgmllib_copy.SGMLParseError, exc: - raise ParseError(exc) - - def close(self): - self.bs_base_class.close(self) - self.end_body() - - -class RobustFormParser(_AbstractBSFormParser, _beautifulsoup.BeautifulSoup): - - """Tries to be highly tolerant of incorrect HTML.""" - - bs_base_class = _beautifulsoup.BeautifulSoup - - -class NestingRobustFormParser(_AbstractBSFormParser, - _beautifulsoup.ICantBelieveItsBeautifulSoup): - - """Tries to be highly tolerant of incorrect HTML. - - Different from RobustFormParser in that it more often guesses nesting - above missing end tags (see BeautifulSoup docs). - """ - - bs_base_class = _beautifulsoup.ICantBelieveItsBeautifulSoup - - -#FormParser = XHTMLCompatibleFormParser # testing hack -#FormParser = RobustFormParser # testing hack - - -def ParseResponseEx(response, - select_default=False, - form_parser_class=FormParser, - request_class=_request.Request, - entitydefs=None, - encoding=DEFAULT_ENCODING, - - # private - _urljoin=urlparse.urljoin, - _urlparse=urlparse.urlparse, - _urlunparse=urlparse.urlunparse, - ): - """Identical to ParseResponse, except that: - - 1. The returned list contains an extra item. The first form in the list - contains all controls not contained in any FORM element. - - 2. The arguments ignore_errors and backwards_compat have been removed. - - 3. Backwards-compatibility mode (backwards_compat=True) is not available. 
- """ - return _ParseFileEx(response, response.geturl(), - select_default, - False, - form_parser_class, - request_class, - entitydefs, - False, - encoding, - _urljoin=_urljoin, - _urlparse=_urlparse, - _urlunparse=_urlunparse, - ) - -def ParseFileEx(file, base_uri, - select_default=False, - form_parser_class=FormParser, - request_class=_request.Request, - entitydefs=None, - encoding=DEFAULT_ENCODING, - - # private - _urljoin=urlparse.urljoin, - _urlparse=urlparse.urlparse, - _urlunparse=urlparse.urlunparse, - ): - """Identical to ParseFile, except that: - - 1. The returned list contains an extra item. The first form in the list - contains all controls not contained in any FORM element. - - 2. The arguments ignore_errors and backwards_compat have been removed. - - 3. Backwards-compatibility mode (backwards_compat=True) is not available. - """ - return _ParseFileEx(file, base_uri, - select_default, - False, - form_parser_class, - request_class, - entitydefs, - False, - encoding, - _urljoin=_urljoin, - _urlparse=_urlparse, - _urlunparse=_urlunparse, - ) - -def ParseString(text, base_uri, *args, **kwds): - fh = StringIO(text) - return ParseFileEx(fh, base_uri, *args, **kwds) - -def ParseResponse(response, *args, **kwds): - """Parse HTTP response and return a list of HTMLForm instances. - - The return value of mechanize.urlopen can be conveniently passed to this - function as the response parameter. - - mechanize.ParseError is raised on parse errors. - - response: file-like object (supporting read() method) with a method - geturl(), returning the URI of the HTTP response - select_default: for multiple-selection SELECT controls and RADIO controls, - pick the first item as the default if none are selected in the HTML - form_parser_class: class to instantiate and use to pass - request_class: class to return from .click() method (default is - mechanize.Request) - entitydefs: mapping like {"&": "&", ...} containing HTML entity - definitions (a sensible default is used) - encoding: character encoding used for encoding numeric character references - when matching link text. mechanize does not attempt to find the encoding - in a META HTTP-EQUIV attribute in the document itself (mechanize, for - example, does do that and will pass the correct value to mechanize using - this parameter). - - backwards_compat: boolean that determines whether the returned HTMLForm - objects are backwards-compatible with old code. If backwards_compat is - true: - - - ClientForm 0.1 code will continue to work as before. - - - Label searches that do not specify a nr (number or count) will always - get the first match, even if other controls match. If - backwards_compat is False, label searches that have ambiguous results - will raise an AmbiguityError. - - - Item label matching is done by strict string comparison rather than - substring matching. - - - De-selecting individual list items is allowed even if the Item is - disabled. - - The backwards_compat argument will be removed in a future release. - - Pass a true value for select_default if you want the behaviour specified by - RFC 1866 (the HTML 2.0 standard), which is to select the first item in a - RADIO or multiple-selection SELECT control if none were selected in the - HTML. Most browsers (including Microsoft Internet Explorer (IE) and - Netscape Navigator) instead leave all items unselected in these cases. 
The - W3C HTML 4.0 standard leaves this behaviour undefined in the case of - multiple-selection SELECT controls, but insists that at least one RADIO - button should be checked at all times, in contradiction to browser - behaviour. - - There is a choice of parsers. mechanize.XHTMLCompatibleFormParser (uses - HTMLParser.HTMLParser) works best for XHTML, mechanize.FormParser (uses - bundled copy of sgmllib.SGMLParser) (the default) works better for ordinary - grubby HTML. Note that HTMLParser is only available in Python 2.2 and - later. You can pass your own class in here as a hack to work around bad - HTML, but at your own risk: there is no well-defined interface. - - """ - return _ParseFileEx(response, response.geturl(), *args, **kwds)[1:] - -def ParseFile(file, base_uri, *args, **kwds): - """Parse HTML and return a list of HTMLForm instances. - - mechanize.ParseError is raised on parse errors. - - file: file-like object (supporting read() method) containing HTML with zero - or more forms to be parsed - base_uri: the URI of the document (note that the base URI used to submit - the form will be that given in the BASE element if present, not that of - the document) - - For the other arguments and further details, see ParseResponse.__doc__. - - """ - return _ParseFileEx(file, base_uri, *args, **kwds)[1:] - -def _ParseFileEx(file, base_uri, - select_default=False, - ignore_errors=False, - form_parser_class=FormParser, - request_class=_request.Request, - entitydefs=None, - backwards_compat=True, - encoding=DEFAULT_ENCODING, - _urljoin=urlparse.urljoin, - _urlparse=urlparse.urlparse, - _urlunparse=urlparse.urlunparse, - ): - if backwards_compat: - deprecation("operating in backwards-compatibility mode", 1) - fp = form_parser_class(entitydefs, encoding) - while 1: - data = file.read(CHUNK) - try: - fp.feed(data) - except ParseError, e: - e.base_uri = base_uri - raise - if len(data) != CHUNK: break - fp.close() - if fp.base is not None: - # HTML BASE element takes precedence over document URI - base_uri = fp.base - labels = [] # Label(label) for label in fp.labels] - id_to_labels = {} - for l in fp.labels: - label = Label(l) - labels.append(label) - for_id = l["for"] - coll = id_to_labels.get(for_id) - if coll is None: - id_to_labels[for_id] = [label] - else: - coll.append(label) - forms = [] - for (name, action, method, enctype), attrs, controls in fp.forms: - if action is None: - action = base_uri - else: - action = _urljoin(base_uri, action) - # would be nice to make HTMLForm class (form builder) pluggable - form = HTMLForm( - action, method, enctype, name, attrs, request_class, - forms, labels, id_to_labels, backwards_compat) - form._urlparse = _urlparse - form._urlunparse = _urlunparse - for ii in range(len(controls)): - type, name, attrs = controls[ii] - # index=ii*10 allows ImageControl to return multiple ordered pairs - form.new_control( - type, name, attrs, select_default=select_default, index=ii*10) - forms.append(form) - for form in forms: - form.fixup() - return forms - - -class Label: - def __init__(self, attrs): - self.id = attrs.get("for") - self._text = attrs.get("__text").strip() - self._ctext = compress_text(self._text) - self.attrs = attrs - self._backwards_compat = False # maintained by HTMLForm - - def __getattr__(self, name): - if name == "text": - if self._backwards_compat: - return self._text - else: - return self._ctext - return getattr(Label, name) - - def __setattr__(self, name, value): - if name == "text": - # don't see any need for this, so make it read-only - raise 
AttributeError("text attribute is read-only") - self.__dict__[name] = value - - def __str__(self): - return "<Label(id=%r, text=%r)>" % (self.id, self.text) - - -def _get_label(attrs): - text = attrs.get("__label") - if text is not None: - return Label(text) - else: - return None - -class Control: - """An HTML form control. - - An HTMLForm contains a sequence of Controls. The Controls in an HTMLForm - are accessed using the HTMLForm.find_control method or the - HTMLForm.controls attribute. - - Control instances are usually constructed using the ParseFile / - ParseResponse functions. If you use those functions, you can ignore the - rest of this paragraph. A Control is only properly initialised after the - fixup method has been called. In fact, this is only strictly necessary for - ListControl instances. This is necessary because ListControls are built up - from ListControls each containing only a single item, and their initial - value(s) can only be known after the sequence is complete. - - The types and values that are acceptable for assignment to the value - attribute are defined by subclasses. - - If the disabled attribute is true, this represents the state typically - represented by browsers by 'greying out' a control. If the disabled - attribute is true, the Control will raise AttributeError if an attempt is - made to change its value. In addition, the control will not be considered - 'successful' as defined by the W3C HTML 4 standard -- ie. it will - contribute no data to the return value of the HTMLForm.click* methods. To - enable a control, set the disabled attribute to a false value. - - If the readonly attribute is true, the Control will raise AttributeError if - an attempt is made to change its value. To make a control writable, set - the readonly attribute to a false value. - - All controls have the disabled and readonly attributes, not only those that - may have the HTML attributes of the same names. - - On assignment to the value attribute, the following exceptions are raised: - TypeError, AttributeError (if the value attribute should not be assigned - to, because the control is disabled, for example) and ValueError. - - If the name or value attributes are None, or the value is an empty list, or - if the control is disabled, the control is not successful. - - Public attributes: - - type: string describing type of control (see the keys of the - HTMLForm.type2class dictionary for the allowable values) (readonly) - name: name of control (readonly) - value: current value of control (subclasses may allow a single value, a - sequence of values, or either) - disabled: disabled state - readonly: readonly state - id: value of id HTML attribute - - """ - def __init__(self, type, name, attrs, index=None): - """ - type: string describing type of control (see the keys of the - HTMLForm.type2class dictionary for the allowable values) - name: control name - attrs: HTML attributes of control's HTML element - - """ - raise NotImplementedError() - - def add_to_form(self, form): - self._form = form - form.controls.append(self) - - def fixup(self): - pass - - def is_of_kind(self, kind): - raise NotImplementedError() - - def clear(self): - raise NotImplementedError() - - def __getattr__(self, name): raise NotImplementedError() - def __setattr__(self, name, value): raise NotImplementedError() - - def pairs(self): - """Return list of (key, value) pairs suitable for passing to urlencode. 
- """ - return [(k, v) for (i, k, v) in self._totally_ordered_pairs()] - - def _totally_ordered_pairs(self): - """Return list of (key, value, index) tuples. - - Like pairs, but allows preserving correct ordering even where several - controls are involved. - - """ - raise NotImplementedError() - - def _write_mime_data(self, mw, name, value): - """Write data for a subitem of this control to a MimeWriter.""" - # called by HTMLForm - mw2 = mw.nextpart() - mw2.addheader("Content-Disposition", - 'form-data; name="%s"' % name, 1) - f = mw2.startbody(prefix=0) - f.write(value) - - def __str__(self): - raise NotImplementedError() - - def get_labels(self): - """Return all labels (Label instances) for this control. - - If the control was surrounded by a <label> tag, that will be the first - label; all other labels, connected by 'for' and 'id', are in the order - that appear in the HTML. - - """ - res = [] - if self._label: - res.append(self._label) - if self.id: - res.extend(self._form._id_to_labels.get(self.id, ())) - return res - - -#--------------------------------------------------- -class ScalarControl(Control): - """Control whose value is not restricted to one of a prescribed set. - - Some ScalarControls don't accept any value attribute. Otherwise, takes a - single value, which must be string-like. - - Additional read-only public attribute: - - attrs: dictionary mapping the names of original HTML attributes of the - control to their values - - """ - def __init__(self, type, name, attrs, index=None): - self._index = index - self._label = _get_label(attrs) - self.__dict__["type"] = type.lower() - self.__dict__["name"] = name - self._value = attrs.get("value") - self.disabled = attrs.has_key("disabled") - self.readonly = attrs.has_key("readonly") - self.id = attrs.get("id") - - self.attrs = attrs.copy() - - self._clicked = False - - self._urlparse = urlparse.urlparse - self._urlunparse = urlparse.urlunparse - - def __getattr__(self, name): - if name == "value": - return self.__dict__["_value"] - else: - raise AttributeError("%s instance has no attribute '%s'" % - (self.__class__.__name__, name)) - - def __setattr__(self, name, value): - if name == "value": - if not isstringlike(value): - raise TypeError("must assign a string") - elif self.readonly: - raise AttributeError("control '%s' is readonly" % self.name) - elif self.disabled: - raise AttributeError("control '%s' is disabled" % self.name) - self.__dict__["_value"] = value - elif name in ("name", "type"): - raise AttributeError("%s attribute is readonly" % name) - else: - self.__dict__[name] = value - - def _totally_ordered_pairs(self): - name = self.name - value = self.value - if name is None or value is None or self.disabled: - return [] - return [(self._index, name, value)] - - def clear(self): - if self.readonly: - raise AttributeError("control '%s' is readonly" % self.name) - self.__dict__["_value"] = None - - def __str__(self): - name = self.name - value = self.value - if name is None: name = "<None>" - if value is None: value = "<None>" - - infos = [] - if self.disabled: infos.append("disabled") - if self.readonly: infos.append("readonly") - info = ", ".join(infos) - if info: info = " (%s)" % info - - return "<%s(%s=%s)%s>" % (self.__class__.__name__, name, value, info) - - -#--------------------------------------------------- -class TextControl(ScalarControl): - """Textual input control. 
- - Covers: - - INPUT/TEXT - INPUT/PASSWORD - INPUT/HIDDEN - TEXTAREA - - """ - def __init__(self, type, name, attrs, index=None): - ScalarControl.__init__(self, type, name, attrs, index) - if self.type == "hidden": self.readonly = True - if self._value is None: - self._value = "" - - def is_of_kind(self, kind): return kind == "text" - -#--------------------------------------------------- -class FileControl(ScalarControl): - """File upload with INPUT TYPE=FILE. - - The value attribute of a FileControl is always None. Use add_file instead. - - Additional public method: add_file - - """ - - def __init__(self, type, name, attrs, index=None): - ScalarControl.__init__(self, type, name, attrs, index) - self._value = None - self._upload_data = [] - - def is_of_kind(self, kind): return kind == "file" - - def clear(self): - if self.readonly: - raise AttributeError("control '%s' is readonly" % self.name) - self._upload_data = [] - - def __setattr__(self, name, value): - if name in ("value", "name", "type"): - raise AttributeError("%s attribute is readonly" % name) - else: - self.__dict__[name] = value - - def add_file(self, file_object, content_type=None, filename=None): - if not hasattr(file_object, "read"): - raise TypeError("file-like object must have read method") - if content_type is not None and not isstringlike(content_type): - raise TypeError("content type must be None or string-like") - if filename is not None and not isstringlike(filename): - raise TypeError("filename must be None or string-like") - if content_type is None: - content_type = "application/octet-stream" - self._upload_data.append((file_object, content_type, filename)) - - def _totally_ordered_pairs(self): - # XXX should it be successful even if unnamed? - if self.name is None or self.disabled: - return [] - return [(self._index, self.name, "")] - - # If enctype is application/x-www-form-urlencoded and there's a FILE - # control present, what should be sent? Strictly, it should be 'name=data' - # (see HTML 4.01 spec., section 17.13.2), but code sends "name=" ATM. What - # about multiple file upload? 
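Since a FileControl's value attribute is read-only, add_file is the only way to attach upload data; a short sketch in which the control name and file payload are invented and form is assumed to come from one of the Parse* functions:

    from StringIO import StringIO

    upload = form.find_control(name="attachment", type="file")   # hypothetical name
    upload.add_file(StringIO("some,csv,data\n"),
                    content_type="text/csv",
                    filename="report.csv")
    # The file body is only transmitted for multipart/form-data forms; with
    # application/x-www-form-urlencoded only an empty "name=" pair is sent,
    # as the comment above notes.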
- def _write_mime_data(self, mw, _name, _value): - # called by HTMLForm - # assert _name == self.name and _value == '' - if len(self._upload_data) < 2: - if len(self._upload_data) == 0: - file_object = StringIO() - content_type = "application/octet-stream" - filename = "" - else: - file_object, content_type, filename = self._upload_data[0] - if filename is None: - filename = "" - mw2 = mw.nextpart() - fn_part = '; filename="%s"' % filename - disp = 'form-data; name="%s"%s' % (self.name, fn_part) - mw2.addheader("Content-Disposition", disp, prefix=1) - fh = mw2.startbody(content_type, prefix=0) - fh.write(file_object.read()) - else: - # multiple files - mw2 = mw.nextpart() - disp = 'form-data; name="%s"' % self.name - mw2.addheader("Content-Disposition", disp, prefix=1) - fh = mw2.startmultipartbody("mixed", prefix=0) - for file_object, content_type, filename in self._upload_data: - mw3 = mw2.nextpart() - if filename is None: - filename = "" - fn_part = '; filename="%s"' % filename - disp = "file%s" % fn_part - mw3.addheader("Content-Disposition", disp, prefix=1) - fh2 = mw3.startbody(content_type, prefix=0) - fh2.write(file_object.read()) - mw2.lastpart() - - def __str__(self): - name = self.name - if name is None: name = "<None>" - - if not self._upload_data: - value = "<No files added>" - else: - value = [] - for file, ctype, filename in self._upload_data: - if filename is None: - value.append("<Unnamed file>") - else: - value.append(filename) - value = ", ".join(value) - - info = [] - if self.disabled: info.append("disabled") - if self.readonly: info.append("readonly") - info = ", ".join(info) - if info: info = " (%s)" % info - - return "<%s(%s=%s)%s>" % (self.__class__.__name__, name, value, info) - - -#--------------------------------------------------- -class IsindexControl(ScalarControl): - """ISINDEX control. - - ISINDEX is the odd-one-out of HTML form controls. In fact, it isn't really - part of regular HTML forms at all, and predates it. You're only allowed - one ISINDEX per HTML document. ISINDEX and regular form submission are - mutually exclusive -- either submit a form, or the ISINDEX. - - Having said this, since ISINDEX controls may appear in forms (which is - probably bad HTML), ParseFile / ParseResponse will include them in the - HTMLForm instances it returns. You can set the ISINDEX's value, as with - any other control (but note that ISINDEX controls have no name, so you'll - need to use the type argument of set_value!). When you submit the form, - the ISINDEX will not be successful (ie., no data will get returned to the - server as a result of its presence), unless you click on the ISINDEX - control, in which case the ISINDEX gets submitted instead of the form: - - form.set_value("my isindex value", type="isindex") - mechanize.urlopen(form.click(type="isindex")) - - ISINDEX elements outside of FORMs are ignored. If you want to submit one - by hand, do it like so: - - url = urlparse.urljoin(page_uri, "?"+urllib.quote_plus("my isindex value")) - result = mechanize.urlopen(url) - - """ - def __init__(self, type, name, attrs, index=None): - ScalarControl.__init__(self, type, name, attrs, index) - if self._value is None: - self._value = "" - - def is_of_kind(self, kind): return kind in ["text", "clickable"] - - def _totally_ordered_pairs(self): - return [] - - def _click(self, form, coord, return_type, request_class=_request.Request): - # Relative URL for ISINDEX submission: instead of "foo=bar+baz", - # want "bar+baz". - # This doesn't seem to be specified in HTML 4.01 spec. 
(ISINDEX is - # deprecated in 4.01, but it should still say how to submit it). - # Submission of ISINDEX is explained in the HTML 3.2 spec, though. - parts = self._urlparse(form.action) - rest, (query, frag) = parts[:-2], parts[-2:] - parts = rest + (urllib.quote_plus(self.value), None) - url = self._urlunparse(parts) - req_data = url, None, [] - - if return_type == "pairs": - return [] - elif return_type == "request_data": - return req_data - else: - return request_class(url) - - def __str__(self): - value = self.value - if value is None: value = "<None>" - - infos = [] - if self.disabled: infos.append("disabled") - if self.readonly: infos.append("readonly") - info = ", ".join(infos) - if info: info = " (%s)" % info - - return "<%s(%s)%s>" % (self.__class__.__name__, value, info) - - -#--------------------------------------------------- -class IgnoreControl(ScalarControl): - """Control that we're not interested in. - - Covers: - - INPUT/RESET - BUTTON/RESET - INPUT/BUTTON - BUTTON/BUTTON - - These controls are always unsuccessful, in the terminology of HTML 4 (ie. - they never require any information to be returned to the server). - - BUTTON/BUTTON is used to generate events for script embedded in HTML. - - The value attribute of IgnoreControl is always None. - - """ - def __init__(self, type, name, attrs, index=None): - ScalarControl.__init__(self, type, name, attrs, index) - self._value = None - - def is_of_kind(self, kind): return False - - def __setattr__(self, name, value): - if name == "value": - raise AttributeError( - "control '%s' is ignored, hence read-only" % self.name) - elif name in ("name", "type"): - raise AttributeError("%s attribute is readonly" % name) - else: - self.__dict__[name] = value - - -#--------------------------------------------------- -# ListControls - -# helpers and subsidiary classes - -class Item: - def __init__(self, control, attrs, index=None): - label = _get_label(attrs) - self.__dict__.update({ - "name": attrs["value"], - "_labels": label and [label] or [], - "attrs": attrs, - "_control": control, - "disabled": attrs.has_key("disabled"), - "_selected": False, - "id": attrs.get("id"), - "_index": index, - }) - control.items.append(self) - - def get_labels(self): - """Return all labels (Label instances) for this item. - - For items that represent radio buttons or checkboxes, if the item was - surrounded by a <label> tag, that will be the first label; all other - labels, connected by 'for' and 'id', are in the order that appear in - the HTML. - - For items that represent select options, if the option had a label - attribute, that will be the first label. If the option has contents - (text within the option tags) and it is not the same as the label - attribute (if any), that will be a label. There is nothing in the - spec to my knowledge that makes an option with an id unable to be the - target of a label's for attribute, so those are included, if any, for - the sake of consistency and completeness. 
- - """ - res = [] - res.extend(self._labels) - if self.id: - res.extend(self._control._form._id_to_labels.get(self.id, ())) - return res - - def __getattr__(self, name): - if name=="selected": - return self._selected - raise AttributeError(name) - - def __setattr__(self, name, value): - if name == "selected": - self._control._set_selected_state(self, value) - elif name == "disabled": - self.__dict__["disabled"] = bool(value) - else: - raise AttributeError(name) - - def __str__(self): - res = self.name - if self.selected: - res = "*" + res - if self.disabled: - res = "(%s)" % res - return res - - def __repr__(self): - # XXX appending the attrs without distinguishing them from name and id - # is silly - attrs = [("name", self.name), ("id", self.id)]+self.attrs.items() - return "<%s %s>" % ( - self.__class__.__name__, - " ".join(["%s=%r" % (k, v) for k, v in attrs]) - ) - -def disambiguate(items, nr, **kwds): - msgs = [] - for key, value in kwds.items(): - msgs.append("%s=%r" % (key, value)) - msg = " ".join(msgs) - if not items: - raise ItemNotFoundError(msg) - if nr is None: - if len(items) > 1: - raise AmbiguityError(msg) - nr = 0 - if len(items) <= nr: - raise ItemNotFoundError(msg) - return items[nr] - -class ListControl(Control): - """Control representing a sequence of items. - - The value attribute of a ListControl represents the successful list items - in the control. The successful list items are those that are selected and - not disabled. - - ListControl implements both list controls that take a length-1 value - (single-selection) and those that take length >1 values - (multiple-selection). - - ListControls accept sequence values only. Some controls only accept - sequences of length 0 or 1 (RADIO, and single-selection SELECT). - In those cases, ItemCountError is raised if len(sequence) > 1. CHECKBOXes - and multiple-selection SELECTs (those having the "multiple" HTML attribute) - accept sequences of any length. - - Note the following mistake: - - control.value = some_value - assert control.value == some_value # not necessarily true - - The reason for this is that the value attribute always gives the list items - in the order they were listed in the HTML. - - ListControl items can also be referred to by their labels instead of names. - Use the label argument to .get(), and the .set_value_by_label(), - .get_value_by_label() methods. - - Note that, rather confusingly, though SELECT controls are represented in - HTML by SELECT elements (which contain OPTION elements, representing - individual list items), CHECKBOXes and RADIOs are not represented by *any* - element. Instead, those controls are represented by a collection of INPUT - elements. For example, this is a SELECT control, named "control1": - - <select name="control1"> - <option>foo</option> - <option value="1">bar</option> - </select> - - and this is a CHECKBOX control, named "control2": - - <input type="checkbox" name="control2" value="foo" id="cbe1"> - <input type="checkbox" name="control2" value="bar" id="cbe2"> - - The id attribute of a CHECKBOX or RADIO ListControl is always that of its - first element (for example, "cbe1" above). - - - Additional read-only public attribute: multiple. - - """ - - # ListControls are built up by the parser from their component items by - # creating one ListControl per item, consolidating them into a single - # master ListControl held by the HTMLForm: - - # -User calls form.new_control(...) - # -Form creates Control, and calls control.add_to_form(self). 
- # -Control looks for a Control with the same name and type in the form, - # and if it finds one, merges itself with that control by calling - # control.merge_control(self). The first Control added to the form, of - # a particular name and type, is the only one that survives in the - # form. - # -Form calls control.fixup for all its controls. ListControls in the - # form know they can now safely pick their default values. - - # To create a ListControl without an HTMLForm, use: - - # control.merge_control(new_control) - - # (actually, it's much easier just to use ParseFile) - - _label = None - - def __init__(self, type, name, attrs={}, select_default=False, - called_as_base_class=False, index=None): - """ - select_default: for RADIO and multiple-selection SELECT controls, pick - the first item as the default if no 'selected' HTML attribute is - present - - """ - if not called_as_base_class: - raise NotImplementedError() - - self.__dict__["type"] = type.lower() - self.__dict__["name"] = name - self._value = attrs.get("value") - self.disabled = False - self.readonly = False - self.id = attrs.get("id") - self._closed = False - - # As Controls are merged in with .merge_control(), self.attrs will - # refer to each Control in turn -- always the most recently merged - # control. Each merged-in Control instance corresponds to a single - # list item: see ListControl.__doc__. - self.items = [] - self._form = None - - self._select_default = select_default - self._clicked = False - - def clear(self): - self.value = [] - - def is_of_kind(self, kind): - if kind == "list": - return True - elif kind == "multilist": - return bool(self.multiple) - elif kind == "singlelist": - return not self.multiple - else: - return False - - def get_items(self, name=None, label=None, id=None, - exclude_disabled=False): - """Return matching items by name or label. - - For argument docs, see the docstring for .get() - - """ - if name is not None and not isstringlike(name): - raise TypeError("item name must be string-like") - if label is not None and not isstringlike(label): - raise TypeError("item label must be string-like") - if id is not None and not isstringlike(id): - raise TypeError("item id must be string-like") - items = [] # order is important - compat = self._form.backwards_compat - for o in self.items: - if exclude_disabled and o.disabled: - continue - if name is not None and o.name != name: - continue - if label is not None: - for l in o.get_labels(): - if ((compat and l.text == label) or - (not compat and l.text.find(label) > -1)): - break - else: - continue - if id is not None and o.id != id: - continue - items.append(o) - return items - - def get(self, name=None, label=None, id=None, nr=None, - exclude_disabled=False): - """Return item by name or label, disambiguating if necessary with nr. - - All arguments must be passed by name, with the exception of 'name', - which may be used as a positional argument. - - If name is specified, then the item must have the indicated name. - - If label is specified, then the item must have a label whose - whitespace-compressed, stripped, text substring-matches the indicated - label string (e.g. label="please choose" will match - " Do please choose an item "). - - If id is specified, then the item must have the indicated id. - - nr is an optional 0-based index of the items matching the query. - - If nr is the default None value and more than item is found, raises - AmbiguityError (unless the HTMLForm instance's backwards_compat - attribute is true). 
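A brief illustration of the name/label/nr lookups this docstring describes, using a hypothetical 'cheeses' CHECKBOX group; label matching is substring-based on whitespace-compressed text unless backwards_compat is in effect:

    cheeses = form.find_control("cheeses")            # invented control name

    item = cheeses.get(name="cheddar")                # look up by item name (HTML value)
    item.selected = True

    first_mild = cheeses.get(label="mild", nr=0)      # by label substring, disambiguated

    enabled = cheeses.get_items(exclude_disabled=True)   # every match, not exactly one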
- - If no item is found, or if items are found but nr is specified and not - found, raises ItemNotFoundError. - - Optionally excludes disabled items. - - """ - if nr is None and self._form.backwards_compat: - nr = 0 # :-/ - items = self.get_items(name, label, id, exclude_disabled) - return disambiguate(items, nr, name=name, label=label, id=id) - - def _get(self, name, by_label=False, nr=None, exclude_disabled=False): - # strictly for use by deprecated methods - if by_label: - name, label = None, name - else: - name, label = name, None - return self.get(name, label, nr, exclude_disabled) - - def toggle(self, name, by_label=False, nr=None): - """Deprecated: given a name or label and optional disambiguating index - nr, toggle the matching item's selection. - - Selecting items follows the behavior described in the docstring of the - 'get' method. - - if the item is disabled, or this control is disabled or readonly, - raise AttributeError. - - """ - deprecation( - "item = control.get(...); item.selected = not item.selected") - o = self._get(name, by_label, nr) - self._set_selected_state(o, not o.selected) - - def set(self, selected, name, by_label=False, nr=None): - """Deprecated: given a name or label and optional disambiguating index - nr, set the matching item's selection to the bool value of selected. - - Selecting items follows the behavior described in the docstring of the - 'get' method. - - if the item is disabled, or this control is disabled or readonly, - raise AttributeError. - - """ - deprecation( - "control.get(...).selected = <boolean>") - self._set_selected_state(self._get(name, by_label, nr), selected) - - def _set_selected_state(self, item, action): - # action: - # bool False: off - # bool True: on - if self.disabled: - raise AttributeError("control '%s' is disabled" % self.name) - if self.readonly: - raise AttributeError("control '%s' is readonly" % self.name) - action == bool(action) - compat = self._form.backwards_compat - if not compat and item.disabled: - raise AttributeError("item is disabled") - else: - if compat and item.disabled and action: - raise AttributeError("item is disabled") - if self.multiple: - item.__dict__["_selected"] = action - else: - if not action: - item.__dict__["_selected"] = False - else: - for o in self.items: - o.__dict__["_selected"] = False - item.__dict__["_selected"] = True - - def toggle_single(self, by_label=None): - """Deprecated: toggle the selection of the single item in this control. - - Raises ItemCountError if the control does not contain only one item. - - by_label argument is ignored, and included only for backwards - compatibility. - - """ - deprecation( - "control.items[0].selected = not control.items[0].selected") - if len(self.items) != 1: - raise ItemCountError( - "'%s' is not a single-item control" % self.name) - item = self.items[0] - self._set_selected_state(item, not item.selected) - - def set_single(self, selected, by_label=None): - """Deprecated: set the selection of the single item in this control. - - Raises ItemCountError if the control does not contain only one item. - - by_label argument is ignored, and included only for backwards - compatibility. 
- - """ - deprecation( - "control.items[0].selected = <boolean>") - if len(self.items) != 1: - raise ItemCountError( - "'%s' is not a single-item control" % self.name) - self._set_selected_state(self.items[0], selected) - - def get_item_disabled(self, name, by_label=False, nr=None): - """Get disabled state of named list item in a ListControl.""" - deprecation( - "control.get(...).disabled") - return self._get(name, by_label, nr).disabled - - def set_item_disabled(self, disabled, name, by_label=False, nr=None): - """Set disabled state of named list item in a ListControl. - - disabled: boolean disabled state - - """ - deprecation( - "control.get(...).disabled = <boolean>") - self._get(name, by_label, nr).disabled = disabled - - def set_all_items_disabled(self, disabled): - """Set disabled state of all list items in a ListControl. - - disabled: boolean disabled state - - """ - for o in self.items: - o.disabled = disabled - - def get_item_attrs(self, name, by_label=False, nr=None): - """Return dictionary of HTML attributes for a single ListControl item. - - The HTML element types that describe list items are: OPTION for SELECT - controls, INPUT for the rest. These elements have HTML attributes that - you may occasionally want to know about -- for example, the "alt" HTML - attribute gives a text string describing the item (graphical browsers - usually display this as a tooltip). - - The returned dictionary maps HTML attribute names to values. The names - and values are taken from the original HTML. - - """ - deprecation( - "control.get(...).attrs") - return self._get(name, by_label, nr).attrs - - def close_control(self): - self._closed = True - - def add_to_form(self, form): - assert self._form is None or form == self._form, ( - "can't add control to more than one form") - self._form = form - if self.name is None: - # always count nameless elements as separate controls - Control.add_to_form(self, form) - else: - for ii in range(len(form.controls)-1, -1, -1): - control = form.controls[ii] - if control.name == self.name and control.type == self.type: - if control._closed: - Control.add_to_form(self, form) - else: - control.merge_control(self) - break - else: - Control.add_to_form(self, form) - - def merge_control(self, control): - assert bool(control.multiple) == bool(self.multiple) - # usually, isinstance(control, self.__class__) - self.items.extend(control.items) - - def fixup(self): - """ - ListControls are built up from component list items (which are also - ListControls) during parsing. This method should be called after all - items have been added. See ListControl.__doc__ for the reason this is - required. - - """ - # Need to set default selection where no item was indicated as being - # selected by the HTML: - - # CHECKBOX: - # Nothing should be selected. - # SELECT/single, SELECT/multiple and RADIO: - # RFC 1866 (HTML 2.0): says first item should be selected. - # W3C HTML 4.01 Specification: says that client behaviour is - # undefined in this case. For RADIO, exactly one must be selected, - # though which one is undefined. - # Both Netscape and Microsoft Internet Explorer (IE) choose first - # item for SELECT/single. However, both IE5 and Mozilla (both 1.0 - # and Firebird 0.6) leave all items unselected for RADIO and - # SELECT/multiple. - - # Since both Netscape and IE all choose the first item for - # SELECT/single, we do the same. 
OTOH, both Netscape and IE - # leave SELECT/multiple with nothing selected, in violation of RFC 1866 - # (but not in violation of the W3C HTML 4 standard); the same is true - # of RADIO (which *is* in violation of the HTML 4 standard). We follow - # RFC 1866 if the _select_default attribute is set, and Netscape and IE - # otherwise. RFC 1866 and HTML 4 are always violated insofar as you - # can deselect all items in a RadioControl. - - for o in self.items: - # set items' controls to self, now that we've merged - o.__dict__["_control"] = self - - def __getattr__(self, name): - if name == "value": - compat = self._form.backwards_compat - if self.name is None: - return [] - return [o.name for o in self.items if o.selected and - (not o.disabled or compat)] - else: - raise AttributeError("%s instance has no attribute '%s'" % - (self.__class__.__name__, name)) - - def __setattr__(self, name, value): - if name == "value": - if self.disabled: - raise AttributeError("control '%s' is disabled" % self.name) - if self.readonly: - raise AttributeError("control '%s' is readonly" % self.name) - self._set_value(value) - elif name in ("name", "type", "multiple"): - raise AttributeError("%s attribute is readonly" % name) - else: - self.__dict__[name] = value - - def _set_value(self, value): - if value is None or isstringlike(value): - raise TypeError("ListControl, must set a sequence") - if not value: - compat = self._form.backwards_compat - for o in self.items: - if not o.disabled or compat: - o.selected = False - elif self.multiple: - self._multiple_set_value(value) - elif len(value) > 1: - raise ItemCountError( - "single selection list, must set sequence of " - "length 0 or 1") - else: - self._single_set_value(value) - - def _get_items(self, name, target=1): - all_items = self.get_items(name) - items = [o for o in all_items if not o.disabled] - if len(items) < target: - if len(all_items) < target: - raise ItemNotFoundError( - "insufficient items with name %r" % name) - else: - raise AttributeError( - "insufficient non-disabled items with name %s" % name) - on = [] - off = [] - for o in items: - if o.selected: - on.append(o) - else: - off.append(o) - return on, off - - def _single_set_value(self, value): - assert len(value) == 1 - on, off = self._get_items(value[0]) - assert len(on) <= 1 - if not on: - off[0].selected = True - - def _multiple_set_value(self, value): - compat = self._form.backwards_compat - turn_on = [] # transactional-ish - turn_off = [item for item in self.items if - item.selected and (not item.disabled or compat)] - names = {} - for nn in value: - if nn in names.keys(): - names[nn] += 1 - else: - names[nn] = 1 - for name, count in names.items(): - on, off = self._get_items(name, count) - for i in range(count): - if on: - item = on[0] - del on[0] - del turn_off[turn_off.index(item)] - else: - item = off[0] - del off[0] - turn_on.append(item) - for item in turn_off: - item.selected = False - for item in turn_on: - item.selected = True - - def set_value_by_label(self, value): - """Set the value of control by item labels. - - value is expected to be an iterable of strings that are substrings of - the item labels that should be selected. Before substring matching is - performed, the original label text is whitespace-compressed - (consecutive whitespace characters are converted to a single space - character) and leading and trailing whitespace is stripped. 
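A sketch of the label-based setter for a multiple-selection control; the control and label texts are invented, and matching follows the substring rules just described:

    toppings = form.find_control("toppings")          # hypothetical CHECKBOX group

    # labels are whitespace-compressed and stripped before substring matching,
    # so '  extra   cheese ' in the HTML matches "extra cheese" here
    toppings.set_value_by_label(["extra cheese", "olives"])
    chosen = toppings.get_value_by_label()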
Ambiguous - labels are accepted without complaint if the form's backwards_compat is - True; otherwise, it will not complain as long as all ambiguous labels - share the same item name (e.g. OPTION value). - - """ - if isstringlike(value): - raise TypeError(value) - if not self.multiple and len(value) > 1: - raise ItemCountError( - "single selection list, must set sequence of " - "length 0 or 1") - items = [] - for nn in value: - found = self.get_items(label=nn) - if len(found) > 1: - if not self._form.backwards_compat: - # ambiguous labels are fine as long as item names (e.g. - # OPTION values) are same - opt_name = found[0].name - if [o for o in found[1:] if o.name != opt_name]: - raise AmbiguityError(nn) - else: - # OK, we'll guess :-( Assume first available item. - found = found[:1] - for o in found: - # For the multiple-item case, we could try to be smarter, - # saving them up and trying to resolve, but that's too much. - if self._form.backwards_compat or o not in items: - items.append(o) - break - else: # all of them are used - raise ItemNotFoundError(nn) - # now we have all the items that should be on - # let's just turn everything off and then back on. - self.value = [] - for o in items: - o.selected = True - - def get_value_by_label(self): - """Return the value of the control as given by normalized labels.""" - res = [] - compat = self._form.backwards_compat - for o in self.items: - if (not o.disabled or compat) and o.selected: - for l in o.get_labels(): - if l.text: - res.append(l.text) - break - else: - res.append(None) - return res - - def possible_items(self, by_label=False): - """Deprecated: return the names or labels of all possible items. - - Includes disabled items, which may be misleading for some use cases. - - """ - deprecation( - "[item.name for item in self.items]") - if by_label: - res = [] - for o in self.items: - for l in o.get_labels(): - if l.text: - res.append(l.text) - break - else: - res.append(None) - return res - return [o.name for o in self.items] - - def _totally_ordered_pairs(self): - if self.disabled or self.name is None: - return [] - else: - return [(o._index, self.name, o.name) for o in self.items - if o.selected and not o.disabled] - - def __str__(self): - name = self.name - if name is None: name = "<None>" - - display = [str(o) for o in self.items] - - infos = [] - if self.disabled: infos.append("disabled") - if self.readonly: infos.append("readonly") - info = ", ".join(infos) - if info: info = " (%s)" % info - - return "<%s(%s=[%s])%s>" % (self.__class__.__name__, - name, ", ".join(display), info) - - -class RadioControl(ListControl): - """ - Covers: - - INPUT/RADIO - - """ - def __init__(self, type, name, attrs, select_default=False, index=None): - attrs.setdefault("value", "on") - ListControl.__init__(self, type, name, attrs, select_default, - called_as_base_class=True, index=index) - self.__dict__["multiple"] = False - o = Item(self, attrs, index) - o.__dict__["_selected"] = attrs.has_key("checked") - - def fixup(self): - ListControl.fixup(self) - found = [o for o in self.items if o.selected and not o.disabled] - if not found: - if self._select_default: - for o in self.items: - if not o.disabled: - o.selected = True - break - else: - # Ensure only one item selected. Choose the last one, - # following IE and Firefox. 
- for o in found[:-1]: - o.selected = False - - def get_labels(self): - return [] - -class CheckboxControl(ListControl): - """ - Covers: - - INPUT/CHECKBOX - - """ - def __init__(self, type, name, attrs, select_default=False, index=None): - attrs.setdefault("value", "on") - ListControl.__init__(self, type, name, attrs, select_default, - called_as_base_class=True, index=index) - self.__dict__["multiple"] = True - o = Item(self, attrs, index) - o.__dict__["_selected"] = attrs.has_key("checked") - - def get_labels(self): - return [] - - -class SelectControl(ListControl): - """ - Covers: - - SELECT (and OPTION) - - - OPTION 'values', in HTML parlance, are Item 'names' in mechanize parlance. - - SELECT control values and labels are subject to some messy defaulting - rules. For example, if the HTML representation of the control is: - - <SELECT name=year> - <OPTION value=0 label="2002">current year</OPTION> - <OPTION value=1>2001</OPTION> - <OPTION>2000</OPTION> - </SELECT> - - The items, in order, have labels "2002", "2001" and "2000", whereas their - names (the OPTION values) are "0", "1" and "2000" respectively. Note that - the value of the last OPTION in this example defaults to its contents, as - specified by RFC 1866, as do the labels of the second and third OPTIONs. - - The OPTION labels are sometimes more meaningful than the OPTION values, - which can make for more maintainable code. - - Additional read-only public attribute: attrs - - The attrs attribute is a dictionary of the original HTML attributes of the - SELECT element. Other ListControls do not have this attribute, because in - other cases the control as a whole does not correspond to any single HTML - element. control.get(...).attrs may be used as usual to get at the HTML - attributes of the HTML elements corresponding to individual list items (for - SELECT controls, these are OPTION elements). - - Another special case is that the Item.attrs dictionaries have a special key - "contents" which does not correspond to any real HTML attribute, but rather - contains the contents of the OPTION element: - - <OPTION>this bit</OPTION> - - """ - # HTML attributes here are treated slightly differently from other list - # controls: - # -The SELECT HTML attributes dictionary is stuffed into the OPTION - # HTML attributes dictionary under the "__select" key. - # -The content of each OPTION element is stored under the special - # "contents" key of the dictionary. - # After all this, the dictionary is passed to the SelectControl constructor - # as the attrs argument, as usual. However: - # -The first SelectControl constructed when building up a SELECT control - # has a constructor attrs argument containing only the __select key -- so - # this SelectControl represents an empty SELECT control. - # -Subsequent SelectControls have both OPTION HTML-attribute in attrs and - # the __select dictionary containing the SELECT HTML-attributes. 
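The OPTION defaulting rules in the SelectControl docstring above are easiest to see on the 'year' example it gives; a brief sketch, assuming form was parsed from that HTML:

    year = form.find_control("year", type="select")

    names = [item.name for item in year.items]   # ["0", "1", "2000"]: values default to contents
    year.get(label="2002").name                  # "0": label attribute "2002" matches, value stays "0"

    form["year"] = ["0"]                         # select by item name...
    year.set_value_by_label(["2002"])            # ...or by label; both pick the same OPTION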
- - def __init__(self, type, name, attrs, select_default=False, index=None): - # fish out the SELECT HTML attributes from the OPTION HTML attributes - # dictionary - self.attrs = attrs["__select"].copy() - self.__dict__["_label"] = _get_label(self.attrs) - self.__dict__["id"] = self.attrs.get("id") - self.__dict__["multiple"] = self.attrs.has_key("multiple") - # the majority of the contents, label, and value dance already happened - contents = attrs.get("contents") - attrs = attrs.copy() - del attrs["__select"] - - ListControl.__init__(self, type, name, self.attrs, select_default, - called_as_base_class=True, index=index) - self.disabled = self.attrs.has_key("disabled") - self.readonly = self.attrs.has_key("readonly") - if attrs.has_key("value"): - # otherwise it is a marker 'select started' token - o = Item(self, attrs, index) - o.__dict__["_selected"] = attrs.has_key("selected") - # add 'label' label and contents label, if different. If both are - # provided, the 'label' label is used for display in HTML - # 4.0-compliant browsers (and any lower spec? not sure) while the - # contents are used for display in older or less-compliant - # browsers. We make label objects for both, if the values are - # different. - label = attrs.get("label") - if label: - o._labels.append(Label({"__text": label})) - if contents and contents != label: - o._labels.append(Label({"__text": contents})) - elif contents: - o._labels.append(Label({"__text": contents})) - - def fixup(self): - ListControl.fixup(self) - # Firefox doesn't exclude disabled items from those considered here - # (i.e. from 'found', for both branches of the if below). Note that - # IE6 doesn't support the disabled attribute on OPTIONs at all. - found = [o for o in self.items if o.selected] - if not found: - if not self.multiple or self._select_default: - for o in self.items: - if not o.disabled: - was_disabled = self.disabled - self.disabled = False - try: - o.selected = True - finally: - o.disabled = was_disabled - break - elif not self.multiple: - # Ensure only one item selected. Choose the last one, - # following IE and Firefox. - for o in found[:-1]: - o.selected = False - - -#--------------------------------------------------- -class SubmitControl(ScalarControl): - """ - Covers: - - INPUT/SUBMIT - BUTTON/SUBMIT - - """ - def __init__(self, type, name, attrs, index=None): - ScalarControl.__init__(self, type, name, attrs, index) - # IE5 defaults SUBMIT value to "Submit Query"; Firebird 0.6 leaves it - # blank, Konqueror 3.1 defaults to "Submit". HTML spec. doesn't seem - # to define this. - if self.value is None: self.value = "" - self.readonly = True - - def get_labels(self): - res = [] - if self.value: - res.append(Label({"__text": self.value})) - res.extend(ScalarControl.get_labels(self)) - return res - - def is_of_kind(self, kind): return kind == "clickable" - - def _click(self, form, coord, return_type, request_class=_request.Request): - self._clicked = coord - r = form._switch_click(return_type, request_class) - self._clicked = False - return r - - def _totally_ordered_pairs(self): - if not self._clicked: - return [] - return ScalarControl._totally_ordered_pairs(self) - - -#--------------------------------------------------- -class ImageControl(SubmitControl): - """ - Covers: - - INPUT/IMAGE - - Coordinates are specified using one of the HTMLForm.click* methods. 
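For IMAGE inputs the click coordinate is part of the submitted data; a small sketch with an invented control name 'map':

    request = form.click(type="image", coord=(10, 20))       # builds a request object

    # the successful pairs then contain "<name>.x" and "<name>.y" entries,
    # e.g. ("map.x", "10") and ("map.y", "20")
    pairs = form.click_pairs(type="image", coord=(10, 20))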
- - """ - def __init__(self, type, name, attrs, index=None): - SubmitControl.__init__(self, type, name, attrs, index) - self.readonly = False - - def _totally_ordered_pairs(self): - clicked = self._clicked - if self.disabled or not clicked: - return [] - name = self.name - if name is None: return [] - pairs = [ - (self._index, "%s.x" % name, str(clicked[0])), - (self._index+1, "%s.y" % name, str(clicked[1])), - ] - value = self._value - if value: - pairs.append((self._index+2, name, value)) - return pairs - - get_labels = ScalarControl.get_labels - -# aliases, just to make str(control) and str(form) clearer -class PasswordControl(TextControl): pass -class HiddenControl(TextControl): pass -class TextareaControl(TextControl): pass -class SubmitButtonControl(SubmitControl): pass - - -def is_listcontrol(control): return control.is_of_kind("list") - - -class HTMLForm: - """Represents a single HTML <form> ... </form> element. - - A form consists of a sequence of controls that usually have names, and - which can take on various values. The values of the various types of - controls represent variously: text, zero-or-one-of-many or many-of-many - choices, and files to be uploaded. Some controls can be clicked on to - submit the form, and clickable controls' values sometimes include the - coordinates of the click. - - Forms can be filled in with data to be returned to the server, and then - submitted, using the click method to generate a request object suitable for - passing to mechanize.urlopen (or the click_request_data or click_pairs - methods for integration with third-party code). - - import mechanize - forms = mechanize.ParseFile(html, base_uri) - form = forms[0] - - form["query"] = "Python" - form.find_control("nr_results").get("lots").selected = True - - response = mechanize.urlopen(form.click()) - - Usually, HTMLForm instances are not created directly. Instead, the - ParseFile or ParseResponse factory functions are used. If you do construct - HTMLForm objects yourself, however, note that an HTMLForm instance is only - properly initialised after the fixup method has been called (ParseFile and - ParseResponse do this for you). See ListControl.__doc__ for the reason - this is required. - - Indexing a form (form["control_name"]) returns the named Control's value - attribute. Assignment to a form index (form["control_name"] = something) - is equivalent to assignment to the named Control's value attribute. If you - need to be more specific than just supplying the control's name, use the - set_value and get_value methods. - - ListControl values are lists of item names (specifically, the names of the - items that are selected and not disabled, and hence are "successful" -- ie. - cause data to be returned to the server). The list item's name is the - value of the corresponding HTML element's"value" attribute. - - Example: - - <INPUT type="CHECKBOX" name="cheeses" value="leicester"></INPUT> - <INPUT type="CHECKBOX" name="cheeses" value="cheddar"></INPUT> - - defines a CHECKBOX control with name "cheeses" which has two items, named - "leicester" and "cheddar". - - Another example: - - <SELECT name="more_cheeses"> - <OPTION>1</OPTION> - <OPTION value="2" label="CHEDDAR">cheddar</OPTION> - </SELECT> - - defines a SELECT control with name "more_cheeses" which has two items, - named "1" and "2" (because the OPTION element's value HTML attribute - defaults to the element contents -- see SelectControl.__doc__ for more on - these defaulting rules). 
- - To select, deselect or otherwise manipulate individual list items, use the - HTMLForm.find_control() and ListControl.get() methods. To set the whole - value, do as for any other control: use indexing or the set_/get_value - methods. - - Example: - - # select *only* the item named "cheddar" - form["cheeses"] = ["cheddar"] - # select "cheddar", leave other items unaffected - form.find_control("cheeses").get("cheddar").selected = True - - Some controls (RADIO and SELECT without the multiple attribute) can only - have zero or one items selected at a time. Some controls (CHECKBOX and - SELECT with the multiple attribute) can have multiple items selected at a - time. To set the whole value of a ListControl, assign a sequence to a form - index: - - form["cheeses"] = ["cheddar", "leicester"] - - If the ListControl is not multiple-selection, the assigned list must be of - length one. - - To check if a control has an item, if an item is selected, or if an item is - successful (selected and not disabled), respectively: - - "cheddar" in [item.name for item in form.find_control("cheeses").items] - "cheddar" in [item.name for item in form.find_control("cheeses").items and - item.selected] - "cheddar" in form["cheeses"] # (or "cheddar" in form.get_value("cheeses")) - - Note that some list items may be disabled (see below). - - Note the following mistake: - - form[control_name] = control_value - assert form[control_name] == control_value # not necessarily true - - The reason for this is that form[control_name] always gives the list items - in the order they were listed in the HTML. - - List items (hence list values, too) can be referred to in terms of list - item labels rather than list item names using the appropriate label - arguments. Note that each item may have several labels. - - The question of default values of OPTION contents, labels and values is - somewhat complicated: see SelectControl.__doc__ and - ListControl.get_item_attrs.__doc__ if you think you need to know. - - Controls can be disabled or readonly. In either case, the control's value - cannot be changed until you clear those flags (see example below). - Disabled is the state typically represented by browsers by 'greying out' a - control. Disabled controls are not 'successful' -- they don't cause data - to get returned to the server. Readonly controls usually appear in - browsers as read-only text boxes. Readonly controls are successful. List - items can also be disabled. Attempts to select or deselect disabled items - fail with AttributeError. - - If a lot of controls are readonly, it can be useful to do this: - - form.set_all_readonly(False) - - To clear a control's value attribute, so that it is not successful (until a - value is subsequently set): - - form.clear("cheeses") - - More examples: - - control = form.find_control("cheeses") - control.disabled = False - control.readonly = False - control.get("gruyere").disabled = True - control.items[0].selected = True - - See the various Control classes for further documentation. Many methods - take name, type, kind, id, label and nr arguments to specify the control to - be operated on: see HTMLForm.find_control.__doc__. - - ControlNotFoundError (subclass of ValueError) is raised if the specified - control can't be found. This includes occasions where a non-ListControl - is found, but the method (set, for example) requires a ListControl. - ItemNotFoundError (subclass of ValueError) is raised if a list item can't - be found. 
ItemCountError (subclass of ValueError) is raised if an attempt - is made to select more than one item and the control doesn't allow that, or - set/get_single are called and the control contains more than one item. - AttributeError is raised if a control or item is readonly or disabled and - an attempt is made to alter its value. - - Security note: Remember that any passwords you store in HTMLForm instances - will be saved to disk in the clear if you pickle them (directly or - indirectly). The simplest solution to this is to avoid pickling HTMLForm - objects. You could also pickle before filling in any password, or just set - the password to "" before pickling. - - - Public attributes: - - action: full (absolute URI) form action - method: "GET" or "POST" - enctype: form transfer encoding MIME type - name: name of form (None if no name was specified) - attrs: dictionary mapping original HTML form attributes to their values - - controls: list of Control instances; do not alter this list - (instead, call form.new_control to make a Control and add it to the - form, or control.add_to_form if you already have a Control instance) - - - - Methods for form filling: - ------------------------- - - Most of the these methods have very similar arguments. See - HTMLForm.find_control.__doc__ for details of the name, type, kind, label - and nr arguments. - - def find_control(self, - name=None, type=None, kind=None, id=None, predicate=None, - nr=None, label=None) - - get_value(name=None, type=None, kind=None, id=None, nr=None, - by_label=False, # by_label is deprecated - label=None) - set_value(value, - name=None, type=None, kind=None, id=None, nr=None, - by_label=False, # by_label is deprecated - label=None) - - clear_all() - clear(name=None, type=None, kind=None, id=None, nr=None, label=None) - - set_all_readonly(readonly) - - - Method applying only to FileControls: - - add_file(file_object, - content_type="application/octet-stream", filename=None, - name=None, id=None, nr=None, label=None) - - - Methods applying only to clickable controls: - - click(name=None, type=None, id=None, nr=0, coord=(1,1), label=None) - click_request_data(name=None, type=None, id=None, nr=0, coord=(1,1), - label=None) - click_pairs(name=None, type=None, id=None, nr=0, coord=(1,1), label=None) - - """ - - type2class = { - "text": TextControl, - "password": PasswordControl, - "hidden": HiddenControl, - "textarea": TextareaControl, - - "isindex": IsindexControl, - - "file": FileControl, - - "button": IgnoreControl, - "buttonbutton": IgnoreControl, - "reset": IgnoreControl, - "resetbutton": IgnoreControl, - - "submit": SubmitControl, - "submitbutton": SubmitButtonControl, - "image": ImageControl, - - "radio": RadioControl, - "checkbox": CheckboxControl, - "select": SelectControl, - } - -#--------------------------------------------------- -# Initialisation. Use ParseResponse / ParseFile instead. - - def __init__(self, action, method="GET", - enctype="application/x-www-form-urlencoded", - name=None, attrs=None, - request_class=_request.Request, - forms=None, labels=None, id_to_labels=None, - backwards_compat=True): - """ - In the usual case, use ParseResponse (or ParseFile) to create new - HTMLForm objects. 
- - action: full (absolute URI) form action - method: "GET" or "POST" - enctype: form transfer encoding MIME type - name: name of form - attrs: dictionary mapping original HTML form attributes to their values - - """ - self.action = action - self.method = method - self.enctype = enctype - self.name = name - if attrs is not None: - self.attrs = attrs.copy() - else: - self.attrs = {} - self.controls = [] - self._request_class = request_class - - # these attributes are used by zope.testbrowser - self._forms = forms # this is a semi-public API! - self._labels = labels # this is a semi-public API! - self._id_to_labels = id_to_labels # this is a semi-public API! - - self.backwards_compat = backwards_compat # note __setattr__ - - self._urlunparse = urlparse.urlunparse - self._urlparse = urlparse.urlparse - - def __getattr__(self, name): - if name == "backwards_compat": - return self._backwards_compat - return getattr(HTMLForm, name) - - def __setattr__(self, name, value): - # yuck - if name == "backwards_compat": - name = "_backwards_compat" - value = bool(value) - for cc in self.controls: - try: - items = cc.items - except AttributeError: - continue - else: - for ii in items: - for ll in ii.get_labels(): - ll._backwards_compat = value - self.__dict__[name] = value - - def new_control(self, type, name, attrs, - ignore_unknown=False, select_default=False, index=None): - """Adds a new control to the form. - - This is usually called by ParseFile and ParseResponse. Don't call it - youself unless you're building your own Control instances. - - Note that controls representing lists of items are built up from - controls holding only a single list item. See ListControl.__doc__ for - further information. - - type: type of control (see Control.__doc__ for a list) - attrs: HTML attributes of control - ignore_unknown: if true, use a dummy Control instance for controls of - unknown type; otherwise, use a TextControl - select_default: for RADIO and multiple-selection SELECT controls, pick - the first item as the default if no 'selected' HTML attribute is - present (this defaulting happens when the HTMLForm.fixup method is - called) - index: index of corresponding element in HTML (see - MoreFormTests.test_interspersed_controls for motivation) - - """ - type = type.lower() - klass = self.type2class.get(type) - if klass is None: - if ignore_unknown: - klass = IgnoreControl - else: - klass = TextControl - - a = attrs.copy() - if issubclass(klass, ListControl): - control = klass(type, name, a, select_default, index) - else: - control = klass(type, name, a, index) - - if type == "select" and len(attrs) == 1: - for ii in range(len(self.controls)-1, -1, -1): - ctl = self.controls[ii] - if ctl.type == "select": - ctl.close_control() - break - - control.add_to_form(self) - control._urlparse = self._urlparse - control._urlunparse = self._urlunparse - - def fixup(self): - """Normalise form after all controls have been added. - - This is usually called by ParseFile and ParseResponse. Don't call it - youself unless you're building your own Control instances. - - This method should only be called once, after all controls have been - added to the form. 
- - """ - for control in self.controls: - control.fixup() - self.backwards_compat = self._backwards_compat - -#--------------------------------------------------- - def __str__(self): - header = "%s%s %s %s" % ( - (self.name and self.name+" " or ""), - self.method, self.action, self.enctype) - rep = [header] - for control in self.controls: - rep.append(" %s" % str(control)) - return "<%s>" % "\n".join(rep) - -#--------------------------------------------------- -# Form-filling methods. - - def __getitem__(self, name): - return self.find_control(name).value - def __contains__(self, name): - return bool(self.find_control(name)) - def __setitem__(self, name, value): - control = self.find_control(name) - try: - control.value = value - except AttributeError, e: - raise ValueError(str(e)) - - def get_value(self, - name=None, type=None, kind=None, id=None, nr=None, - by_label=False, # by_label is deprecated - label=None): - """Return value of control. - - If only name and value arguments are supplied, equivalent to - - form[name] - - """ - if by_label: - deprecation("form.get_value_by_label(...)") - c = self.find_control(name, type, kind, id, label=label, nr=nr) - if by_label: - try: - meth = c.get_value_by_label - except AttributeError: - raise NotImplementedError( - "control '%s' does not yet support by_label" % c.name) - else: - return meth() - else: - return c.value - def set_value(self, value, - name=None, type=None, kind=None, id=None, nr=None, - by_label=False, # by_label is deprecated - label=None): - """Set value of control. - - If only name and value arguments are supplied, equivalent to - - form[name] = value - - """ - if by_label: - deprecation("form.get_value_by_label(...)") - c = self.find_control(name, type, kind, id, label=label, nr=nr) - if by_label: - try: - meth = c.set_value_by_label - except AttributeError: - raise NotImplementedError( - "control '%s' does not yet support by_label" % c.name) - else: - meth(value) - else: - c.value = value - def get_value_by_label( - self, name=None, type=None, kind=None, id=None, label=None, nr=None): - """ - - All arguments should be passed by name. - - """ - c = self.find_control(name, type, kind, id, label=label, nr=nr) - return c.get_value_by_label() - - def set_value_by_label( - self, value, - name=None, type=None, kind=None, id=None, label=None, nr=None): - """ - - All arguments should be passed by name. - - """ - c = self.find_control(name, type, kind, id, label=label, nr=nr) - c.set_value_by_label(value) - - def set_all_readonly(self, readonly): - for control in self.controls: - control.readonly = bool(readonly) - - def clear_all(self): - """Clear the value attributes of all controls in the form. - - See HTMLForm.clear.__doc__. - - """ - for control in self.controls: - control.clear() - - def clear(self, - name=None, type=None, kind=None, id=None, nr=None, label=None): - """Clear the value attribute of a control. - - As a result, the affected control will not be successful until a value - is subsequently set. AttributeError is raised on readonly controls. - - """ - c = self.find_control(name, type, kind, id, label=label, nr=nr) - c.clear() - - -#--------------------------------------------------- -# Form-filling methods applying only to ListControls. 
- - def possible_items(self, # deprecated - name=None, type=None, kind=None, id=None, - nr=None, by_label=False, label=None): - """Return a list of all values that the specified control can take.""" - c = self._find_list_control(name, type, kind, id, label, nr) - return c.possible_items(by_label) - - def set(self, selected, item_name, # deprecated - name=None, type=None, kind=None, id=None, nr=None, - by_label=False, label=None): - """Select / deselect named list item. - - selected: boolean selected state - - """ - self._find_list_control(name, type, kind, id, label, nr).set( - selected, item_name, by_label) - def toggle(self, item_name, # deprecated - name=None, type=None, kind=None, id=None, nr=None, - by_label=False, label=None): - """Toggle selected state of named list item.""" - self._find_list_control(name, type, kind, id, label, nr).toggle( - item_name, by_label) - - def set_single(self, selected, # deprecated - name=None, type=None, kind=None, id=None, - nr=None, by_label=None, label=None): - """Select / deselect list item in a control having only one item. - - If the control has multiple list items, ItemCountError is raised. - - This is just a convenience method, so you don't need to know the item's - name -- the item name in these single-item controls is usually - something meaningless like "1" or "on". - - For example, if a checkbox has a single item named "on", the following - two calls are equivalent: - - control.toggle("on") - control.toggle_single() - - """ # by_label ignored and deprecated - self._find_list_control( - name, type, kind, id, label, nr).set_single(selected) - def toggle_single(self, name=None, type=None, kind=None, id=None, - nr=None, by_label=None, label=None): # deprecated - """Toggle selected state of list item in control having only one item. - - The rest is as for HTMLForm.set_single.__doc__. - - """ # by_label ignored and deprecated - self._find_list_control(name, type, kind, id, label, nr).toggle_single() - -#--------------------------------------------------- -# Form-filling method applying only to FileControls. - - def add_file(self, file_object, content_type=None, filename=None, - name=None, id=None, nr=None, label=None): - """Add a file to be uploaded. - - file_object: file-like object (with read method) from which to read - data to upload - content_type: MIME content type of data to upload - filename: filename to pass to server - - If filename is None, no filename is sent to the server. - - If content_type is None, the content type is guessed based on the - filename and the data from read from the file object. - - XXX - At the moment, guessed content type is always application/octet-stream. - Use sndhdr, imghdr modules. Should also try to guess HTML, XML, and - plain text. - - Note the following useful HTML attributes of file upload controls (see - HTML 4.01 spec, section 17): - - accept: comma-separated list of content types that the server will - handle correctly; you can use this to filter out non-conforming files - size: XXX IIRC, this is indicative of whether form wants multiple or - single files - maxlength: XXX hint of max content length in bytes? - - """ - self.find_control(name, "file", id=id, label=label, nr=nr).add_file( - file_object, content_type, filename) - -#--------------------------------------------------- -# Form submission methods, applying only to clickable controls. 
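A hedged sketch of add_file() followed by a submit, assuming a hypothetical page whose form uses enctype multipart/form-data and contains a file control named "upload" (none of these names come from the deleted source):

    import mechanize

    response = mechanize.urlopen("http://example.com/upload")   # hypothetical URL
    form = mechanize.ParseResponse(response, backwards_compat=False)[0]

    form.add_file(open("notes.txt"), content_type="text/plain",
                  filename="notes.txt", name="upload")
    request = form.click()               # mechanize.Request for the first submit button
    reply = mechanize.urlopen(request)
    print reply.geturl()
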
- - def click(self, name=None, type=None, id=None, nr=0, coord=(1,1), - request_class=_request.Request, - label=None): - """Return request that would result from clicking on a control. -
- The request object is a mechanize.Request instance, which you can pass - to mechanize.urlopen. -
- Only some control types (INPUT/SUBMIT & BUTTON/SUBMIT buttons and - IMAGEs) can be clicked. -
- Will click on the first clickable control, subject to the name, type - and nr arguments (as for find_control). If no name, type, id or number - is specified and there are no clickable controls, a request will be - returned for the form in its current, un-clicked, state. -
- IndexError is raised if any of name, type, id or nr is specified but no - matching control is found. ValueError is raised if the HTMLForm has an - enctype attribute that is not recognised. -
- You can optionally specify a coordinate to click at, which only makes a - difference if you clicked on an image. -
- """ - return self._click(name, type, id, label, nr, coord, "request", - self._request_class) -
- def click_request_data(self, - name=None, type=None, id=None, - nr=0, coord=(1,1), - request_class=_request.Request, - label=None): - """As for click method, but return a tuple (url, data, headers). -
- You can use this data to send a request to the server. This is useful - if you're using httplib or urllib rather than mechanize. Otherwise, - use the click method. -
- # Untested. Have to subclass to add headers, I think -- so use - # mechanize instead! - import urllib - url, data, hdrs = form.click_request_data() - r = urllib.urlopen(url, data) -
- # Untested. I don't know of any reason to use httplib -- you can get - # just as much control with mechanize. - import httplib, urlparse - url, data, hdrs = form.click_request_data() - tup = urlparse.urlparse(url) - host, path = tup[1], urlparse.urlunparse((None, None)+tup[2:]) - conn = httplib.HTTPConnection(host) - if data: - conn.request("POST", path, data, hdrs) - else: - conn.request("GET", path, headers=hdrs) - r = conn.getresponse() -
- """ - return self._click(name, type, id, label, nr, coord, "request_data", - self._request_class) -
- def click_pairs(self, name=None, type=None, id=None, - nr=0, coord=(1,1), - label=None): - """As for click_request_data, but returns a list of (key, value) pairs. -
- You can use this list as an argument to urllib.urlencode. This is - usually only useful if you're using httplib or urllib rather than - mechanize. It may also be useful if you want to manually tweak the - keys and/or values, but this should not be necessary. Otherwise, use - the click method. -
- Note that this method is only useful for forms of MIME type - x-www-form-urlencoded. In particular, it does not return the - information required for file upload. If you need file upload and are - not using mechanize, use click_request_data. - """ - return self._click(name, type, id, label, nr, coord, "pairs", - self._request_class) -
-#--------------------------------------------------- -
- def find_control(self, - name=None, type=None, kind=None, id=None, - predicate=None, nr=None, - label=None): - """Locate and return some specific control within the form. -
- At least one of the name, type, kind, predicate and nr arguments must - be supplied. If no matching control is found, ControlNotFoundError is - raised. -
- If name is specified, then the control must have the indicated name.
- - If type is specified then the control must have the specified type (in - addition to the types possible for <input> HTML tags: "text", - "password", "hidden", "submit", "image", "button", "radio", "checkbox", - "file" we also have "reset", "buttonbutton", "submitbutton", - "resetbutton", "textarea", "select" and "isindex"). - - If kind is specified, then the control must fall into the specified - group, each of which satisfies a particular interface. The types are - "text", "list", "multilist", "singlelist", "clickable" and "file". - - If id is specified, then the control must have the indicated id. - - If predicate is specified, then the control must match that function. - The predicate function is passed the control as its single argument, - and should return a boolean value indicating whether the control - matched. - - nr, if supplied, is the sequence number of the control (where 0 is the - first). Note that control 0 is the first control matching all the - other arguments (if supplied); it is not necessarily the first control - in the form. If no nr is supplied, AmbiguityError is raised if - multiple controls match the other arguments (unless the - .backwards-compat attribute is true). - - If label is specified, then the control must have this label. Note - that radio controls and checkboxes never have labels: their items do. - - """ - if ((name is None) and (type is None) and (kind is None) and - (id is None) and (label is None) and (predicate is None) and - (nr is None)): - raise ValueError( - "at least one argument must be supplied to specify control") - return self._find_control(name, type, kind, id, label, predicate, nr) - -#--------------------------------------------------- -# Private methods. - - def _find_list_control(self, - name=None, type=None, kind=None, id=None, - label=None, nr=None): - if ((name is None) and (type is None) and (kind is None) and - (id is None) and (label is None) and (nr is None)): - raise ValueError( - "at least one argument must be supplied to specify control") - - return self._find_control(name, type, kind, id, label, - is_listcontrol, nr) - - def _find_control(self, name, type, kind, id, label, predicate, nr): - if ((name is not None) and (name is not Missing) and - not isstringlike(name)): - raise TypeError("control name must be string-like") - if (type is not None) and not isstringlike(type): - raise TypeError("control type must be string-like") - if (kind is not None) and not isstringlike(kind): - raise TypeError("control kind must be string-like") - if (id is not None) and not isstringlike(id): - raise TypeError("control id must be string-like") - if (label is not None) and not isstringlike(label): - raise TypeError("control label must be string-like") - if (predicate is not None) and not callable(predicate): - raise TypeError("control predicate must be callable") - if (nr is not None) and nr < 0: - raise ValueError("control number must be a positive integer") - - orig_nr = nr - found = None - ambiguous = False - if nr is None and self.backwards_compat: - nr = 0 - - for control in self.controls: - if ((name is not None and name != control.name) and - (name is not Missing or control.name is not None)): - continue - if type is not None and type != control.type: - continue - if kind is not None and not control.is_of_kind(kind): - continue - if id is not None and id != control.id: - continue - if predicate and not predicate(control): - continue - if label: - for l in control.get_labels(): - if l.text.find(label) > -1: - break - else: - continue 
- if nr is not None: - if nr == 0: - return control # early exit: unambiguous due to nr - nr -= 1 - continue - if found: - ambiguous = True - break - found = control - - if found and not ambiguous: - return found - - description = [] - if name is not None: description.append("name %s" % repr(name)) - if type is not None: description.append("type '%s'" % type) - if kind is not None: description.append("kind '%s'" % kind) - if id is not None: description.append("id '%s'" % id) - if label is not None: description.append("label '%s'" % label) - if predicate is not None: - description.append("predicate %s" % predicate) - if orig_nr: description.append("nr %d" % orig_nr) - description = ", ".join(description) - - if ambiguous: - raise AmbiguityError("more than one control matching "+description) - elif not found: - raise ControlNotFoundError("no control matching "+description) - assert False - - def _click(self, name, type, id, label, nr, coord, return_type, - request_class=_request.Request): - try: - control = self._find_control( - name, type, "clickable", id, label, None, nr) - except ControlNotFoundError: - if ((name is not None) or (type is not None) or (id is not None) or - (label is not None) or (nr != 0)): - raise - # no clickable controls, but no control was explicitly requested, - # so return state without clicking any control - return self._switch_click(return_type, request_class) - else: - return control._click(self, coord, return_type, request_class) - - def _pairs(self): - """Return sequence of (key, value) pairs suitable for urlencoding.""" - return [(k, v) for (i, k, v, c_i) in self._pairs_and_controls()] - - - def _pairs_and_controls(self): - """Return sequence of (index, key, value, control_index) - of totally ordered pairs suitable for urlencoding. 
- - control_index is the index of the control in self.controls - """ - pairs = [] - for control_index in range(len(self.controls)): - control = self.controls[control_index] - for ii, key, val in control._totally_ordered_pairs(): - pairs.append((ii, key, val, control_index)) - - # stable sort by ONLY first item in tuple - pairs.sort() - - return pairs - - def _request_data(self): - """Return a tuple (url, data, headers).""" - method = self.method.upper() - #scheme, netloc, path, parameters, query, frag = urlparse.urlparse(self.action) - parts = self._urlparse(self.action) - rest, (query, frag) = parts[:-2], parts[-2:] - - if method == "GET": - if self.enctype != "application/x-www-form-urlencoded": - raise ValueError( - "unknown GET form encoding type '%s'" % self.enctype) - parts = rest + (urllib.urlencode(self._pairs()), None) - uri = self._urlunparse(parts) - return uri, None, [] - elif method == "POST": - parts = rest + (query, None) - uri = self._urlunparse(parts) - if self.enctype == "application/x-www-form-urlencoded": - return (uri, urllib.urlencode(self._pairs()), - [("Content-Type", self.enctype)]) - elif self.enctype == "multipart/form-data": - data = StringIO() - http_hdrs = [] - mw = MimeWriter(data, http_hdrs) - mw.startmultipartbody("form-data", add_to_http_hdrs=True, - prefix=0) - for ii, k, v, control_index in self._pairs_and_controls(): - self.controls[control_index]._write_mime_data(mw, k, v) - mw.lastpart() - return uri, data.getvalue(), http_hdrs - else: - raise ValueError( - "unknown POST form encoding type '%s'" % self.enctype) - else: - raise ValueError("Unknown method '%s'" % method) - - def _switch_click(self, return_type, request_class=_request.Request): - # This is called by HTMLForm and clickable Controls to hide switching - # on return_type. 
- if return_type == "pairs": - return self._pairs() - elif return_type == "request_data": - return self._request_data() - else: - req_data = self._request_data() - req = request_class(req_data[0], req_data[1]) - for key, val in req_data[2]: - add_hdr = req.add_header - if key.lower() == "content-type": - try: - add_hdr = req.add_unredirected_header - except AttributeError: - # pre-2.4 and not using ClientCookie - pass - add_hdr(key, val) - return req diff --git a/plugin.video.alfa/lib/mechanize/_gzip.py b/plugin.video.alfa/lib/mechanize/_gzip.py deleted file mode 100755 index 03229326..00000000 --- a/plugin.video.alfa/lib/mechanize/_gzip.py +++ /dev/null @@ -1,105 +0,0 @@ -from cStringIO import StringIO - -import _response -import _urllib2_fork - - -# GzipConsumer was taken from Fredrik Lundh's effbot.org-0.1-20041009 library -class GzipConsumer: - - def __init__(self, consumer): - self.__consumer = consumer - self.__decoder = None - self.__data = "" - - def __getattr__(self, key): - return getattr(self.__consumer, key) - - def feed(self, data): - if self.__decoder is None: - # check if we have a full gzip header - data = self.__data + data - try: - i = 10 - flag = ord(data[3]) - if flag & 4: # extra - x = ord(data[i]) + 256*ord(data[i+1]) - i = i + 2 + x - if flag & 8: # filename - while ord(data[i]): - i = i + 1 - i = i + 1 - if flag & 16: # comment - while ord(data[i]): - i = i + 1 - i = i + 1 - if flag & 2: # crc - i = i + 2 - if len(data) < i: - raise IndexError("not enough data") - if data[:3] != "\x1f\x8b\x08": - raise IOError("invalid gzip data") - data = data[i:] - except IndexError: - self.__data = data - return # need more data - import zlib - self.__data = "" - self.__decoder = zlib.decompressobj(-zlib.MAX_WBITS) - data = self.__decoder.decompress(data) - if data: - self.__consumer.feed(data) - - def close(self): - if self.__decoder: - data = self.__decoder.flush() - if data: - self.__consumer.feed(data) - self.__consumer.close() - - -# -------------------------------------------------------------------- - -# the rest of this module is John Lee's stupid code, not -# Fredrik's nice code :-) - -class stupid_gzip_consumer: - def __init__(self): self.data = [] - def feed(self, data): self.data.append(data) - -class stupid_gzip_wrapper(_response.closeable_response): - def __init__(self, response): - self._response = response - - c = stupid_gzip_consumer() - gzc = GzipConsumer(c) - gzc.feed(response.read()) - self.__data = StringIO("".join(c.data)) - - def read(self, size=-1): - return self.__data.read(size) - def readline(self, size=-1): - return self.__data.readline(size) - def readlines(self, sizehint=-1): - return self.__data.readlines(sizehint) - - def __getattr__(self, name): - # delegate unknown methods/attributes - return getattr(self._response, name) - -class HTTPGzipProcessor(_urllib2_fork.BaseHandler): - handler_order = 200 # response processing before HTTPEquivProcessor - - def http_request(self, request): - request.add_header("Accept-Encoding", "gzip") - return request - - def http_response(self, request, response): - # post-process response - enc_hdrs = response.info().getheaders("Content-encoding") - for enc_hdr in enc_hdrs: - if ("gzip" in enc_hdr) or ("compress" in enc_hdr): - return stupid_gzip_wrapper(response) - return response - - https_response = http_response diff --git a/plugin.video.alfa/lib/mechanize/_headersutil.py b/plugin.video.alfa/lib/mechanize/_headersutil.py deleted file mode 100755 index 102eb008..00000000 --- 
a/plugin.video.alfa/lib/mechanize/_headersutil.py +++ /dev/null @@ -1,241 +0,0 @@ -"""Utility functions for HTTP header value parsing and construction. - -Copyright 1997-1998, Gisle Aas -Copyright 2002-2006, John J. Lee - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import os, re -from types import StringType -from types import UnicodeType -STRING_TYPES = StringType, UnicodeType - -from _util import http2time -import _rfc3986 - - -def is_html_file_extension(url, allow_xhtml): - ext = os.path.splitext(_rfc3986.urlsplit(url)[2])[1] - html_exts = [".htm", ".html"] - if allow_xhtml: - html_exts += [".xhtml"] - return ext in html_exts - - -def is_html(ct_headers, url, allow_xhtml=False): - """ - ct_headers: Sequence of Content-Type headers - url: Response URL - - """ - if not ct_headers: - return is_html_file_extension(url, allow_xhtml) - headers = split_header_words(ct_headers) - if len(headers) < 1: - return is_html_file_extension(url, allow_xhtml) - first_header = headers[0] - first_parameter = first_header[0] - ct = first_parameter[0] - html_types = ["text/html"] - if allow_xhtml: - html_types += [ - "text/xhtml", "text/xml", - "application/xml", "application/xhtml+xml", - ] - return ct in html_types - - -def unmatched(match): - """Return unmatched part of re.Match object.""" - start, end = match.span(0) - return match.string[:start]+match.string[end:] - -token_re = re.compile(r"^\s*([^=\s;,]+)") -quoted_value_re = re.compile(r"^\s*=\s*\"([^\"\\]*(?:\\.[^\"\\]*)*)\"") -value_re = re.compile(r"^\s*=\s*([^\s;,]*)") -escape_re = re.compile(r"\\(.)") -def split_header_words(header_values): - r"""Parse header values into a list of lists containing key,value pairs. - - The function knows how to deal with ",", ";" and "=" as well as quoted - values after "=". A list of space separated tokens are parsed as if they - were separated by ";". - - If the header_values passed as argument contains multiple values, then they - are treated as if they were a single value separated by comma ",". - - This means that this function is useful for parsing header fields that - follow this syntax (BNF as from the HTTP/1.1 specification, but we relax - the requirement for tokens). - - headers = #header - header = (token | parameter) *( [";"] (token | parameter)) - - token = 1*<any CHAR except CTLs or separators> - separators = "(" | ")" | "<" | ">" | "@" - | "," | ";" | ":" | "\" | <"> - | "/" | "[" | "]" | "?" | "=" - | "{" | "}" | SP | HT - - quoted-string = ( <"> *(qdtext | quoted-pair ) <"> ) - qdtext = <any TEXT except <">> - quoted-pair = "\" CHAR - - parameter = attribute "=" value - attribute = token - value = token | quoted-string - - Each header is represented by a list of key/value pairs. The value for a - simple token (not part of a parameter) is None. Syntactically incorrect - headers will not necessarily be parsed as you would want. 
- - This is easier to describe with some examples: - - >>> split_header_words(['foo="bar"; port="80,81"; discard, bar=baz']) - [[('foo', 'bar'), ('port', '80,81'), ('discard', None)], [('bar', 'baz')]] - >>> split_header_words(['text/html; charset="iso-8859-1"']) - [[('text/html', None), ('charset', 'iso-8859-1')]] - >>> split_header_words([r'Basic realm="\"foo\bar\""']) - [[('Basic', None), ('realm', '"foobar"')]] - - """ - assert type(header_values) not in STRING_TYPES - result = [] - for text in header_values: - orig_text = text - pairs = [] - while text: - m = token_re.search(text) - if m: - text = unmatched(m) - name = m.group(1) - m = quoted_value_re.search(text) - if m: # quoted value - text = unmatched(m) - value = m.group(1) - value = escape_re.sub(r"\1", value) - else: - m = value_re.search(text) - if m: # unquoted value - text = unmatched(m) - value = m.group(1) - value = value.rstrip() - else: - # no value, a lone token - value = None - pairs.append((name, value)) - elif text.lstrip().startswith(","): - # concatenated headers, as per RFC 2616 section 4.2 - text = text.lstrip()[1:] - if pairs: result.append(pairs) - pairs = [] - else: - # skip junk - non_junk, nr_junk_chars = re.subn("^[=\s;]*", "", text) - assert nr_junk_chars > 0, ( - "split_header_words bug: '%s', '%s', %s" % - (orig_text, text, pairs)) - text = non_junk - if pairs: result.append(pairs) - return result - -join_escape_re = re.compile(r"([\"\\])") -def join_header_words(lists): - """Do the inverse of the conversion done by split_header_words. - - Takes a list of lists of (key, value) pairs and produces a single header - value. Attribute values are quoted if needed. - - >>> join_header_words([[("text/plain", None), ("charset", "iso-8859/1")]]) - 'text/plain; charset="iso-8859/1"' - >>> join_header_words([[("text/plain", None)], [("charset", "iso-8859/1")]]) - 'text/plain, charset="iso-8859/1"' - - """ - headers = [] - for pairs in lists: - attr = [] - for k, v in pairs: - if v is not None: - if not re.search(r"^\w+$", v): - v = join_escape_re.sub(r"\\\1", v) # escape " and \ - v = '"%s"' % v - if k is None: # Netscape cookies may have no name - k = v - else: - k = "%s=%s" % (k, v) - attr.append(k) - if attr: headers.append("; ".join(attr)) - return ", ".join(headers) - -def strip_quotes(text): - if text.startswith('"'): - text = text[1:] - if text.endswith('"'): - text = text[:-1] - return text - -def parse_ns_headers(ns_headers): - """Ad-hoc parser for Netscape protocol cookie-attributes. - - The old Netscape cookie format for Set-Cookie can for instance contain - an unquoted "," in the expires field, so we have to use this ad-hoc - parser instead of split_header_words. - - XXX This may not make the best possible effort to parse all the crap - that Netscape Cookie headers contain. Ronald Tschalar's HTTPClient - parser is probably better, so could do worse than following that if - this ever gives any trouble. - - Currently, this is also used for parsing RFC 2109 cookies. 
- - """ - known_attrs = ("expires", "domain", "path", "secure", - # RFC 2109 attrs (may turn up in Netscape cookies, too) - "version", "port", "max-age") - - result = [] - for ns_header in ns_headers: - pairs = [] - version_set = False - params = re.split(r";\s*", ns_header) - for ii in range(len(params)): - param = params[ii] - param = param.rstrip() - if param == "": continue - if "=" not in param: - k, v = param, None - else: - k, v = re.split(r"\s*=\s*", param, 1) - k = k.lstrip() - if ii != 0: - lc = k.lower() - if lc in known_attrs: - k = lc - if k == "version": - # This is an RFC 2109 cookie. - v = strip_quotes(v) - version_set = True - if k == "expires": - # convert expires date to seconds since epoch - v = http2time(strip_quotes(v)) # None if invalid - pairs.append((k, v)) - - if pairs: - if not version_set: - pairs.append(("version", "0")) - result.append(pairs) - - return result - - -def _test(): - import doctest, _headersutil - return doctest.testmod(_headersutil) - -if __name__ == "__main__": - _test() diff --git a/plugin.video.alfa/lib/mechanize/_html.py b/plugin.video.alfa/lib/mechanize/_html.py deleted file mode 100755 index 745f290c..00000000 --- a/plugin.video.alfa/lib/mechanize/_html.py +++ /dev/null @@ -1,629 +0,0 @@ -"""HTML handling. - -Copyright 2003-2006 John J. Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it under -the terms of the BSD or ZPL 2.1 licenses (see the file COPYING.txt -included with the distribution). - -""" - -import codecs -import copy -import htmlentitydefs -import re - -import _sgmllib_copy as sgmllib - -import _beautifulsoup -import _form -from _headersutil import split_header_words, is_html as _is_html -import _request -import _rfc3986 - -DEFAULT_ENCODING = "latin-1" - -COMPRESS_RE = re.compile(r"\s+") - - -class CachingGeneratorFunction(object): - """Caching wrapper around a no-arguments iterable.""" - - def __init__(self, iterable): - self._cache = [] - # wrap iterable to make it non-restartable (otherwise, repeated - # __call__ would give incorrect results) - self._iterator = iter(iterable) - - def __call__(self): - cache = self._cache - for item in cache: - yield item - for item in self._iterator: - cache.append(item) - yield item - - -class EncodingFinder: - def __init__(self, default_encoding): - self._default_encoding = default_encoding - def encoding(self, response): - # HTTPEquivProcessor may be in use, so both HTTP and HTTP-EQUIV - # headers may be in the response. HTTP-EQUIV headers come last, - # so try in order from first to last. 
- for ct in response.info().getheaders("content-type"): - for k, v in split_header_words([ct])[0]: - if k == "charset": - encoding = v - try: - codecs.lookup(v) - except LookupError: - continue - else: - return encoding - return self._default_encoding - - -class ResponseTypeFinder: - def __init__(self, allow_xhtml): - self._allow_xhtml = allow_xhtml - def is_html(self, response, encoding): - ct_hdrs = response.info().getheaders("content-type") - url = response.geturl() - # XXX encoding - return _is_html(ct_hdrs, url, self._allow_xhtml) - - -class Args(object): - - # idea for this argument-processing trick is from Peter Otten - - def __init__(self, args_map): - self.__dict__["dictionary"] = dict(args_map) - - def __getattr__(self, key): - try: - return self.dictionary[key] - except KeyError: - return getattr(self.__class__, key) - - def __setattr__(self, key, value): - if key == "dictionary": - raise AttributeError() - self.dictionary[key] = value - - -def form_parser_args( - select_default=False, - form_parser_class=None, - request_class=None, - backwards_compat=False, - ): - return Args(locals()) - - -class Link: - def __init__(self, base_url, url, text, tag, attrs): - assert None not in [url, tag, attrs] - self.base_url = base_url - self.absolute_url = _rfc3986.urljoin(base_url, url) - self.url, self.text, self.tag, self.attrs = url, text, tag, attrs - def __cmp__(self, other): - try: - for name in "url", "text", "tag", "attrs": - if getattr(self, name) != getattr(other, name): - return -1 - except AttributeError: - return -1 - return 0 - def __repr__(self): - return "Link(base_url=%r, url=%r, text=%r, tag=%r, attrs=%r)" % ( - self.base_url, self.url, self.text, self.tag, self.attrs) - - -class LinksFactory: - - def __init__(self, - link_parser_class=None, - link_class=Link, - urltags=None, - ): - import _pullparser - if link_parser_class is None: - link_parser_class = _pullparser.TolerantPullParser - self.link_parser_class = link_parser_class - self.link_class = link_class - if urltags is None: - urltags = { - "a": "href", - "area": "href", - "frame": "src", - "iframe": "src", - } - self.urltags = urltags - self._response = None - self._encoding = None - - def set_response(self, response, base_url, encoding): - self._response = response - self._encoding = encoding - self._base_url = base_url - - def links(self): - """Return an iterator that provides links of the document.""" - response = self._response - encoding = self._encoding - base_url = self._base_url - p = self.link_parser_class(response, encoding=encoding) - - try: - for token in p.tags(*(self.urltags.keys()+["base"])): - if token.type == "endtag": - continue - if token.data == "base": - base_href = dict(token.attrs).get("href") - if base_href is not None: - base_url = base_href - continue - attrs = dict(token.attrs) - tag = token.data - text = None - # XXX use attr_encoding for ref'd doc if that doc does not - # provide one by other means - #attr_encoding = attrs.get("charset") - url = attrs.get(self.urltags[tag]) # XXX is "" a valid URL? - if not url: - # Probably an <A NAME="blah"> link or <AREA NOHREF...>. - # For our purposes a link is something with a URL, so - # ignore this. - continue - - url = _rfc3986.clean_url(url, encoding) - if tag == "a": - if token.type != "startendtag": - # hmm, this'd break if end tag is missing - text = p.get_compressed_text(("endtag", tag)) - # but this doesn't work for e.g. 
- # <a href="blah"><b>Andy</b></a> - #text = p.get_compressed_text() - - yield Link(base_url, url, text, tag, token.attrs) - except sgmllib.SGMLParseError, exc: - raise _form.ParseError(exc) - -class FormsFactory: - - """Makes a sequence of objects satisfying HTMLForm interface. - - After calling .forms(), the .global_form attribute is a form object - containing all controls not a descendant of any FORM element. - - For constructor argument docs, see ParseResponse argument docs. - """ - - def __init__(self, - select_default=False, - form_parser_class=None, - request_class=None, - backwards_compat=False, - ): - self.select_default = select_default - if form_parser_class is None: - form_parser_class = _form.FormParser - self.form_parser_class = form_parser_class - if request_class is None: - request_class = _request.Request - self.request_class = request_class - self.backwards_compat = backwards_compat - self._response = None - self.encoding = None - self.global_form = None - - def set_response(self, response, encoding): - self._response = response - self.encoding = encoding - self.global_form = None - - def forms(self): - encoding = self.encoding - forms = _form.ParseResponseEx( - self._response, - select_default=self.select_default, - form_parser_class=self.form_parser_class, - request_class=self.request_class, - encoding=encoding, - _urljoin=_rfc3986.urljoin, - _urlparse=_rfc3986.urlsplit, - _urlunparse=_rfc3986.urlunsplit, - ) - self.global_form = forms[0] - return forms[1:] - -class TitleFactory: - def __init__(self): - self._response = self._encoding = None - - def set_response(self, response, encoding): - self._response = response - self._encoding = encoding - - def _get_title_text(self, parser): - import _pullparser - text = [] - tok = None - while 1: - try: - tok = parser.get_token() - except _pullparser.NoMoreTokensError: - break - if tok.type == "data": - text.append(str(tok)) - elif tok.type == "entityref": - t = unescape("&%s;" % tok.data, - parser._entitydefs, parser.encoding) - text.append(t) - elif tok.type == "charref": - t = unescape_charref(tok.data, parser.encoding) - text.append(t) - elif tok.type in ["starttag", "endtag", "startendtag"]: - tag_name = tok.data - if tok.type == "endtag" and tag_name == "title": - break - text.append(str(tok)) - return COMPRESS_RE.sub(" ", "".join(text).strip()) - - def title(self): - import _pullparser - p = _pullparser.TolerantPullParser( - self._response, encoding=self._encoding) - try: - try: - p.get_tag("title") - except _pullparser.NoMoreTokensError: - return None - else: - return self._get_title_text(p) - except sgmllib.SGMLParseError, exc: - raise _form.ParseError(exc) - - -def unescape(data, entities, encoding): - if data is None or "&" not in data: - return data - - def replace_entities(match): - ent = match.group() - if ent[1] == "#": - return unescape_charref(ent[2:-1], encoding) - - repl = entities.get(ent[1:-1]) - if repl is not None: - repl = unichr(repl) - if type(repl) != type(""): - try: - repl = repl.encode(encoding) - except UnicodeError: - repl = ent - else: - repl = ent - return repl - - return re.sub(r"&#?[A-Za-z0-9]+?;", replace_entities, data) - -def unescape_charref(data, encoding): - name, base = data, 10 - if name.startswith("x"): - name, base= name[1:], 16 - uc = unichr(int(name, base)) - if encoding is None: - return uc - else: - try: - repl = uc.encode(encoding) - except UnicodeError: - repl = "&#%s;" % data - return repl - - -class MechanizeBs(_beautifulsoup.BeautifulSoup): - _entitydefs = 
htmlentitydefs.name2codepoint - # don't want the magic Microsoft-char workaround - PARSER_MASSAGE = [(re.compile('(<[^<>]*)/>'), - lambda(x):x.group(1) + ' />'), - (re.compile('<!\s+([^<>]*)>'), - lambda(x):'<!' + x.group(1) + '>') - ] - - def __init__(self, encoding, text=None, avoidParserProblems=True, - initialTextIsEverything=True): - self._encoding = encoding - _beautifulsoup.BeautifulSoup.__init__( - self, text, avoidParserProblems, initialTextIsEverything) - - def handle_charref(self, ref): - t = unescape("&#%s;"%ref, self._entitydefs, self._encoding) - self.handle_data(t) - def handle_entityref(self, ref): - t = unescape("&%s;"%ref, self._entitydefs, self._encoding) - self.handle_data(t) - def unescape_attrs(self, attrs): - escaped_attrs = [] - for key, val in attrs: - val = unescape(val, self._entitydefs, self._encoding) - escaped_attrs.append((key, val)) - return escaped_attrs - -class RobustLinksFactory: - - compress_re = COMPRESS_RE - - def __init__(self, - link_parser_class=None, - link_class=Link, - urltags=None, - ): - if link_parser_class is None: - link_parser_class = MechanizeBs - self.link_parser_class = link_parser_class - self.link_class = link_class - if urltags is None: - urltags = { - "a": "href", - "area": "href", - "frame": "src", - "iframe": "src", - } - self.urltags = urltags - self._bs = None - self._encoding = None - self._base_url = None - - def set_soup(self, soup, base_url, encoding): - self._bs = soup - self._base_url = base_url - self._encoding = encoding - - def links(self): - bs = self._bs - base_url = self._base_url - encoding = self._encoding - for ch in bs.recursiveChildGenerator(): - if (isinstance(ch, _beautifulsoup.Tag) and - ch.name in self.urltags.keys()+["base"]): - link = ch - attrs = bs.unescape_attrs(link.attrs) - attrs_dict = dict(attrs) - if link.name == "base": - base_href = attrs_dict.get("href") - if base_href is not None: - base_url = base_href - continue - url_attr = self.urltags[link.name] - url = attrs_dict.get(url_attr) - if not url: - continue - url = _rfc3986.clean_url(url, encoding) - text = link.fetchText(lambda t: True) - if not text: - # follow _pullparser's weird behaviour rigidly - if link.name == "a": - text = "" - else: - text = None - else: - text = self.compress_re.sub(" ", " ".join(text).strip()) - yield Link(base_url, url, text, link.name, attrs) - - -class RobustFormsFactory(FormsFactory): - def __init__(self, *args, **kwds): - args = form_parser_args(*args, **kwds) - if args.form_parser_class is None: - args.form_parser_class = _form.RobustFormParser - FormsFactory.__init__(self, **args.dictionary) - - def set_response(self, response, encoding): - self._response = response - self.encoding = encoding - - -class RobustTitleFactory: - def __init__(self): - self._bs = self._encoding = None - - def set_soup(self, soup, encoding): - self._bs = soup - self._encoding = encoding - - def title(self): - title = self._bs.first("title") - if title == _beautifulsoup.Null: - return None - else: - inner_html = "".join([str(node) for node in title.contents]) - return COMPRESS_RE.sub(" ", inner_html.strip()) - - -class Factory: - """Factory for forms, links, etc. - - This interface may expand in future. - - Public methods: - - set_request_class(request_class) - set_response(response) - forms() - links() - - Public attributes: - - Note that accessing these attributes may raise ParseError. 
- - encoding: string specifying the encoding of response if it contains a text - document (this value is left unspecified for documents that do not have - an encoding, e.g. an image file) - is_html: true if response contains an HTML document (XHTML may be - regarded as HTML too) - title: page title, or None if no title or not HTML - global_form: form object containing all controls that are not descendants - of any FORM element, or None if the forms_factory does not support - supplying a global form - - """ - - LAZY_ATTRS = ["encoding", "is_html", "title", "global_form"] - - def __init__(self, forms_factory, links_factory, title_factory, - encoding_finder=EncodingFinder(DEFAULT_ENCODING), - response_type_finder=ResponseTypeFinder(allow_xhtml=False), - ): - """ - - Pass keyword arguments only. - - default_encoding: character encoding to use if encoding cannot be - determined (or guessed) from the response. You should turn on - HTTP-EQUIV handling if you want the best chance of getting this right - without resorting to this default. The default value of this - parameter (currently latin-1) may change in future. - - """ - self._forms_factory = forms_factory - self._links_factory = links_factory - self._title_factory = title_factory - self._encoding_finder = encoding_finder - self._response_type_finder = response_type_finder - - self.set_response(None) - - def set_request_class(self, request_class): - """Set request class (mechanize.Request by default). - - HTMLForm instances returned by .forms() will return instances of this - class when .click()ed. - - """ - self._forms_factory.request_class = request_class - - def set_response(self, response): - """Set response. - - The response must either be None or implement the same interface as - objects returned by mechanize.urlopen(). - - """ - self._response = response - self._forms_genf = self._links_genf = None - self._get_title = None - for name in self.LAZY_ATTRS: - try: - delattr(self, name) - except AttributeError: - pass - - def __getattr__(self, name): - if name not in self.LAZY_ATTRS: - return getattr(self.__class__, name) - - if name == "encoding": - self.encoding = self._encoding_finder.encoding( - copy.copy(self._response)) - return self.encoding - elif name == "is_html": - self.is_html = self._response_type_finder.is_html( - copy.copy(self._response), self.encoding) - return self.is_html - elif name == "title": - if self.is_html: - self.title = self._title_factory.title() - else: - self.title = None - return self.title - elif name == "global_form": - self.forms() - return self.global_form - - def forms(self): - """Return iterable over HTMLForm-like objects. - - Raises mechanize.ParseError on failure. - """ - # this implementation sets .global_form as a side-effect, for benefit - # of __getattr__ impl - if self._forms_genf is None: - try: - self._forms_genf = CachingGeneratorFunction( - self._forms_factory.forms()) - except: # XXXX define exception! - self.set_response(self._response) - raise - self.global_form = getattr( - self._forms_factory, "global_form", None) - return self._forms_genf() - - def links(self): - """Return iterable over mechanize.Link-like objects. - - Raises mechanize.ParseError on failure. - """ - if self._links_genf is None: - try: - self._links_genf = CachingGeneratorFunction( - self._links_factory.links()) - except: # XXXX define exception! 
- self.set_response(self._response) - raise - return self._links_genf() - -class DefaultFactory(Factory): - """Based on sgmllib.""" - def __init__(self, i_want_broken_xhtml_support=False): - Factory.__init__( - self, - forms_factory=FormsFactory(), - links_factory=LinksFactory(), - title_factory=TitleFactory(), - response_type_finder=ResponseTypeFinder( - allow_xhtml=i_want_broken_xhtml_support), - ) - - def set_response(self, response): - Factory.set_response(self, response) - if response is not None: - self._forms_factory.set_response( - copy.copy(response), self.encoding) - self._links_factory.set_response( - copy.copy(response), response.geturl(), self.encoding) - self._title_factory.set_response( - copy.copy(response), self.encoding) - -class RobustFactory(Factory): - """Based on BeautifulSoup, hopefully a bit more robust to bad HTML than is - DefaultFactory. - - """ - def __init__(self, i_want_broken_xhtml_support=False, - soup_class=None): - Factory.__init__( - self, - forms_factory=RobustFormsFactory(), - links_factory=RobustLinksFactory(), - title_factory=RobustTitleFactory(), - response_type_finder=ResponseTypeFinder( - allow_xhtml=i_want_broken_xhtml_support), - ) - if soup_class is None: - soup_class = MechanizeBs - self._soup_class = soup_class - - def set_response(self, response): - Factory.set_response(self, response) - if response is not None: - data = response.read() - soup = self._soup_class(self.encoding, data) - self._forms_factory.set_response( - copy.copy(response), self.encoding) - self._links_factory.set_soup( - soup, response.geturl(), self.encoding) - self._title_factory.set_soup(soup, self.encoding) diff --git a/plugin.video.alfa/lib/mechanize/_http.py b/plugin.video.alfa/lib/mechanize/_http.py deleted file mode 100755 index c61f9c35..00000000 --- a/plugin.video.alfa/lib/mechanize/_http.py +++ /dev/null @@ -1,447 +0,0 @@ -"""HTTP related handlers. - -Note that some other HTTP handlers live in more specific modules: _auth.py, -_gzip.py, etc. - - -Copyright 2002-2006 John J Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import HTMLParser -from cStringIO import StringIO -import htmlentitydefs -import logging -import robotparser -import socket -import time - -import _sgmllib_copy as sgmllib -from _urllib2_fork import HTTPError, BaseHandler - -from _headersutil import is_html -from _html import unescape, unescape_charref -from _request import Request -from _response import response_seek_wrapper -import _rfc3986 -import _sockettimeout - -debug = logging.getLogger("mechanize").debug -debug_robots = logging.getLogger("mechanize.robots").debug - -# monkeypatch urllib2.HTTPError to show URL -## import urllib2 -## def urllib2_str(self): -## return 'HTTP Error %s: %s (%s)' % ( -## self.code, self.msg, self.geturl()) -## urllib2.HTTPError.__str__ = urllib2_str - - -CHUNK = 1024 # size of chunks fed to HTML HEAD parser, in bytes -DEFAULT_ENCODING = 'latin-1' - -# XXX would self.reset() work, instead of raising this exception? 
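The factories above are normally used indirectly, through mechanize.Browser, which accepts a factory instance. A small illustrative sketch (the URL is hypothetical); RobustFactory simply swaps in the BeautifulSoup-based parsers defined earlier:

    import mechanize

    browser = mechanize.Browser(factory=mechanize.RobustFactory())
    browser.open("http://example.com/")      # hypothetical URL
    print browser.title()                    # produced by the title factory
    for link in browser.links():             # produced by the links factory
        print link.url, link.text
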
-class EndOfHeadError(Exception): pass -class AbstractHeadParser: - # only these elements are allowed in or before HEAD of document - head_elems = ("html", "head", - "title", "base", - "script", "style", "meta", "link", "object") - _entitydefs = htmlentitydefs.name2codepoint - _encoding = DEFAULT_ENCODING - - def __init__(self): - self.http_equiv = [] - - def start_meta(self, attrs): - http_equiv = content = None - for key, value in attrs: - if key == "http-equiv": - http_equiv = self.unescape_attr_if_required(value) - elif key == "content": - content = self.unescape_attr_if_required(value) - if http_equiv is not None and content is not None: - self.http_equiv.append((http_equiv, content)) - - def end_head(self): - raise EndOfHeadError() - - def handle_entityref(self, name): - #debug("%s", name) - self.handle_data(unescape( - '&%s;' % name, self._entitydefs, self._encoding)) - - def handle_charref(self, name): - #debug("%s", name) - self.handle_data(unescape_charref(name, self._encoding)) - - def unescape_attr(self, name): - #debug("%s", name) - return unescape(name, self._entitydefs, self._encoding) - - def unescape_attrs(self, attrs): - #debug("%s", attrs) - escaped_attrs = {} - for key, val in attrs.items(): - escaped_attrs[key] = self.unescape_attr(val) - return escaped_attrs - - def unknown_entityref(self, ref): - self.handle_data("&%s;" % ref) - - def unknown_charref(self, ref): - self.handle_data("&#%s;" % ref) - - -class XHTMLCompatibleHeadParser(AbstractHeadParser, - HTMLParser.HTMLParser): - def __init__(self): - HTMLParser.HTMLParser.__init__(self) - AbstractHeadParser.__init__(self) - - def handle_starttag(self, tag, attrs): - if tag not in self.head_elems: - raise EndOfHeadError() - try: - method = getattr(self, 'start_' + tag) - except AttributeError: - try: - method = getattr(self, 'do_' + tag) - except AttributeError: - pass # unknown tag - else: - method(attrs) - else: - method(attrs) - - def handle_endtag(self, tag): - if tag not in self.head_elems: - raise EndOfHeadError() - try: - method = getattr(self, 'end_' + tag) - except AttributeError: - pass # unknown tag - else: - method() - - def unescape(self, name): - # Use the entitydefs passed into constructor, not - # HTMLParser.HTMLParser's entitydefs. 
- return self.unescape_attr(name) - - def unescape_attr_if_required(self, name): - return name # HTMLParser.HTMLParser already did it - -class HeadParser(AbstractHeadParser, sgmllib.SGMLParser): - - def _not_called(self): - assert False - - def __init__(self): - sgmllib.SGMLParser.__init__(self) - AbstractHeadParser.__init__(self) - - def handle_starttag(self, tag, method, attrs): - if tag not in self.head_elems: - raise EndOfHeadError() - if tag == "meta": - method(attrs) - - def unknown_starttag(self, tag, attrs): - self.handle_starttag(tag, self._not_called, attrs) - - def handle_endtag(self, tag, method): - if tag in self.head_elems: - method() - else: - raise EndOfHeadError() - - def unescape_attr_if_required(self, name): - return self.unescape_attr(name) - -def parse_head(fileobj, parser): - """Return a list of key, value pairs.""" - while 1: - data = fileobj.read(CHUNK) - try: - parser.feed(data) - except EndOfHeadError: - break - if len(data) != CHUNK: - # this should only happen if there is no HTML body, or if - # CHUNK is big - break - return parser.http_equiv - -class HTTPEquivProcessor(BaseHandler): - """Append META HTTP-EQUIV headers to regular HTTP headers.""" - - handler_order = 300 # before handlers that look at HTTP headers - - def __init__(self, head_parser_class=HeadParser, - i_want_broken_xhtml_support=False, - ): - self.head_parser_class = head_parser_class - self._allow_xhtml = i_want_broken_xhtml_support - - def http_response(self, request, response): - if not hasattr(response, "seek"): - response = response_seek_wrapper(response) - http_message = response.info() - url = response.geturl() - ct_hdrs = http_message.getheaders("content-type") - if is_html(ct_hdrs, url, self._allow_xhtml): - try: - try: - html_headers = parse_head(response, - self.head_parser_class()) - finally: - response.seek(0) - except (HTMLParser.HTMLParseError, - sgmllib.SGMLParseError): - pass - else: - for hdr, val in html_headers: - # add a header - http_message.dict[hdr.lower()] = val - text = hdr + ": " + val - for line in text.split("\n"): - http_message.headers.append(line + "\n") - return response - - https_response = http_response - - -class MechanizeRobotFileParser(robotparser.RobotFileParser): - - def __init__(self, url='', opener=None): - robotparser.RobotFileParser.__init__(self, url) - self._opener = opener - self._timeout = _sockettimeout._GLOBAL_DEFAULT_TIMEOUT - - def set_opener(self, opener=None): - import _opener - if opener is None: - opener = _opener.OpenerDirector() - self._opener = opener - - def set_timeout(self, timeout): - self._timeout = timeout - - def read(self): - """Reads the robots.txt URL and feeds it to the parser.""" - if self._opener is None: - self.set_opener() - req = Request(self.url, unverifiable=True, visit=False, - timeout=self._timeout) - try: - f = self._opener.open(req) - except HTTPError, f: - pass - except (IOError, socket.error, OSError), exc: - debug_robots("ignoring error opening %r: %s" % - (self.url, exc)) - return - lines = [] - line = f.readline() - while line: - lines.append(line.strip()) - line = f.readline() - status = f.code - if status == 401 or status == 403: - self.disallow_all = True - debug_robots("disallow all") - elif status >= 400: - self.allow_all = True - debug_robots("allow all") - elif status == 200 and lines: - debug_robots("parse lines") - self.parse(lines) - -class RobotExclusionError(HTTPError): - def __init__(self, request, *args): - apply(HTTPError.__init__, (self,)+args) - self.request = request - -class 
HTTPRobotRulesProcessor(BaseHandler): - # before redirections, after everything else - handler_order = 800 - - try: - from httplib import HTTPMessage - except: - from mimetools import Message - http_response_class = Message - else: - http_response_class = HTTPMessage - - def __init__(self, rfp_class=MechanizeRobotFileParser): - self.rfp_class = rfp_class - self.rfp = None - self._host = None - - def http_request(self, request): - scheme = request.get_type() - if scheme not in ["http", "https"]: - # robots exclusion only applies to HTTP - return request - - if request.get_selector() == "/robots.txt": - # /robots.txt is always OK to fetch - return request - - host = request.get_host() - - # robots.txt requests don't need to be allowed by robots.txt :-) - origin_req = getattr(request, "_origin_req", None) - if (origin_req is not None and - origin_req.get_selector() == "/robots.txt" and - origin_req.get_host() == host - ): - return request - - if host != self._host: - self.rfp = self.rfp_class() - try: - self.rfp.set_opener(self.parent) - except AttributeError: - debug("%r instance does not support set_opener" % - self.rfp.__class__) - self.rfp.set_url(scheme+"://"+host+"/robots.txt") - self.rfp.set_timeout(request.timeout) - self.rfp.read() - self._host = host - - ua = request.get_header("User-agent", "") - if self.rfp.can_fetch(ua, request.get_full_url()): - return request - else: - # XXX This should really have raised URLError. Too late now... - msg = "request disallowed by robots.txt" - raise RobotExclusionError( - request, - request.get_full_url(), - 403, msg, - self.http_response_class(StringIO()), StringIO(msg)) - - https_request = http_request - -class HTTPRefererProcessor(BaseHandler): - """Add Referer header to requests. - - This only makes sense if you use each RefererProcessor for a single - chain of requests only (so, for example, if you use a single - HTTPRefererProcessor to fetch a series of URLs extracted from a single - page, this will break). - - There's a proper implementation of this in mechanize.Browser. - - """ - def __init__(self): - self.referer = None - - def http_request(self, request): - if ((self.referer is not None) and - not request.has_header("Referer")): - request.add_unredirected_header("Referer", self.referer) - return request - - def http_response(self, request, response): - self.referer = response.geturl() - return response - - https_request = http_request - https_response = http_response - - -def clean_refresh_url(url): - # e.g. 
Firefox 1.5 does (something like) this - if ((url.startswith('"') and url.endswith('"')) or - (url.startswith("'") and url.endswith("'"))): - url = url[1:-1] - return _rfc3986.clean_url(url, "latin-1") # XXX encoding - -def parse_refresh_header(refresh): - """ - >>> parse_refresh_header("1; url=http://example.com/") - (1.0, 'http://example.com/') - >>> parse_refresh_header("1; url='http://example.com/'") - (1.0, 'http://example.com/') - >>> parse_refresh_header("1") - (1.0, None) - >>> parse_refresh_header("blah") # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - ValueError: invalid literal for float(): blah - - """ - - ii = refresh.find(";") - if ii != -1: - pause, newurl_spec = float(refresh[:ii]), refresh[ii+1:] - jj = newurl_spec.find("=") - key = None - if jj != -1: - key, newurl = newurl_spec[:jj], newurl_spec[jj+1:] - newurl = clean_refresh_url(newurl) - if key is None or key.strip().lower() != "url": - raise ValueError() - else: - pause, newurl = float(refresh), None - return pause, newurl - -class HTTPRefreshProcessor(BaseHandler): - """Perform HTTP Refresh redirections. - - Note that if a non-200 HTTP code has occurred (for example, a 30x - redirect), this processor will do nothing. - - By default, only zero-time Refresh headers are redirected. Use the - max_time attribute / constructor argument to allow Refresh with longer - pauses. Use the honor_time attribute / constructor argument to control - whether the requested pause is honoured (with a time.sleep()) or - skipped in favour of immediate redirection. - - Public attributes: - - max_time: see above - honor_time: see above - - """ - handler_order = 1000 - - def __init__(self, max_time=0, honor_time=True): - self.max_time = max_time - self.honor_time = honor_time - self._sleep = time.sleep - - def http_response(self, request, response): - code, msg, hdrs = response.code, response.msg, response.info() - - if code == 200 and hdrs.has_key("refresh"): - refresh = hdrs.getheaders("refresh")[0] - try: - pause, newurl = parse_refresh_header(refresh) - except ValueError: - debug("bad Refresh header: %r" % refresh) - return response - - if newurl is None: - newurl = response.geturl() - if (self.max_time is None) or (pause <= self.max_time): - if pause > 1E-3 and self.honor_time: - self._sleep(pause) - hdrs["location"] = newurl - # hardcoded http is NOT a bug - response = self.parent.error( - "http", request, response, - "refresh", msg, hdrs) - else: - debug("Refresh header ignored: %r" % refresh) - - return response - - https_response = http_response diff --git a/plugin.video.alfa/lib/mechanize/_lwpcookiejar.py b/plugin.video.alfa/lib/mechanize/_lwpcookiejar.py deleted file mode 100755 index 703561e4..00000000 --- a/plugin.video.alfa/lib/mechanize/_lwpcookiejar.py +++ /dev/null @@ -1,185 +0,0 @@ -"""Load / save to libwww-perl (LWP) format files. - -Actually, the format is slightly extended from that used by LWP's -(libwww-perl's) HTTP::Cookies, to avoid losing some RFC 2965 information -not recorded by LWP. - -It uses the version string "2.0", though really there isn't an LWP Cookies -2.0 format. This indicates that there is extra information in here -(domain_dot and port_spec) while still being compatible with libwww-perl, -I hope. 
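# Editorial sketch (not part of the deleted module): how the Refresh
# machinery above is typically wired up.  parse_refresh_header() is the
# helper doctested earlier; HTTPRefreshProcessor and build_opener are
# assumed to be re-exported by the mechanize package, as in the
# MSIECookieJar docstring further below.  The URL is illustrative.
import mechanize

print(parse_refresh_header("5; url=http://example.com/next"))
# -> (5.0, 'http://example.com/next')

# Follow Refresh headers that ask for pauses of up to 10 seconds, but
# redirect immediately instead of sleeping:
opener = mechanize.build_opener(
    mechanize.HTTPRefreshProcessor(max_time=10, honor_time=False))
response = opener.open("http://example.com/")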
- -Copyright 2002-2006 John J Lee <jjl@pobox.com> -Copyright 1997-1999 Gisle Aas (original libwww-perl code) - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import time, re, logging - -from _clientcookie import reraise_unmasked_exceptions, FileCookieJar, Cookie, \ - MISSING_FILENAME_TEXT, LoadError -from _headersutil import join_header_words, split_header_words -from _util import iso2time, time2isoz - -debug = logging.getLogger("mechanize").debug - - -def lwp_cookie_str(cookie): - """Return string representation of Cookie in an the LWP cookie file format. - - Actually, the format is extended a bit -- see module docstring. - - """ - h = [(cookie.name, cookie.value), - ("path", cookie.path), - ("domain", cookie.domain)] - if cookie.port is not None: h.append(("port", cookie.port)) - if cookie.path_specified: h.append(("path_spec", None)) - if cookie.port_specified: h.append(("port_spec", None)) - if cookie.domain_initial_dot: h.append(("domain_dot", None)) - if cookie.secure: h.append(("secure", None)) - if cookie.expires: h.append(("expires", - time2isoz(float(cookie.expires)))) - if cookie.discard: h.append(("discard", None)) - if cookie.comment: h.append(("comment", cookie.comment)) - if cookie.comment_url: h.append(("commenturl", cookie.comment_url)) - if cookie.rfc2109: h.append(("rfc2109", None)) - - keys = cookie.nonstandard_attr_keys() - keys.sort() - for k in keys: - h.append((k, str(cookie.get_nonstandard_attr(k)))) - - h.append(("version", str(cookie.version))) - - return join_header_words([h]) - -class LWPCookieJar(FileCookieJar): - """ - The LWPCookieJar saves a sequence of"Set-Cookie3" lines. - "Set-Cookie3" is the format used by the libwww-perl libary, not known - to be compatible with any browser, but which is easy to read and - doesn't lose information about RFC 2965 cookies. - - Additional methods - - as_lwp_str(ignore_discard=True, ignore_expired=True) - - """ - - magic_re = r"^\#LWP-Cookies-(\d+\.\d+)" - - def as_lwp_str(self, ignore_discard=True, ignore_expires=True): - """Return cookies as a string of "\n"-separated "Set-Cookie3" headers. - - ignore_discard and ignore_expires: see docstring for FileCookieJar.save - - """ - now = time.time() - r = [] - for cookie in self: - if not ignore_discard and cookie.discard: - debug(" Not saving %s: marked for discard", cookie.name) - continue - if not ignore_expires and cookie.is_expired(now): - debug(" Not saving %s: expired", cookie.name) - continue - r.append("Set-Cookie3: %s" % lwp_cookie_str(cookie)) - return "\n".join(r+[""]) - - def save(self, filename=None, ignore_discard=False, ignore_expires=False): - if filename is None: - if self.filename is not None: filename = self.filename - else: raise ValueError(MISSING_FILENAME_TEXT) - - f = open(filename, "w") - try: - debug("Saving LWP cookies file") - # There really isn't an LWP Cookies 2.0 format, but this indicates - # that there is extra information in here (domain_dot and - # port_spec) while still being compatible with libwww-perl, I hope. 
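# Editorial sketch (not part of the deleted module): a round trip through
# the Set-Cookie3 format written out below.  LWPCookieJar and
# HTTPCookieProcessor are assumed to be re-exported by the mechanize
# package; the URL and filename are illustrative.
import mechanize

cj = mechanize.LWPCookieJar()
opener = mechanize.build_opener(mechanize.HTTPCookieProcessor(cj))
opener.open("http://example.com/")
cj.save("cookies.lwp", ignore_discard=True, ignore_expires=True)
cj.load("cookies.lwp")   # _really_load() below parses the Set-Cookie3 lines back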
- f.write("#LWP-Cookies-2.0\n") - f.write(self.as_lwp_str(ignore_discard, ignore_expires)) - finally: - f.close() - - def _really_load(self, f, filename, ignore_discard, ignore_expires): - magic = f.readline() - if not re.search(self.magic_re, magic): - msg = "%s does not seem to contain cookies" % filename - raise LoadError(msg) - - now = time.time() - - header = "Set-Cookie3:" - boolean_attrs = ("port_spec", "path_spec", "domain_dot", - "secure", "discard", "rfc2109") - value_attrs = ("version", - "port", "path", "domain", - "expires", - "comment", "commenturl") - - try: - while 1: - line = f.readline() - if line == "": break - if not line.startswith(header): - continue - line = line[len(header):].strip() - - for data in split_header_words([line]): - name, value = data[0] - standard = {} - rest = {} - for k in boolean_attrs: - standard[k] = False - for k, v in data[1:]: - if k is not None: - lc = k.lower() - else: - lc = None - # don't lose case distinction for unknown fields - if (lc in value_attrs) or (lc in boolean_attrs): - k = lc - if k in boolean_attrs: - if v is None: v = True - standard[k] = v - elif k in value_attrs: - standard[k] = v - else: - rest[k] = v - - h = standard.get - expires = h("expires") - discard = h("discard") - if expires is not None: - expires = iso2time(expires) - if expires is None: - discard = True - domain = h("domain") - domain_specified = domain.startswith(".") - c = Cookie(h("version"), name, value, - h("port"), h("port_spec"), - domain, domain_specified, h("domain_dot"), - h("path"), h("path_spec"), - h("secure"), - expires, - discard, - h("comment"), - h("commenturl"), - rest, - h("rfc2109"), - ) - if not ignore_discard and c.discard: - continue - if not ignore_expires and c.is_expired(now): - continue - self.set_cookie(c) - except: - reraise_unmasked_exceptions((IOError,)) - raise LoadError("invalid Set-Cookie3 format file %s" % filename) - diff --git a/plugin.video.alfa/lib/mechanize/_markupbase.py b/plugin.video.alfa/lib/mechanize/_markupbase.py deleted file mode 100755 index 1b4c9a77..00000000 --- a/plugin.video.alfa/lib/mechanize/_markupbase.py +++ /dev/null @@ -1,393 +0,0 @@ -# Taken from Python 2.6.4 for use by _sgmllib.py -"""Shared support for scanning document type declarations in HTML and XHTML. - -This module is used as a foundation for the HTMLParser and sgmllib -modules (indirectly, for htmllib as well). It has no documented -public API and should not be used directly. - -""" - -import re - -_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match -_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match -_commentclose = re.compile(r'--\s*>') -_markedsectionclose = re.compile(r']\s*]\s*>') - -# An analysis of the MS-Word extensions is available at -# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf - -_msmarkedsectionclose = re.compile(r']\s*>') - -del re - - -class ParserBase: - """Parser base class which provides some common support methods used - by the SGML/HTML and XHTML parsers.""" - - def __init__(self): - if self.__class__ is ParserBase: - raise RuntimeError( - "markupbase.ParserBase must be subclassed") - - def error(self, message): - raise NotImplementedError( - "subclasses of ParserBase must override error()") - - def reset(self): - self.lineno = 1 - self.offset = 0 - - def getpos(self): - """Return current line number and offset.""" - return self.lineno, self.offset - - # Internal -- update line number and offset. 
This should be - # called for each piece of data exactly once, in order -- in other - # words the concatenation of all the input strings to this - # function should be exactly the entire input. - def updatepos(self, i, j): - if i >= j: - return j - rawdata = self.rawdata - nlines = rawdata.count("\n", i, j) - if nlines: - self.lineno = self.lineno + nlines - pos = rawdata.rindex("\n", i, j) # Should not fail - self.offset = j-(pos+1) - else: - self.offset = self.offset + j-i - return j - - _decl_otherchars = '' - - # Internal -- parse declaration (for use by subclasses). - def parse_declaration(self, i): - # This is some sort of declaration; in "HTML as - # deployed," this should only be the document type - # declaration ("<!DOCTYPE html...>"). - # ISO 8879:1986, however, has more complex - # declaration syntax for elements in <!...>, including: - # --comment-- - # [marked section] - # name in the following list: ENTITY, DOCTYPE, ELEMENT, - # ATTLIST, NOTATION, SHORTREF, USEMAP, - # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM - rawdata = self.rawdata - j = i + 2 - assert rawdata[i:j] == "<!", "unexpected call to parse_declaration" - if rawdata[j:j+1] == ">": - # the empty comment <!> - return j + 1 - if rawdata[j:j+1] in ("-", ""): - # Start of comment followed by buffer boundary, - # or just a buffer boundary. - return -1 - # A simple, practical version could look like: ((name|stringlit) S*) + '>' - n = len(rawdata) - if rawdata[j:j+2] == '--': #comment - # Locate --.*-- as the body of the comment - return self.parse_comment(i) - elif rawdata[j] == '[': #marked section - # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section - # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA - # Note that this is extended by Microsoft Office "Save as Web" function - # to include [if...] and [endif]. 
- return self.parse_marked_section(i) - else: #all other declaration elements - decltype, j = self._scan_name(j, i) - if j < 0: - return j - if decltype == "doctype": - self._decl_otherchars = '' - while j < n: - c = rawdata[j] - if c == ">": - # end of declaration syntax - data = rawdata[i+2:j] - if decltype == "doctype": - self.handle_decl(data) - else: - self.unknown_decl(data) - return j + 1 - if c in "\"'": - m = _declstringlit_match(rawdata, j) - if not m: - return -1 # incomplete - j = m.end() - elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": - name, j = self._scan_name(j, i) - elif c in self._decl_otherchars: - j = j + 1 - elif c == "[": - # this could be handled in a separate doctype parser - if decltype == "doctype": - j = self._parse_doctype_subset(j + 1, i) - elif decltype in ("attlist", "linktype", "link", "element"): - # must tolerate []'d groups in a content model in an element declaration - # also in data attribute specifications of attlist declaration - # also link type declaration subsets in linktype declarations - # also link attribute specification lists in link declarations - self.error("unsupported '[' char in %s declaration" % decltype) - else: - self.error("unexpected '[' char in declaration") - else: - self.error( - "unexpected %r char in declaration" % rawdata[j]) - if j < 0: - return j - return -1 # incomplete - - # Internal -- parse a marked section - # Override this to handle MS-word extension syntax <![if word]>content<![endif]> - def parse_marked_section(self, i, report=1): - rawdata= self.rawdata - assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()" - sectName, j = self._scan_name( i+3, i ) - if j < 0: - return j - if sectName in ("temp", "cdata", "ignore", "include", "rcdata"): - # look for standard ]]> ending - match= _markedsectionclose.search(rawdata, i+3) - elif sectName in ("if", "else", "endif"): - # look for MS Office ]> ending - match= _msmarkedsectionclose.search(rawdata, i+3) - else: - self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) - if not match: - return -1 - if report: - j = match.start(0) - self.unknown_decl(rawdata[i+3: j]) - return match.end(0) - - # Internal -- parse comment, return length or -1 if not terminated - def parse_comment(self, i, report=1): - rawdata = self.rawdata - if rawdata[i:i+4] != '<!--': - self.error('unexpected call to parse_comment()') - match = _commentclose.search(rawdata, i+4) - if not match: - return -1 - if report: - j = match.start(0) - self.handle_comment(rawdata[i+4: j]) - return match.end(0) - - # Internal -- scan past the internal subset in a <!DOCTYPE declaration, - # returning the index just past any whitespace following the trailing ']'. 
- def _parse_doctype_subset(self, i, declstartpos): - rawdata = self.rawdata - n = len(rawdata) - j = i - while j < n: - c = rawdata[j] - if c == "<": - s = rawdata[j:j+2] - if s == "<": - # end of buffer; incomplete - return -1 - if s != "<!": - self.updatepos(declstartpos, j + 1) - self.error("unexpected char in internal subset (in %r)" % s) - if (j + 2) == n: - # end of buffer; incomplete - return -1 - if (j + 4) > n: - # end of buffer; incomplete - return -1 - if rawdata[j:j+4] == "<!--": - j = self.parse_comment(j, report=0) - if j < 0: - return j - continue - name, j = self._scan_name(j + 2, declstartpos) - if j == -1: - return -1 - if name not in ("attlist", "element", "entity", "notation"): - self.updatepos(declstartpos, j + 2) - self.error( - "unknown declaration %r in internal subset" % name) - # handle the individual names - meth = getattr(self, "_parse_doctype_" + name) - j = meth(j, declstartpos) - if j < 0: - return j - elif c == "%": - # parameter entity reference - if (j + 1) == n: - # end of buffer; incomplete - return -1 - s, j = self._scan_name(j + 1, declstartpos) - if j < 0: - return j - if rawdata[j] == ";": - j = j + 1 - elif c == "]": - j = j + 1 - while j < n and rawdata[j].isspace(): - j = j + 1 - if j < n: - if rawdata[j] == ">": - return j - self.updatepos(declstartpos, j) - self.error("unexpected char after internal subset") - else: - return -1 - elif c.isspace(): - j = j + 1 - else: - self.updatepos(declstartpos, j) - self.error("unexpected char %r in internal subset" % c) - # end of buffer reached - return -1 - - # Internal -- scan past <!ELEMENT declarations - def _parse_doctype_element(self, i, declstartpos): - name, j = self._scan_name(i, declstartpos) - if j == -1: - return -1 - # style content model; just skip until '>' - rawdata = self.rawdata - if '>' in rawdata[j:]: - return rawdata.find(">", j) + 1 - return -1 - - # Internal -- scan past <!ATTLIST declarations - def _parse_doctype_attlist(self, i, declstartpos): - rawdata = self.rawdata - name, j = self._scan_name(i, declstartpos) - c = rawdata[j:j+1] - if c == "": - return -1 - if c == ">": - return j + 1 - while 1: - # scan a series of attribute descriptions; simplified: - # name type [value] [#constraint] - name, j = self._scan_name(j, declstartpos) - if j < 0: - return j - c = rawdata[j:j+1] - if c == "": - return -1 - if c == "(": - # an enumerated type; look for ')' - if ")" in rawdata[j:]: - j = rawdata.find(")", j) + 1 - else: - return -1 - while rawdata[j:j+1].isspace(): - j = j + 1 - if not rawdata[j:]: - # end of buffer, incomplete - return -1 - else: - name, j = self._scan_name(j, declstartpos) - c = rawdata[j:j+1] - if not c: - return -1 - if c in "'\"": - m = _declstringlit_match(rawdata, j) - if m: - j = m.end() - else: - return -1 - c = rawdata[j:j+1] - if not c: - return -1 - if c == "#": - if rawdata[j:] == "#": - # end of buffer - return -1 - name, j = self._scan_name(j + 1, declstartpos) - if j < 0: - return j - c = rawdata[j:j+1] - if not c: - return -1 - if c == '>': - # all done - return j + 1 - - # Internal -- scan past <!NOTATION declarations - def _parse_doctype_notation(self, i, declstartpos): - name, j = self._scan_name(i, declstartpos) - if j < 0: - return j - rawdata = self.rawdata - while 1: - c = rawdata[j:j+1] - if not c: - # end of buffer; incomplete - return -1 - if c == '>': - return j + 1 - if c in "'\"": - m = _declstringlit_match(rawdata, j) - if not m: - return -1 - j = m.end() - else: - name, j = self._scan_name(j, declstartpos) - if j < 0: - return j - - # 
Internal -- scan past <!ENTITY declarations - def _parse_doctype_entity(self, i, declstartpos): - rawdata = self.rawdata - if rawdata[i:i+1] == "%": - j = i + 1 - while 1: - c = rawdata[j:j+1] - if not c: - return -1 - if c.isspace(): - j = j + 1 - else: - break - else: - j = i - name, j = self._scan_name(j, declstartpos) - if j < 0: - return j - while 1: - c = self.rawdata[j:j+1] - if not c: - return -1 - if c in "'\"": - m = _declstringlit_match(rawdata, j) - if m: - j = m.end() - else: - return -1 # incomplete - elif c == ">": - return j + 1 - else: - name, j = self._scan_name(j, declstartpos) - if j < 0: - return j - - # Internal -- scan a name token and the new position and the token, or - # return -1 if we've reached the end of the buffer. - def _scan_name(self, i, declstartpos): - rawdata = self.rawdata - n = len(rawdata) - if i == n: - return None, -1 - m = _declname_match(rawdata, i) - if m: - s = m.group() - name = s.strip() - if (i + len(s)) == n: - return None, -1 # end of buffer - return name.lower(), m.end() - else: - self.updatepos(declstartpos, i) - self.error("expected name token at %r" - % rawdata[declstartpos:declstartpos+20]) - - # To be overridden -- handlers for unknown objects - def unknown_decl(self, data): - pass diff --git a/plugin.video.alfa/lib/mechanize/_mechanize.py b/plugin.video.alfa/lib/mechanize/_mechanize.py deleted file mode 100755 index 69dc1a8a..00000000 --- a/plugin.video.alfa/lib/mechanize/_mechanize.py +++ /dev/null @@ -1,669 +0,0 @@ -"""Stateful programmatic WWW navigation, after Perl's WWW::Mechanize. - -Copyright 2003-2006 John J. Lee <jjl@pobox.com> -Copyright 2003 Andy Lester (original Perl code) - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file COPYING.txt -included with the distribution). - -""" - -import copy, re, os, urllib, urllib2 - -from _html import DefaultFactory -import _response -import _request -import _rfc3986 -import _sockettimeout -import _urllib2_fork -from _useragent import UserAgentBase - -class BrowserStateError(Exception): pass -class LinkNotFoundError(Exception): pass -class FormNotFoundError(Exception): pass - - -def sanepathname2url(path): - urlpath = urllib.pathname2url(path) - if os.name == "nt" and urlpath.startswith("///"): - urlpath = urlpath[2:] - # XXX don't ask me about the mac... - return urlpath - - -class History: - """ - - Though this will become public, the implied interface is not yet stable. - - """ - def __init__(self): - self._history = [] # LIFO - def add(self, request, response): - self._history.append((request, response)) - def back(self, n, _response): - response = _response # XXX move Browser._response into this class? - while n > 0 or response is None: - try: - request, response = self._history.pop() - except IndexError: - raise BrowserStateError("already at start of history") - n -= 1 - return request, response - def clear(self): - del self._history[:] - def close(self): - for request, response in self._history: - if response is not None: - response.close() - del self._history[:] - - -class HTTPRefererProcessor(_urllib2_fork.BaseHandler): - def http_request(self, request): - # See RFC 2616 14.36. The only times we know the source of the - # request URI has a URI associated with it are redirect, and - # Browser.click() / Browser.submit() / Browser.follow_link(). - # Otherwise, it's the user's job to add any Referer header before - # .open()ing. 
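# Editorial sketch of the fragment stripping applied in
# Browser._add_referer_header() below (RFC 2616 14.36: the fragment part
# must not be sent in a Referer header).  _rfc3986 is the helper module
# this file already imports; the URL is illustrative.
_parts = _rfc3986.urlsplit("http://example.com/page.html#chapter-2")
_referer = _rfc3986.urlunsplit(_parts[:-1] + (None,))
# _referer == "http://example.com/page.html"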
- if hasattr(request, "redirect_dict"): - request = self.parent._add_referer_header( - request, origin_request=False) - return request - - https_request = http_request - - -class Browser(UserAgentBase): - """Browser-like class with support for history, forms and links. - - BrowserStateError is raised whenever the browser is in the wrong state to - complete the requested operation - e.g., when .back() is called when the - browser history is empty, or when .follow_link() is called when the current - response does not contain HTML data. - - Public attributes: - - request: current request (mechanize.Request) - form: currently selected form (see .select_form()) - - """ - - handler_classes = copy.copy(UserAgentBase.handler_classes) - handler_classes["_referer"] = HTTPRefererProcessor - default_features = copy.copy(UserAgentBase.default_features) - default_features.append("_referer") - - def __init__(self, - factory=None, - history=None, - request_class=None, - ): - """ - - Only named arguments should be passed to this constructor. - - factory: object implementing the mechanize.Factory interface. - history: object implementing the mechanize.History interface. Note - this interface is still experimental and may change in future. - request_class: Request class to use. Defaults to mechanize.Request - - The Factory and History objects passed in are 'owned' by the Browser, - so they should not be shared across Browsers. In particular, - factory.set_response() should not be called except by the owning - Browser itself. - - Note that the supplied factory's request_class is overridden by this - constructor, to ensure only one Request class is used. - - """ - self._handle_referer = True - - if history is None: - history = History() - self._history = history - - if request_class is None: - request_class = _request.Request - - if factory is None: - factory = DefaultFactory() - factory.set_request_class(request_class) - self._factory = factory - self.request_class = request_class - - self.request = None - self._set_response(None, False) - - # do this last to avoid __getattr__ problems - UserAgentBase.__init__(self) - - def close(self): - UserAgentBase.close(self) - if self._response is not None: - self._response.close() - if self._history is not None: - self._history.close() - self._history = None - - # make use after .close easy to spot - self.form = None - self.request = self._response = None - self.request = self.response = self.set_response = None - self.geturl = self.reload = self.back = None - self.clear_history = self.set_cookie = self.links = self.forms = None - self.viewing_html = self.encoding = self.title = None - self.select_form = self.click = self.submit = self.click_link = None - self.follow_link = self.find_link = None - - def set_handle_referer(self, handle): - """Set whether to add Referer header to each request.""" - self._set_handler("_referer", handle) - self._handle_referer = bool(handle) - - def _add_referer_header(self, request, origin_request=True): - if self.request is None: - return request - scheme = request.get_type() - original_scheme = self.request.get_type() - if scheme not in ["http", "https"]: - return request - if not origin_request and not self.request.has_header("Referer"): - return request - - if (self._handle_referer and - original_scheme in ["http", "https"] and - not (original_scheme == "https" and scheme != "https")): - # strip URL fragment (RFC 2616 14.36) - parts = _rfc3986.urlsplit(self.request.get_full_url()) - parts = parts[:-1]+(None,) - referer = 
_rfc3986.urlunsplit(parts) - request.add_unredirected_header("Referer", referer) - return request - - def open_novisit(self, url, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - """Open a URL without visiting it. - - Browser state (including request, response, history, forms and links) - is left unchanged by calling this function. - - The interface is the same as for .open(). - - This is useful for things like fetching images. - - See also .retrieve(). - - """ - return self._mech_open(url, data, visit=False, timeout=timeout) - - def open(self, url, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - return self._mech_open(url, data, timeout=timeout) - - def _mech_open(self, url, data=None, update_history=True, visit=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - try: - url.get_full_url - except AttributeError: - # string URL -- convert to absolute URL if required - scheme, authority = _rfc3986.urlsplit(url)[:2] - if scheme is None: - # relative URL - if self._response is None: - raise BrowserStateError( - "can't fetch relative reference: " - "not viewing any document") - url = _rfc3986.urljoin(self._response.geturl(), url) - - request = self._request(url, data, visit, timeout) - visit = request.visit - if visit is None: - visit = True - - if visit: - self._visit_request(request, update_history) - - success = True - try: - response = UserAgentBase.open(self, request, data) - except urllib2.HTTPError, error: - success = False - if error.fp is None: # not a response - raise - response = error -## except (IOError, socket.error, OSError), error: -## # Yes, urllib2 really does raise all these :-(( -## # See test_urllib2.py for examples of socket.gaierror and OSError, -## # plus note that FTPHandler raises IOError. -## # XXX I don't seem to have an example of exactly socket.error being -## # raised, only socket.gaierror... -## # I don't want to start fixing these here, though, since this is a -## # subclass of OpenerDirector, and it would break old code. Even in -## # Python core, a fix would need some backwards-compat. hack to be -## # acceptable. -## raise - - if visit: - self._set_response(response, False) - response = copy.copy(self._response) - elif response is not None: - response = _response.upgrade_response(response) - - if not success: - raise response - return response - - def __str__(self): - text = [] - text.append("<%s " % self.__class__.__name__) - if self._response: - text.append("visiting %s" % self._response.geturl()) - else: - text.append("(not visiting a URL)") - if self.form: - text.append("\n selected form:\n %s\n" % str(self.form)) - text.append(">") - return "".join(text) - - def response(self): - """Return a copy of the current response. - - The returned object has the same interface as the object returned by - .open() (or mechanize.urlopen()). - - """ - return copy.copy(self._response) - - def open_local_file(self, filename): - path = sanepathname2url(os.path.abspath(filename)) - url = 'file://'+path - return self.open(url) - - def set_response(self, response): - """Replace current response with (a copy of) response. - - response may be None. - - This is intended mostly for HTML-preprocessing. 
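# Editorial sketch of the HTML-preprocessing use mentioned above: fetch a
# page, doctor the markup, then hand the result back via set_response().
# mechanize.make_response(data, headers, url, code, msg) is assumed to be
# the response factory exported by the package; the URL is illustrative.
import mechanize

br = mechanize.Browser()
original = br.open("http://example.com/")
html = original.read().replace("<blink>", "").replace("</blink>", "")
doctored = mechanize.make_response(html, [("Content-Type", "text/html")],
                                   original.geturl(), 200, "OK")
br.set_response(doctored)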
- """ - self._set_response(response, True) - - def _set_response(self, response, close_current): - # sanity check, necessary but far from sufficient - if not (response is None or - (hasattr(response, "info") and hasattr(response, "geturl") and - hasattr(response, "read") - ) - ): - raise ValueError("not a response object") - - self.form = None - if response is not None: - response = _response.upgrade_response(response) - if close_current and self._response is not None: - self._response.close() - self._response = response - self._factory.set_response(response) - - def visit_response(self, response, request=None): - """Visit the response, as if it had been .open()ed. - - Unlike .set_response(), this updates history rather than replacing the - current response. - """ - if request is None: - request = _request.Request(response.geturl()) - self._visit_request(request, True) - self._set_response(response, False) - - def _visit_request(self, request, update_history): - if self._response is not None: - self._response.close() - if self.request is not None and update_history: - self._history.add(self.request, self._response) - self._response = None - # we want self.request to be assigned even if UserAgentBase.open - # fails - self.request = request - - def geturl(self): - """Get URL of current document.""" - if self._response is None: - raise BrowserStateError("not viewing any document") - return self._response.geturl() - - def reload(self): - """Reload current document, and return response object.""" - if self.request is None: - raise BrowserStateError("no URL has yet been .open()ed") - if self._response is not None: - self._response.close() - return self._mech_open(self.request, update_history=False) - - def back(self, n=1): - """Go back n steps in history, and return response object. - - n: go back this number of steps (default 1 step) - - """ - if self._response is not None: - self._response.close() - self.request, response = self._history.back(n, self._response) - self.set_response(response) - if not response.read_complete: - return self.reload() - return copy.copy(response) - - def clear_history(self): - self._history.clear() - - def set_cookie(self, cookie_string): - """Request to set a cookie. - - Note that it is NOT necessary to call this method under ordinary - circumstances: cookie handling is normally entirely automatic. The - intended use case is rather to simulate the setting of a cookie by - client script in a web page (e.g. JavaScript). In that case, use of - this method is necessary because mechanize currently does not support - JavaScript, VBScript, etc. - - The cookie is added in the same way as if it had arrived with the - current response, as a result of the current request. This means that, - for example, if it is not appropriate to set the cookie based on the - current request, no cookie will be set. - - The cookie will be returned automatically with subsequent responses - made by the Browser instance whenever that's appropriate. - - cookie_string should be a valid value of the Set-Cookie header. - - For example: - - browser.set_cookie( - "sid=abcdef; expires=Wednesday, 09-Nov-06 23:12:40 GMT") - - Currently, this method does not allow for adding RFC 2986 cookies. - This limitation will be lifted if anybody requests it. 
- - """ - if self._response is None: - raise BrowserStateError("not viewing any document") - if self.request.get_type() not in ["http", "https"]: - raise BrowserStateError("can't set cookie for non-HTTP/HTTPS " - "transactions") - cookiejar = self._ua_handlers["_cookies"].cookiejar - response = self.response() # copy - headers = response.info() - headers["Set-cookie"] = cookie_string - cookiejar.extract_cookies(response, self.request) - - def links(self, **kwds): - """Return iterable over links (mechanize.Link objects).""" - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - links = self._factory.links() - if kwds: - return self._filter_links(links, **kwds) - else: - return links - - def forms(self): - """Return iterable over forms. - - The returned form objects implement the mechanize.HTMLForm interface. - - """ - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - return self._factory.forms() - - def global_form(self): - """Return the global form object, or None if the factory implementation - did not supply one. - - The "global" form object contains all controls that are not descendants - of any FORM element. - - The returned form object implements the mechanize.HTMLForm interface. - - This is a separate method since the global form is not regarded as part - of the sequence of forms in the document -- mostly for - backwards-compatibility. - - """ - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - return self._factory.global_form - - def viewing_html(self): - """Return whether the current response contains HTML data.""" - if self._response is None: - raise BrowserStateError("not viewing any document") - return self._factory.is_html - - def encoding(self): - if self._response is None: - raise BrowserStateError("not viewing any document") - return self._factory.encoding - - def title(self): - r"""Return title, or None if there is no title element in the document. - - Treatment of any tag children of attempts to follow Firefox and IE - (currently, tags are preserved). - - """ - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - return self._factory.title - - def select_form(self, name=None, predicate=None, nr=None): - """Select an HTML form for input. - - This is a bit like giving a form the "input focus" in a browser. - - If a form is selected, the Browser object supports the HTMLForm - interface, so you can call methods like .set_value(), .set(), and - .click(). - - Another way to select a form is to assign to the .form attribute. The - form assigned should be one of the objects returned by the .forms() - method. - - At least one of the name, predicate and nr arguments must be supplied. - If no matching form is found, mechanize.FormNotFoundError is raised. - - If name is specified, then the form must have the indicated name. - - If predicate is specified, then the form must match that function. The - predicate function is passed the HTMLForm as its single argument, and - should return a boolean value indicating whether the form matched. - - nr, if supplied, is the sequence number of the form (where 0 is the - first). Note that control 0 is the first form matching all the other - arguments (if supplied); it is not necessarily the first control in the - form. The "global form" (consisting of all form controls not contained - in any FORM element) is considered not to be part of this sequence and - to have no name, so will not be matched unless both name and nr are - None. 
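# Editorial sketch of the select_form() workflow documented above; the URL
# and control names are illustrative.  Item assignment on the Browser is
# proxied to the currently selected HTMLForm (see __getattr__ further down).
import mechanize

br = mechanize.Browser()
br.open("http://example.com/login")
br.select_form(nr=0)            # or select_form(name=...) / a predicate
br["user"] = "alice"
br["password"] = "secret"
response = br.submit()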
- - """ - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - if (name is None) and (predicate is None) and (nr is None): - raise ValueError( - "at least one argument must be supplied to specify form") - - global_form = self._factory.global_form - if nr is None and name is None and \ - predicate is not None and predicate(global_form): - self.form = global_form - return - - orig_nr = nr - for form in self.forms(): - if name is not None and name != form.name: - continue - if predicate is not None and not predicate(form): - continue - if nr: - nr -= 1 - continue - self.form = form - break # success - else: - # failure - description = [] - if name is not None: description.append("name '%s'" % name) - if predicate is not None: - description.append("predicate %s" % predicate) - if orig_nr is not None: description.append("nr %d" % orig_nr) - description = ", ".join(description) - raise FormNotFoundError("no form matching "+description) - - def click(self, *args, **kwds): - """See mechanize.HTMLForm.click for documentation.""" - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - request = self.form.click(*args, **kwds) - return self._add_referer_header(request) - - def submit(self, *args, **kwds): - """Submit current form. - - Arguments are as for mechanize.HTMLForm.click(). - - Return value is same as for Browser.open(). - - """ - return self.open(self.click(*args, **kwds)) - - def click_link(self, link=None, **kwds): - """Find a link and return a Request object for it. - - Arguments are as for .find_link(), except that a link may be supplied - as the first argument. - - """ - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - if not link: - link = self.find_link(**kwds) - else: - if kwds: - raise ValueError( - "either pass a Link, or keyword arguments, not both") - request = self.request_class(link.absolute_url) - return self._add_referer_header(request) - - def follow_link(self, link=None, **kwds): - """Find a link and .open() it. - - Arguments are as for .click_link(). - - Return value is same as for Browser.open(). - - """ - return self.open(self.click_link(link, **kwds)) - - def find_link(self, **kwds): - """Find a link in current page. - - Links are returned as mechanize.Link objects. - - # Return third link that .search()-matches the regexp "python" - # (by ".search()-matches", I mean that the regular expression method - # .search() is used, rather than .match()). - find_link(text_regex=re.compile("python"), nr=2) - - # Return first http link in the current page that points to somewhere - # on python.org whose link text (after tags have been removed) is - # exactly "monty python". - find_link(text="monty python", - url_regex=re.compile("http.*python.org")) - - # Return first link with exactly three HTML attributes. - find_link(predicate=lambda link: len(link.attrs) == 3) - - Links include anchors (<a>), image maps (<area>), and frames (<frame>, - <iframe>). - - All arguments must be passed by keyword, not position. Zero or more - arguments may be supplied. In order to find a link, all arguments - supplied must match. - - If a matching link is not found, mechanize.LinkNotFoundError is raised. - - text: link text between link tags: e.g. <a href="blah">this bit</a> (as - returned by pullparser.get_compressed_text(), ie. 
without tags but - with opening tags "textified" as per the pullparser docs) must compare - equal to this argument, if supplied - text_regex: link text between tag (as defined above) must match the - regular expression object or regular expression string passed as this - argument, if supplied - name, name_regex: as for text and text_regex, but matched against the - name HTML attribute of the link tag - url, url_regex: as for text and text_regex, but matched against the - URL of the link tag (note this matches against Link.url, which is a - relative or absolute URL according to how it was written in the HTML) - tag: element name of opening tag, e.g. "a" - predicate: a function taking a Link object as its single argument, - returning a boolean result, indicating whether the links - nr: matches the nth link that matches all other criteria (default 0) - - """ - try: - return self._filter_links(self._factory.links(), **kwds).next() - except StopIteration: - raise LinkNotFoundError() - - def __getattr__(self, name): - # pass through _form.HTMLForm methods and attributes - form = self.__dict__.get("form") - if form is None: - raise AttributeError( - "%s instance has no attribute %s (perhaps you forgot to " - ".select_form()?)" % (self.__class__, name)) - return getattr(form, name) - - def _filter_links(self, links, - text=None, text_regex=None, - name=None, name_regex=None, - url=None, url_regex=None, - tag=None, - predicate=None, - nr=0 - ): - if not self.viewing_html(): - raise BrowserStateError("not viewing HTML") - - orig_nr = nr - - for link in links: - if url is not None and url != link.url: - continue - if url_regex is not None and not re.search(url_regex, link.url): - continue - if (text is not None and - (link.text is None or text != link.text)): - continue - if (text_regex is not None and - (link.text is None or not re.search(text_regex, link.text))): - continue - if name is not None and name != dict(link.attrs).get("name"): - continue - if name_regex is not None: - link_name = dict(link.attrs).get("name") - if link_name is None or not re.search(name_regex, link_name): - continue - if tag is not None and tag != link.tag: - continue - if predicate is not None and not predicate(link): - continue - if nr: - nr -= 1 - continue - yield link - nr = orig_nr diff --git a/plugin.video.alfa/lib/mechanize/_mozillacookiejar.py b/plugin.video.alfa/lib/mechanize/_mozillacookiejar.py deleted file mode 100755 index 21296ed5..00000000 --- a/plugin.video.alfa/lib/mechanize/_mozillacookiejar.py +++ /dev/null @@ -1,161 +0,0 @@ -"""Mozilla / Netscape cookie loading / saving. - -Copyright 2002-2006 John J Lee <jjl@pobox.com> -Copyright 1997-1999 Gisle Aas (original libwww-perl code) - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import re, time, logging - -from _clientcookie import reraise_unmasked_exceptions, FileCookieJar, Cookie, \ - MISSING_FILENAME_TEXT, LoadError -debug = logging.getLogger("ClientCookie").debug - - -class MozillaCookieJar(FileCookieJar): - """ - - WARNING: you may want to backup your browser's cookies file if you use - this class to save cookies. I *think* it works, but there have been - bugs in the past! - - This class differs from CookieJar only in the format it uses to save and - load cookies to and from a file. This class uses the Mozilla/Netscape - `cookies.txt' format. lynx uses this file format, too. 
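# Editorial sketch (not part of the deleted module): loading a Netscape
# cookies.txt file with the class documented above, wired up the same way
# as in the MSIECookieJar docstring later in this patch.  The path and URL
# are illustrative.
import mechanize

cj = mechanize.MozillaCookieJar()
cj.load("cookies.txt", ignore_discard=True, ignore_expires=True)
opener = mechanize.build_opener(mechanize.HTTPCookieProcessor(cj))
response = opener.open("http://example.com/")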
- - Don't expect cookies saved while the browser is running to be noticed by - the browser (in fact, Mozilla on unix will overwrite your saved cookies if - you change them on disk while it's running; on Windows, you probably can't - save at all while the browser is running). - - Note that the Mozilla/Netscape format will downgrade RFC2965 cookies to - Netscape cookies on saving. - - In particular, the cookie version and port number information is lost, - together with information about whether or not Path, Port and Discard were - specified by the Set-Cookie2 (or Set-Cookie) header, and whether or not the - domain as set in the HTTP header started with a dot (yes, I'm aware some - domains in Netscape files start with a dot and some don't -- trust me, you - really don't want to know any more about this). - - Note that though Mozilla and Netscape use the same format, they use - slightly different headers. The class saves cookies using the Netscape - header by default (Mozilla can cope with that). - - """ - magic_re = "#( Netscape)? HTTP Cookie File" - header = """\ - # Netscape HTTP Cookie File - # http://www.netscape.com/newsref/std/cookie_spec.html - # This is a generated file! Do not edit. - -""" - - def _really_load(self, f, filename, ignore_discard, ignore_expires): - now = time.time() - - magic = f.readline() - if not re.search(self.magic_re, magic): - f.close() - raise LoadError( - "%s does not look like a Netscape format cookies file" % - filename) - - try: - while 1: - line = f.readline() - if line == "": break - - # last field may be absent, so keep any trailing tab - if line.endswith("\n"): line = line[:-1] - - # skip comments and blank lines XXX what is $ for? - if (line.strip().startswith("#") or - line.strip().startswith("$") or - line.strip() == ""): - continue - - domain, domain_specified, path, secure, expires, name, value = \ - line.split("\t", 6) - secure = (secure == "TRUE") - domain_specified = (domain_specified == "TRUE") - if name == "": - name = value - value = None - - initial_dot = domain.startswith(".") - if domain_specified != initial_dot: - raise LoadError("domain and domain specified flag don't " - "match in %s: %s" % (filename, line)) - - discard = False - if expires == "": - expires = None - discard = True - - # assume path_specified is false - c = Cookie(0, name, value, - None, False, - domain, domain_specified, initial_dot, - path, False, - secure, - expires, - discard, - None, - None, - {}) - if not ignore_discard and c.discard: - continue - if not ignore_expires and c.is_expired(now): - continue - self.set_cookie(c) - - except: - reraise_unmasked_exceptions((IOError, LoadError)) - raise LoadError("invalid Netscape format file %s: %s" % - (filename, line)) - - def save(self, filename=None, ignore_discard=False, ignore_expires=False): - if filename is None: - if self.filename is not None: filename = self.filename - else: raise ValueError(MISSING_FILENAME_TEXT) - - f = open(filename, "w") - try: - debug("Saving Netscape cookies.txt file") - f.write(self.header) - now = time.time() - for cookie in self: - if not ignore_discard and cookie.discard: - debug(" Not saving %s: marked for discard", cookie.name) - continue - if not ignore_expires and cookie.is_expired(now): - debug(" Not saving %s: expired", cookie.name) - continue - if cookie.secure: secure = "TRUE" - else: secure = "FALSE" - if cookie.domain.startswith("."): initial_dot = "TRUE" - else: initial_dot = "FALSE" - if cookie.expires is not None: - expires = str(cookie.expires) - else: - expires = "" - if 
cookie.value is None: - # cookies.txt regards 'Set-Cookie: foo' as a cookie - # with no name, whereas cookielib regards it as a - # cookie with no value. - name = "" - value = cookie.name - else: - name = cookie.name - value = cookie.value - f.write( - "\t".join([cookie.domain, initial_dot, cookie.path, - secure, expires, name, value])+ - "\n") - finally: - f.close() diff --git a/plugin.video.alfa/lib/mechanize/_msiecookiejar.py b/plugin.video.alfa/lib/mechanize/_msiecookiejar.py deleted file mode 100755 index 234d30be..00000000 --- a/plugin.video.alfa/lib/mechanize/_msiecookiejar.py +++ /dev/null @@ -1,388 +0,0 @@ -"""Microsoft Internet Explorer cookie loading on Windows. - -Copyright 2002-2003 Johnny Lee <typo_pl@hotmail.com> (MSIE Perl code) -Copyright 2002-2006 John J Lee <jjl@pobox.com> (The Python port) - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -# XXX names and comments are not great here - -import os, re, time, struct, logging -if os.name == "nt": - import _winreg - -from _clientcookie import FileCookieJar, CookieJar, Cookie, \ - MISSING_FILENAME_TEXT, LoadError - -debug = logging.getLogger("mechanize").debug - - -def regload(path, leaf): - key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, path, 0, - _winreg.KEY_ALL_ACCESS) - try: - value = _winreg.QueryValueEx(key, leaf)[0] - except WindowsError: - value = None - return value - -WIN32_EPOCH = 0x019db1ded53e8000L # 1970 Jan 01 00:00:00 in Win32 FILETIME - -def epoch_time_offset_from_win32_filetime(filetime): - """Convert from win32 filetime to seconds-since-epoch value. - - MSIE stores create and expire times as Win32 FILETIME, which is 64 - bits of 100 nanosecond intervals since Jan 01 1601. - - mechanize expects time in 32-bit value expressed in seconds since the - epoch (Jan 01 1970). - - """ - if filetime < WIN32_EPOCH: - raise ValueError("filetime (%d) is before epoch (%d)" % - (filetime, WIN32_EPOCH)) - - return divmod((filetime - WIN32_EPOCH), 10000000L)[0] - -def binary_to_char(c): return "%02X" % ord(c) -def binary_to_str(d): return "".join(map(binary_to_char, list(d))) - -class MSIEBase: - magic_re = re.compile(r"Client UrlCache MMF Ver \d\.\d.*") - padding = "\x0d\xf0\xad\x0b" - - msie_domain_re = re.compile(r"^([^/]+)(/.*)$") - cookie_re = re.compile("Cookie\:.+\@([\x21-\xFF]+).*?" 
- "(.+\@[\x21-\xFF]+\.txt)") - - # path under HKEY_CURRENT_USER from which to get location of index.dat - reg_path = r"software\microsoft\windows" \ - r"\currentversion\explorer\shell folders" - reg_key = "Cookies" - - def __init__(self): - self._delayload_domains = {} - - def _delayload_domain(self, domain): - # if necessary, lazily load cookies for this domain - delayload_info = self._delayload_domains.get(domain) - if delayload_info is not None: - cookie_file, ignore_discard, ignore_expires = delayload_info - try: - self.load_cookie_data(cookie_file, - ignore_discard, ignore_expires) - except (LoadError, IOError): - debug("error reading cookie file, skipping: %s", cookie_file) - else: - del self._delayload_domains[domain] - - def _load_cookies_from_file(self, filename): - debug("Loading MSIE cookies file: %s", filename) - cookies = [] - - cookies_fh = open(filename) - - try: - while 1: - key = cookies_fh.readline() - if key == "": break - - rl = cookies_fh.readline - def getlong(rl=rl): return long(rl().rstrip()) - def getstr(rl=rl): return rl().rstrip() - - key = key.rstrip() - value = getstr() - domain_path = getstr() - flags = getlong() # 0x2000 bit is for secure I think - lo_expire = getlong() - hi_expire = getlong() - lo_create = getlong() - hi_create = getlong() - sep = getstr() - - if "" in (key, value, domain_path, flags, hi_expire, lo_expire, - hi_create, lo_create, sep) or (sep != "*"): - break - - m = self.msie_domain_re.search(domain_path) - if m: - domain = m.group(1) - path = m.group(2) - - cookies.append({"KEY": key, "VALUE": value, - "DOMAIN": domain, "PATH": path, - "FLAGS": flags, "HIXP": hi_expire, - "LOXP": lo_expire, "HICREATE": hi_create, - "LOCREATE": lo_create}) - finally: - cookies_fh.close() - - return cookies - - def load_cookie_data(self, filename, - ignore_discard=False, ignore_expires=False): - """Load cookies from file containing actual cookie data. - - Old cookies are kept unless overwritten by newly loaded ones. - - You should not call this method if the delayload attribute is set. - - I think each of these files contain all cookies for one user, domain, - and path. - - filename: file containing cookies -- usually found in a file like - C:\WINNT\Profiles\joe\Cookies\joe@blah[1].txt - - """ - now = int(time.time()) - - cookie_data = self._load_cookies_from_file(filename) - - for cookie in cookie_data: - flags = cookie["FLAGS"] - secure = ((flags & 0x2000) != 0) - filetime = (cookie["HIXP"] << 32) + cookie["LOXP"] - expires = epoch_time_offset_from_win32_filetime(filetime) - if expires < now: - discard = True - else: - discard = False - domain = cookie["DOMAIN"] - initial_dot = domain.startswith(".") - if initial_dot: - domain_specified = True - else: - # MSIE 5 does not record whether the domain cookie-attribute - # was specified. - # Assuming it wasn't is conservative, because with strict - # domain matching this will match less frequently; with regular - # Netscape tail-matching, this will match at exactly the same - # times that domain_specified = True would. It also means we - # don't have to prepend a dot to achieve consistency with our - # own & Mozilla's domain-munging scheme. - domain_specified = False - - # assume path_specified is false - # XXX is there other stuff in here? -- e.g. comment, commentURL? 
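# Editorial worked example for epoch_time_offset_from_win32_filetime(),
# defined earlier in this module and used just above: FILETIME counts
# 100-nanosecond ticks since 1601-01-01, so a value exactly one day past
# WIN32_EPOCH (the 1970-01-01 offset) converts to 86400 seconds.
one_day_in_ticks = 86400 * 10000000L
assert epoch_time_offset_from_win32_filetime(
    WIN32_EPOCH + one_day_in_ticks) == 86400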
- c = Cookie(0, - cookie["KEY"], cookie["VALUE"], - None, False, - domain, domain_specified, initial_dot, - cookie["PATH"], False, - secure, - expires, - discard, - None, - None, - {"flags": flags}) - if not ignore_discard and c.discard: - continue - if not ignore_expires and c.is_expired(now): - continue - CookieJar.set_cookie(self, c) - - def load_from_registry(self, ignore_discard=False, ignore_expires=False, - username=None): - """ - username: only required on win9x - - """ - cookies_dir = regload(self.reg_path, self.reg_key) - filename = os.path.normpath(os.path.join(cookies_dir, "INDEX.DAT")) - self.load(filename, ignore_discard, ignore_expires, username) - - def _really_load(self, index, filename, ignore_discard, ignore_expires, - username): - now = int(time.time()) - - if username is None: - username = os.environ['USERNAME'].lower() - - cookie_dir = os.path.dirname(filename) - - data = index.read(256) - if len(data) != 256: - raise LoadError("%s file is too short" % filename) - - # Cookies' index.dat file starts with 32 bytes of signature - # followed by an offset to the first record, stored as a little- - # endian DWORD. - sig, size, data = data[:32], data[32:36], data[36:] - size = struct.unpack("<L", size)[0] - - # check that sig is valid - if not self.magic_re.match(sig) or size != 0x4000: - raise LoadError("%s ['%s' %s] does not seem to contain cookies" % - (str(filename), sig, size)) - - # skip to start of first record - index.seek(size, 0) - - sector = 128 # size of sector in bytes - - while 1: - data = "" - - # Cookies are usually in two contiguous sectors, so read in two - # sectors and adjust if not a Cookie. - to_read = 2 * sector - d = index.read(to_read) - if len(d) != to_read: - break - data = data + d - - # Each record starts with a 4-byte signature and a count - # (little-endian DWORD) of sectors for the record. - sig, size, data = data[:4], data[4:8], data[8:] - size = struct.unpack("<L", size)[0] - - to_read = (size - 2) * sector - -## from urllib import quote -## print "data", quote(data) -## print "sig", quote(sig) -## print "size in sectors", size -## print "size in bytes", size*sector -## print "size in units of 16 bytes", (size*sector) / 16 -## print "size to read in bytes", to_read -## print - - if sig != "URL ": - assert sig in ("HASH", "LEAK", \ - self.padding, "\x00\x00\x00\x00"), \ - "unrecognized MSIE index.dat record: %s" % \ - binary_to_str(sig) - if sig == "\x00\x00\x00\x00": - # assume we've got all the cookies, and stop - break - if sig == self.padding: - continue - # skip the rest of this record - assert to_read >= 0 - if size != 2: - assert to_read != 0 - index.seek(to_read, 1) - continue - - # read in rest of record if necessary - if size > 2: - more_data = index.read(to_read) - if len(more_data) != to_read: break - data = data + more_data - - cookie_re = ("Cookie\:%s\@([\x21-\xFF]+).*?" % username + - "(%s\@[\x21-\xFF]+\.txt)" % username) - m = re.search(cookie_re, data, re.I) - if m: - cookie_file = os.path.join(cookie_dir, m.group(2)) - if not self.delayload: - try: - self.load_cookie_data(cookie_file, - ignore_discard, ignore_expires) - except (LoadError, IOError): - debug("error reading cookie file, skipping: %s", - cookie_file) - else: - domain = m.group(1) - i = domain.find("/") - if i != -1: - domain = domain[:i] - - self._delayload_domains[domain] = ( - cookie_file, ignore_discard, ignore_expires) - - -class MSIECookieJar(MSIEBase, FileCookieJar): - """FileCookieJar that reads from the Windows MSIE cookies database. 
- - MSIECookieJar can read the cookie files of Microsoft Internet Explorer - (MSIE) for Windows version 5 on Windows NT and version 6 on Windows XP and - Windows 98. Other configurations may also work, but are untested. Saving - cookies in MSIE format is NOT supported. If you save cookies, they'll be - in the usual Set-Cookie3 format, which you can read back in using an - instance of the plain old CookieJar class. Don't save using the same - filename that you loaded cookies from, because you may succeed in - clobbering your MSIE cookies index file! - - You should be able to have LWP share Internet Explorer's cookies like - this (note you need to supply a username to load_from_registry if you're on - Windows 9x or Windows ME): - - cj = MSIECookieJar(delayload=1) - # find cookies index file in registry and load cookies from it - cj.load_from_registry() - opener = mechanize.build_opener(mechanize.HTTPCookieProcessor(cj)) - response = opener.open("http://example.com/") - - Iterating over a delayloaded MSIECookieJar instance will not cause any - cookies to be read from disk. To force reading of all cookies from disk, - call read_all_cookies. Note that the following methods iterate over self: - clear_temporary_cookies, clear_expired_cookies, __len__, __repr__, __str__ - and as_string. - - Additional methods: - - load_from_registry(ignore_discard=False, ignore_expires=False, - username=None) - load_cookie_data(filename, ignore_discard=False, ignore_expires=False) - read_all_cookies() - - """ - def __init__(self, filename=None, delayload=False, policy=None): - MSIEBase.__init__(self) - FileCookieJar.__init__(self, filename, delayload, policy) - - def set_cookie(self, cookie): - if self.delayload: - self._delayload_domain(cookie.domain) - CookieJar.set_cookie(self, cookie) - - def _cookies_for_request(self, request): - """Return a list of cookies to be returned to server.""" - domains = self._cookies.copy() - domains.update(self._delayload_domains) - domains = domains.keys() - - cookies = [] - for domain in domains: - cookies.extend(self._cookies_for_domain(domain, request)) - return cookies - - def _cookies_for_domain(self, domain, request): - if not self._policy.domain_return_ok(domain, request): - return [] - debug("Checking %s for cookies to return", domain) - if self.delayload: - self._delayload_domain(domain) - return CookieJar._cookies_for_domain(self, domain, request) - - def read_all_cookies(self): - """Eagerly read in all cookies.""" - if self.delayload: - for domain in self._delayload_domains.keys(): - self._delayload_domain(domain) - - def load(self, filename, ignore_discard=False, ignore_expires=False, - username=None): - """Load cookies from an MSIE 'index.dat' cookies index file. - - filename: full path to cookie index file - username: only required on win9x - - """ - if filename is None: - if self.filename is not None: filename = self.filename - else: raise ValueError(MISSING_FILENAME_TEXT) - - index = open(filename, "rb") - - try: - self._really_load(index, filename, ignore_discard, ignore_expires, - username) - finally: - index.close() diff --git a/plugin.video.alfa/lib/mechanize/_opener.py b/plugin.video.alfa/lib/mechanize/_opener.py deleted file mode 100755 index f2e20b8c..00000000 --- a/plugin.video.alfa/lib/mechanize/_opener.py +++ /dev/null @@ -1,442 +0,0 @@ -"""URL opener. 
- -Copyright 2004-2006 John J Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import os, urllib2, bisect, httplib, types, tempfile -try: - import threading as _threading -except ImportError: - import dummy_threading as _threading -try: - set -except NameError: - import sets - set = sets.Set - -from _request import Request -import _response -import _rfc3986 -import _sockettimeout -import _urllib2_fork -from _util import isstringlike - -open_file = open - - -class ContentTooShortError(urllib2.URLError): - def __init__(self, reason, result): - urllib2.URLError.__init__(self, reason) - self.result = result - - -def set_request_attr(req, name, value, default): - try: - getattr(req, name) - except AttributeError: - setattr(req, name, default) - if value is not default: - setattr(req, name, value) - - -class OpenerDirector(_urllib2_fork.OpenerDirector): - def __init__(self): - _urllib2_fork.OpenerDirector.__init__(self) - # really none of these are (sanely) public -- the lack of initial - # underscore on some is just due to following urllib2 - self.process_response = {} - self.process_request = {} - self._any_request = {} - self._any_response = {} - self._handler_index_valid = True - self._tempfiles = [] - - def add_handler(self, handler): - if not hasattr(handler, "add_parent"): - raise TypeError("expected BaseHandler instance, got %r" % - type(handler)) - - if handler in self.handlers: - return - # XXX why does self.handlers need to be sorted? - bisect.insort(self.handlers, handler) - handler.add_parent(self) - self._handler_index_valid = False - - def _maybe_reindex_handlers(self): - if self._handler_index_valid: - return - - handle_error = {} - handle_open = {} - process_request = {} - process_response = {} - any_request = set() - any_response = set() - unwanted = [] - - for handler in self.handlers: - added = False - for meth in dir(handler): - if meth in ["redirect_request", "do_open", "proxy_open"]: - # oops, coincidental match - continue - - if meth == "any_request": - any_request.add(handler) - added = True - continue - elif meth == "any_response": - any_response.add(handler) - added = True - continue - - ii = meth.find("_") - scheme = meth[:ii] - condition = meth[ii+1:] - - if condition.startswith("error"): - jj = meth[ii+1:].find("_") + ii + 1 - kind = meth[jj+1:] - try: - kind = int(kind) - except ValueError: - pass - lookup = handle_error.setdefault(scheme, {}) - elif condition == "open": - kind = scheme - lookup = handle_open - elif condition == "request": - kind = scheme - lookup = process_request - elif condition == "response": - kind = scheme - lookup = process_response - else: - continue - - lookup.setdefault(kind, set()).add(handler) - added = True - - if not added: - unwanted.append(handler) - - for handler in unwanted: - self.handlers.remove(handler) - - # sort indexed methods - # XXX could be cleaned up - for lookup in [process_request, process_response]: - for scheme, handlers in lookup.iteritems(): - lookup[scheme] = handlers - for scheme, lookup in handle_error.iteritems(): - for code, handlers in lookup.iteritems(): - handlers = list(handlers) - handlers.sort() - lookup[code] = handlers - for scheme, handlers in handle_open.iteritems(): - handlers = list(handlers) - handlers.sort() - handle_open[scheme] = handlers - - # cache the indexes - self.handle_error = handle_error - self.handle_open = handle_open - 
self.process_request = process_request - self.process_response = process_response - self._any_request = any_request - self._any_response = any_response - - def _request(self, url_or_req, data, visit, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - if isstringlike(url_or_req): - req = Request(url_or_req, data, visit=visit, timeout=timeout) - else: - # already a mechanize.Request instance - req = url_or_req - if data is not None: - req.add_data(data) - # XXX yuck - set_request_attr(req, "visit", visit, None) - set_request_attr(req, "timeout", timeout, - _sockettimeout._GLOBAL_DEFAULT_TIMEOUT) - return req - - def open(self, fullurl, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - req = self._request(fullurl, data, None, timeout) - req_scheme = req.get_type() - - self._maybe_reindex_handlers() - - # pre-process request - # XXX should we allow a Processor to change the URL scheme - # of the request? - request_processors = set(self.process_request.get(req_scheme, [])) - request_processors.update(self._any_request) - request_processors = list(request_processors) - request_processors.sort() - for processor in request_processors: - for meth_name in ["any_request", req_scheme+"_request"]: - meth = getattr(processor, meth_name, None) - if meth: - req = meth(req) - - # In Python >= 2.4, .open() supports processors already, so we must - # call ._open() instead. - urlopen = _urllib2_fork.OpenerDirector._open - response = urlopen(self, req, data) - - # post-process response - response_processors = set(self.process_response.get(req_scheme, [])) - response_processors.update(self._any_response) - response_processors = list(response_processors) - response_processors.sort() - for processor in response_processors: - for meth_name in ["any_response", req_scheme+"_response"]: - meth = getattr(processor, meth_name, None) - if meth: - response = meth(req, response) - - return response - - def error(self, proto, *args): - if proto in ['http', 'https']: - # XXX http[s] protocols are special-cased - dict = self.handle_error['http'] # https is not different than http - proto = args[2] # YUCK! - meth_name = 'http_error_%s' % proto - http_err = 1 - orig_args = args - else: - dict = self.handle_error - meth_name = proto + '_error' - http_err = 0 - args = (dict, proto, meth_name) + args - result = apply(self._call_chain, args) - if result: - return result - - if http_err: - args = (dict, 'default', 'http_error_default') + orig_args - return apply(self._call_chain, args) - - BLOCK_SIZE = 1024*8 - def retrieve(self, fullurl, filename=None, reporthook=None, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT, - open=open_file): - """Returns (filename, headers). - - For remote objects, the default filename will refer to a temporary - file. Temporary files are removed when the OpenerDirector.close() - method is called. - - For file: URLs, at present the returned filename is None. This may - change in future. - - If the actual number of bytes read is less than indicated by the - Content-Length header, raises ContentTooShortError (a URLError - subclass). The exception's .result attribute contains the (filename, - headers) that would have been returned. 
- - """ - req = self._request(fullurl, data, False, timeout) - scheme = req.get_type() - fp = self.open(req) - try: - headers = fp.info() - if filename is None and scheme == 'file': - # XXX req.get_selector() seems broken here, return None, - # pending sanity :-/ - return None, headers - #return urllib.url2pathname(req.get_selector()), headers - if filename: - tfp = open(filename, 'wb') - else: - path = _rfc3986.urlsplit(req.get_full_url())[2] - suffix = os.path.splitext(path)[1] - fd, filename = tempfile.mkstemp(suffix) - self._tempfiles.append(filename) - tfp = os.fdopen(fd, 'wb') - try: - result = filename, headers - bs = self.BLOCK_SIZE - size = -1 - read = 0 - blocknum = 0 - if reporthook: - if "content-length" in headers: - size = int(headers["Content-Length"]) - reporthook(blocknum, bs, size) - while 1: - block = fp.read(bs) - if block == "": - break - read += len(block) - tfp.write(block) - blocknum += 1 - if reporthook: - reporthook(blocknum, bs, size) - finally: - tfp.close() - finally: - fp.close() - - # raise exception if actual size does not match content-length header - if size >= 0 and read < size: - raise ContentTooShortError( - "retrieval incomplete: " - "got only %i out of %i bytes" % (read, size), - result - ) - - return result - - def close(self): - _urllib2_fork.OpenerDirector.close(self) - - # make it very obvious this object is no longer supposed to be used - self.open = self.error = self.retrieve = self.add_handler = None - - if self._tempfiles: - for filename in self._tempfiles: - try: - os.unlink(filename) - except OSError: - pass - del self._tempfiles[:] - - -def wrapped_open(urlopen, process_response_object, fullurl, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - success = True - try: - response = urlopen(fullurl, data, timeout) - except urllib2.HTTPError, error: - success = False - if error.fp is None: # not a response - raise - response = error - - if response is not None: - response = process_response_object(response) - - if not success: - raise response - return response - -class ResponseProcessingOpener(OpenerDirector): - - def open(self, fullurl, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - def bound_open(fullurl, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - return OpenerDirector.open(self, fullurl, data, timeout) - return wrapped_open( - bound_open, self.process_response_object, fullurl, data, timeout) - - def process_response_object(self, response): - return response - - -class SeekableResponseOpener(ResponseProcessingOpener): - def process_response_object(self, response): - return _response.seek_wrapped_response(response) - - -def isclass(obj): - return isinstance(obj, (types.ClassType, type)) - - -class OpenerFactory: - """This class's interface is quite likely to change.""" - - default_classes = [ - # handlers - _urllib2_fork.ProxyHandler, - _urllib2_fork.UnknownHandler, - _urllib2_fork.HTTPHandler, - _urllib2_fork.HTTPDefaultErrorHandler, - _urllib2_fork.HTTPRedirectHandler, - _urllib2_fork.FTPHandler, - _urllib2_fork.FileHandler, - # processors - _urllib2_fork.HTTPCookieProcessor, - _urllib2_fork.HTTPErrorProcessor, - ] - if hasattr(httplib, 'HTTPS'): - default_classes.append(_urllib2_fork.HTTPSHandler) - handlers = [] - replacement_handlers = [] - - def __init__(self, klass=OpenerDirector): - self.klass = klass - - def build_opener(self, *handlers): - """Create an opener object from a list of handlers and processors. 
- - The opener will use several default handlers and processors, including - support for HTTP and FTP. - - If any of the handlers passed as arguments are subclasses of the - default handlers, the default handlers will not be used. - - """ - opener = self.klass() - default_classes = list(self.default_classes) - skip = set() - for klass in default_classes: - for check in handlers: - if isclass(check): - if issubclass(check, klass): - skip.add(klass) - elif isinstance(check, klass): - skip.add(klass) - for klass in skip: - default_classes.remove(klass) - - for klass in default_classes: - opener.add_handler(klass()) - for h in handlers: - if isclass(h): - h = h() - opener.add_handler(h) - - return opener - - -build_opener = OpenerFactory().build_opener - -_opener = None -urlopen_lock = _threading.Lock() -def urlopen(url, data=None, timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - global _opener - if _opener is None: - urlopen_lock.acquire() - try: - if _opener is None: - _opener = build_opener() - finally: - urlopen_lock.release() - return _opener.open(url, data, timeout) - -def urlretrieve(url, filename=None, reporthook=None, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - global _opener - if _opener is None: - urlopen_lock.acquire() - try: - if _opener is None: - _opener = build_opener() - finally: - urlopen_lock.release() - return _opener.retrieve(url, filename, reporthook, data, timeout) - -def install_opener(opener): - global _opener - _opener = opener diff --git a/plugin.video.alfa/lib/mechanize/_pullparser.py b/plugin.video.alfa/lib/mechanize/_pullparser.py deleted file mode 100755 index f4cc756e..00000000 --- a/plugin.video.alfa/lib/mechanize/_pullparser.py +++ /dev/null @@ -1,391 +0,0 @@ -"""A simple "pull API" for HTML parsing, after Perl's HTML::TokeParser. - -Examples - -This program extracts all links from a document. It will print one -line for each link, containing the URL and the textual description -between the <A>...</A> tags: - -import pullparser, sys -f = file(sys.argv[1]) -p = pullparser.PullParser(f) -for token in p.tags("a"): - if token.type == "endtag": continue - url = dict(token.attrs).get("href", "-") - text = p.get_compressed_text(endat=("endtag", "a")) - print "%s\t%s" % (url, text) - -This program extracts the <TITLE> from the document: - -import pullparser, sys -f = file(sys.argv[1]) -p = pullparser.PullParser(f) -if p.get_tag("title"): - title = p.get_compressed_text() - print "Title: %s" % title - - -Copyright 2003-2006 John J. Lee <jjl@pobox.com> -Copyright 1998-2001 Gisle Aas (original libwww-perl code) - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses. - -""" - -import re, htmlentitydefs -import _sgmllib_copy as sgmllib -import HTMLParser -from xml.sax import saxutils - -from _html import unescape, unescape_charref - - -class NoMoreTokensError(Exception): pass - -class Token: - """Represents an HTML tag, declaration, processing instruction etc. - - Behaves as both a tuple-like object (ie. iterable) and has attributes - .type, .data and .attrs. 
- - >>> t = Token("starttag", "a", [("href", "http://www.python.org/")]) - >>> t == ("starttag", "a", [("href", "http://www.python.org/")]) - True - >>> (t.type, t.data) == ("starttag", "a") - True - >>> t.attrs == [("href", "http://www.python.org/")] - True - - Public attributes - - type: one of "starttag", "endtag", "startendtag", "charref", "entityref", - "data", "comment", "decl", "pi", after the corresponding methods of - HTMLParser.HTMLParser - data: For a tag, the tag name; otherwise, the relevant data carried by the - tag, as a string - attrs: list of (name, value) pairs representing HTML attributes - (or None if token does not represent an opening tag) - - """ - def __init__(self, type, data, attrs=None): - self.type = type - self.data = data - self.attrs = attrs - def __iter__(self): - return iter((self.type, self.data, self.attrs)) - def __eq__(self, other): - type, data, attrs = other - if (self.type == type and - self.data == data and - self.attrs == attrs): - return True - else: - return False - def __ne__(self, other): return not self.__eq__(other) - def __repr__(self): - args = ", ".join(map(repr, [self.type, self.data, self.attrs])) - return self.__class__.__name__+"(%s)" % args - - def __str__(self): - """ - >>> print Token("starttag", "br") - <br> - >>> print Token("starttag", "a", - ... [("href", "http://www.python.org/"), ("alt", '"foo"')]) - <a href="http://www.python.org/" alt='"foo"'> - >>> print Token("startendtag", "br") - <br /> - >>> print Token("startendtag", "br", [("spam", "eggs")]) - <br spam="eggs" /> - >>> print Token("endtag", "p") - </p> - >>> print Token("charref", "38") - & - >>> print Token("entityref", "amp") - & - >>> print Token("data", "foo\\nbar") - foo - bar - >>> print Token("comment", "Life is a bowl\\nof cherries.") - <!--Life is a bowl - of cherries.--> - >>> print Token("decl", "decl") - <!decl> - >>> print Token("pi", "pi") - <?pi> - """ - if self.attrs is not None: - attrs = "".join([" %s=%s" % (k, saxutils.quoteattr(v)) for - k, v in self.attrs]) - else: - attrs = "" - if self.type == "starttag": - return "<%s%s>" % (self.data, attrs) - elif self.type == "startendtag": - return "<%s%s />" % (self.data, attrs) - elif self.type == "endtag": - return "</%s>" % self.data - elif self.type == "charref": - return "&#%s;" % self.data - elif self.type == "entityref": - return "&%s;" % self.data - elif self.type == "data": - return self.data - elif self.type == "comment": - return "<!--%s-->" % self.data - elif self.type == "decl": - return "<!%s>" % self.data - elif self.type == "pi": - return "<?%s>" % self.data - assert False - - -def iter_until_exception(fn, exception, *args, **kwds): - while 1: - try: - yield fn(*args, **kwds) - except exception: - raise StopIteration - - -class _AbstractParser: - chunk = 1024 - compress_re = re.compile(r"\s+") - def __init__(self, fh, textify={"img": "alt", "applet": "alt"}, - encoding="ascii", entitydefs=None): - """ - fh: file-like object (only a .read() method is required) from which to - read HTML to be parsed - textify: mapping used by .get_text() and .get_compressed_text() methods - to represent opening tags as text - encoding: encoding used to encode numeric character references by - .get_text() and .get_compressed_text() ("ascii" by default) - - entitydefs: mapping like {"amp": "&", ...} containing HTML entity - definitions (a sensible default is used). This is used to unescape - entities in .get_text() (and .get_compressed_text()) and attribute - values. 
If the encoding can not represent the character, the entity - reference is left unescaped. Note that entity references (both - numeric - e.g. { or ઼ - and non-numeric - e.g. &) are - unescaped in attribute values and the return value of .get_text(), but - not in data outside of tags. Instead, entity references outside of - tags are represented as tokens. This is a bit odd, it's true :-/ - - If the element name of an opening tag matches a key in the textify - mapping then that tag is converted to text. The corresponding value is - used to specify which tag attribute to obtain the text from. textify - maps from element names to either: - - - an HTML attribute name, in which case the HTML attribute value is - used as its text value along with the element name in square - brackets (e.g. "alt text goes here[IMG]", or, if the alt attribute - were missing, just "[IMG]") - - a callable object (e.g. a function) which takes a Token and returns - the string to be used as its text value - - If textify has no key for an element name, nothing is substituted for - the opening tag. - - Public attributes: - - encoding and textify: see above - - """ - self._fh = fh - self._tokenstack = [] # FIFO - self.textify = textify - self.encoding = encoding - if entitydefs is None: - entitydefs = htmlentitydefs.name2codepoint - self._entitydefs = entitydefs - - def __iter__(self): return self - - def tags(self, *names): - return iter_until_exception(self.get_tag, NoMoreTokensError, *names) - - def tokens(self, *tokentypes): - return iter_until_exception(self.get_token, NoMoreTokensError, - *tokentypes) - - def next(self): - try: - return self.get_token() - except NoMoreTokensError: - raise StopIteration() - - def get_token(self, *tokentypes): - """Pop the next Token object from the stack of parsed tokens. - - If arguments are given, they are taken to be token types in which the - caller is interested: tokens representing other elements will be - skipped. Element names must be given in lower case. - - Raises NoMoreTokensError. - - """ - while 1: - while self._tokenstack: - token = self._tokenstack.pop(0) - if tokentypes: - if token.type in tokentypes: - return token - else: - return token - data = self._fh.read(self.chunk) - if not data: - raise NoMoreTokensError() - self.feed(data) - - def unget_token(self, token): - """Push a Token back onto the stack.""" - self._tokenstack.insert(0, token) - - def get_tag(self, *names): - """Return the next Token that represents an opening or closing tag. - - If arguments are given, they are taken to be element names in which the - caller is interested: tags representing other elements will be skipped. - Element names must be given in lower case. - - Raises NoMoreTokensError. - - """ - while 1: - tok = self.get_token() - if tok.type not in ["starttag", "endtag", "startendtag"]: - continue - if names: - if tok.data in names: - return tok - else: - return tok - - def get_text(self, endat=None): - """Get some text. - - endat: stop reading text at this tag (the tag is included in the - returned text); endtag is a tuple (type, name) where type is - "starttag", "endtag" or "startendtag", and name is the element name of - the tag (element names must be given in lower case) - - If endat is not given, .get_text() will stop at the next opening or - closing tag, or when there are no more tokens (no exception is raised). - Note that .get_text() includes the text representation (if any) of the - opening tag, but pushes the opening tag back onto the stack. 
As a - result, if you want to call .get_text() again, you need to call - .get_tag() first (unless you want an empty string returned when you - next call .get_text()). - - Entity references are translated using the value of the entitydefs - constructor argument (a mapping from names to characters like that - provided by the standard module htmlentitydefs). Named entity - references that are not in this mapping are left unchanged. - - The textify attribute is used to translate opening tags into text: see - the class docstring. - - """ - text = [] - tok = None - while 1: - try: - tok = self.get_token() - except NoMoreTokensError: - # unget last token (not the one we just failed to get) - if tok: self.unget_token(tok) - break - if tok.type == "data": - text.append(tok.data) - elif tok.type == "entityref": - t = unescape("&%s;"%tok.data, self._entitydefs, self.encoding) - text.append(t) - elif tok.type == "charref": - t = unescape_charref(tok.data, self.encoding) - text.append(t) - elif tok.type in ["starttag", "endtag", "startendtag"]: - tag_name = tok.data - if tok.type in ["starttag", "startendtag"]: - alt = self.textify.get(tag_name) - if alt is not None: - if callable(alt): - text.append(alt(tok)) - elif tok.attrs is not None: - for k, v in tok.attrs: - if k == alt: - text.append(v) - text.append("[%s]" % tag_name.upper()) - if endat is None or endat == (tok.type, tag_name): - self.unget_token(tok) - break - return "".join(text) - - def get_compressed_text(self, *args, **kwds): - """ - As .get_text(), but collapses each group of contiguous whitespace to a - single space character, and removes all initial and trailing - whitespace. - - """ - text = self.get_text(*args, **kwds) - text = text.strip() - return self.compress_re.sub(" ", text) - - def handle_startendtag(self, tag, attrs): - self._tokenstack.append(Token("startendtag", tag, attrs)) - def handle_starttag(self, tag, attrs): - self._tokenstack.append(Token("starttag", tag, attrs)) - def handle_endtag(self, tag): - self._tokenstack.append(Token("endtag", tag)) - def handle_charref(self, name): - self._tokenstack.append(Token("charref", name)) - def handle_entityref(self, name): - self._tokenstack.append(Token("entityref", name)) - def handle_data(self, data): - self._tokenstack.append(Token("data", data)) - def handle_comment(self, data): - self._tokenstack.append(Token("comment", data)) - def handle_decl(self, decl): - self._tokenstack.append(Token("decl", decl)) - def unknown_decl(self, data): - # XXX should this call self.error instead? - #self.error("unknown declaration: " + `data`) - self._tokenstack.append(Token("decl", data)) - def handle_pi(self, data): - self._tokenstack.append(Token("pi", data)) - - def unescape_attr(self, name): - return unescape(name, self._entitydefs, self.encoding) - def unescape_attrs(self, attrs): - escaped_attrs = [] - for key, val in attrs: - escaped_attrs.append((key, self.unescape_attr(val))) - return escaped_attrs - -class PullParser(_AbstractParser, HTMLParser.HTMLParser): - def __init__(self, *args, **kwds): - HTMLParser.HTMLParser.__init__(self) - _AbstractParser.__init__(self, *args, **kwds) - def unescape(self, name): - # Use the entitydefs passed into constructor, not - # HTMLParser.HTMLParser's entitydefs. 
- return self.unescape_attr(name) - -class TolerantPullParser(_AbstractParser, sgmllib.SGMLParser): - def __init__(self, *args, **kwds): - sgmllib.SGMLParser.__init__(self) - _AbstractParser.__init__(self, *args, **kwds) - def unknown_starttag(self, tag, attrs): - attrs = self.unescape_attrs(attrs) - self._tokenstack.append(Token("starttag", tag, attrs)) - def unknown_endtag(self, tag): - self._tokenstack.append(Token("endtag", tag)) - - -def _test(): - import doctest, _pullparser - return doctest.testmod(_pullparser) - -if __name__ == "__main__": - _test() diff --git a/plugin.video.alfa/lib/mechanize/_request.py b/plugin.video.alfa/lib/mechanize/_request.py deleted file mode 100755 index 79903363..00000000 --- a/plugin.video.alfa/lib/mechanize/_request.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Integration with Python standard library module urllib2: Request class. - -Copyright 2004-2006 John J Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -import logging - -import _rfc3986 -import _sockettimeout -import _urllib2_fork - -warn = logging.getLogger("mechanize").warning - - -class Request(_urllib2_fork.Request): - def __init__(self, url, data=None, headers={}, - origin_req_host=None, unverifiable=False, visit=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - # In mechanize 0.2, the interpretation of a unicode url argument will - # change: A unicode url argument will be interpreted as an IRI, and a - # bytestring as a URI. For now, we accept unicode or bytestring. We - # don't insist that the value is always a URI (specifically, must only - # contain characters which are legal), because that might break working - # code (who knows what bytes some servers want to see, especially with - # browser plugins for internationalised URIs). - if not _rfc3986.is_clean_uri(url): - warn("url argument is not a URI " - "(contains illegal characters) %r" % url) - _urllib2_fork.Request.__init__(self, url, data, headers) - self.selector = None - self.visit = visit - self.timeout = timeout - - def __str__(self): - return "<Request for %s>" % self.get_full_url() diff --git a/plugin.video.alfa/lib/mechanize/_response.py b/plugin.video.alfa/lib/mechanize/_response.py deleted file mode 100755 index e039823c..00000000 --- a/plugin.video.alfa/lib/mechanize/_response.py +++ /dev/null @@ -1,525 +0,0 @@ -"""Response classes. - -The seek_wrapper code is not used if you're using UserAgent with -.set_seekable_responses(False), or if you're using the urllib2-level interface -HTTPEquivProcessor. Class closeable_response is instantiated by some handlers -(AbstractHTTPHandler), but the closeable_response interface is only depended -upon by Browser-level code. Function upgrade_response is only used if you're -using Browser. - - -Copyright 2006 John J. Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file COPYING.txt -included with the distribution). 
- -""" - -import copy, mimetools, urllib2 -from cStringIO import StringIO - - -def len_of_seekable(file_): - # this function exists because evaluation of len(file_.getvalue()) on every - # .read() from seek_wrapper would be O(N**2) in number of .read()s - pos = file_.tell() - file_.seek(0, 2) # to end - try: - return file_.tell() - finally: - file_.seek(pos) - - -# XXX Andrew Dalke kindly sent me a similar class in response to my request on -# comp.lang.python, which I then proceeded to lose. I wrote this class -# instead, but I think he's released his code publicly since, could pinch the -# tests from it, at least... - -# For testing seek_wrapper invariant (note that -# test_urllib2.HandlerTest.test_seekable is expected to fail when this -# invariant checking is turned on). The invariant checking is done by module -# ipdc, which is available here: -# http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/436834 -## from ipdbc import ContractBase -## class seek_wrapper(ContractBase): -class seek_wrapper: - """Adds a seek method to a file object. - - This is only designed for seeking on readonly file-like objects. - - Wrapped file-like object must have a read method. The readline method is - only supported if that method is present on the wrapped object. The - readlines method is always supported. xreadlines and iteration are - supported only for Python 2.2 and above. - - Public attributes: - - wrapped: the wrapped file object - is_closed: true iff .close() has been called - - WARNING: All other attributes of the wrapped object (ie. those that are not - one of wrapped, read, readline, readlines, xreadlines, __iter__ and next) - are passed through unaltered, which may or may not make sense for your - particular file object. - - """ - # General strategy is to check that cache is full enough, then delegate to - # the cache (self.__cache, which is a cStringIO.StringIO instance). A seek - # position (self.__pos) is maintained independently of the cache, in order - # that a single cache may be shared between multiple seek_wrapper objects. - # Copying using module copy shares the cache in this way. - - def __init__(self, wrapped): - self.wrapped = wrapped - self.__read_complete_state = [False] - self.__is_closed_state = [False] - self.__have_readline = hasattr(self.wrapped, "readline") - self.__cache = StringIO() - self.__pos = 0 # seek position - - def invariant(self): - # The end of the cache is always at the same place as the end of the - # wrapped file (though the .tell() method is not required to be present - # on wrapped file). - return self.wrapped.tell() == len(self.__cache.getvalue()) - - def close(self): - self.wrapped.close() - self.is_closed = True - - def __getattr__(self, name): - if name == "is_closed": - return self.__is_closed_state[0] - elif name == "read_complete": - return self.__read_complete_state[0] - - wrapped = self.__dict__.get("wrapped") - if wrapped: - return getattr(wrapped, name) - - return getattr(self.__class__, name) - - def __setattr__(self, name, value): - if name == "is_closed": - self.__is_closed_state[0] = bool(value) - elif name == "read_complete": - if not self.is_closed: - self.__read_complete_state[0] = bool(value) - else: - self.__dict__[name] = value - - def seek(self, offset, whence=0): - assert whence in [0,1,2] - - # how much data, if any, do we need to read? 
- if whence == 2: # 2: relative to end of *wrapped* file - if offset < 0: raise ValueError("negative seek offset") - # since we don't know yet where the end of that file is, we must - # read everything - to_read = None - else: - if whence == 0: # 0: absolute - if offset < 0: raise ValueError("negative seek offset") - dest = offset - else: # 1: relative to current position - pos = self.__pos - if pos < offset: - raise ValueError("seek to before start of file") - dest = pos + offset - end = len_of_seekable(self.__cache) - to_read = dest - end - if to_read < 0: - to_read = 0 - - if to_read != 0: - self.__cache.seek(0, 2) - if to_read is None: - assert whence == 2 - self.__cache.write(self.wrapped.read()) - self.read_complete = True - self.__pos = self.__cache.tell() - offset - else: - data = self.wrapped.read(to_read) - if not data: - self.read_complete = True - else: - self.__cache.write(data) - # Don't raise an exception even if we've seek()ed past the end - # of .wrapped, since fseek() doesn't complain in that case. - # Also like fseek(), pretend we have seek()ed past the end, - # i.e. not: - #self.__pos = self.__cache.tell() - # but rather: - self.__pos = dest - else: - self.__pos = dest - - def tell(self): - return self.__pos - - def __copy__(self): - cpy = self.__class__(self.wrapped) - cpy.__cache = self.__cache - cpy.__read_complete_state = self.__read_complete_state - cpy.__is_closed_state = self.__is_closed_state - return cpy - - def get_data(self): - pos = self.__pos - try: - self.seek(0) - return self.read(-1) - finally: - self.__pos = pos - - def read(self, size=-1): - pos = self.__pos - end = len_of_seekable(self.__cache) - available = end - pos - - # enough data already cached? - if size <= available and size != -1: - self.__cache.seek(pos) - self.__pos = pos+size - return self.__cache.read(size) - - # no, so read sufficient data from wrapped file and cache it - self.__cache.seek(0, 2) - if size == -1: - self.__cache.write(self.wrapped.read()) - self.read_complete = True - else: - to_read = size - available - assert to_read > 0 - data = self.wrapped.read(to_read) - if not data: - self.read_complete = True - else: - self.__cache.write(data) - self.__cache.seek(pos) - - data = self.__cache.read(size) - self.__pos = self.__cache.tell() - assert self.__pos == pos + len(data) - return data - - def readline(self, size=-1): - if not self.__have_readline: - raise NotImplementedError("no readline method on wrapped object") - - # line we're about to read might not be complete in the cache, so - # read another line first - pos = self.__pos - self.__cache.seek(0, 2) - data = self.wrapped.readline() - if not data: - self.read_complete = True - else: - self.__cache.write(data) - self.__cache.seek(pos) - - data = self.__cache.readline() - if size != -1: - r = data[:size] - self.__pos = pos+size - else: - r = data - self.__pos = pos+len(data) - return r - - def readlines(self, sizehint=-1): - pos = self.__pos - self.__cache.seek(0, 2) - self.__cache.write(self.wrapped.read()) - self.read_complete = True - self.__cache.seek(pos) - data = self.__cache.readlines(sizehint) - self.__pos = self.__cache.tell() - return data - - def __iter__(self): return self - def next(self): - line = self.readline() - if line == "": raise StopIteration - return line - - xreadlines = __iter__ - - def __repr__(self): - return ("<%s at %s whose wrapped object = %r>" % - (self.__class__.__name__, hex(abs(id(self))), self.wrapped)) - - -class response_seek_wrapper(seek_wrapper): - - """ - Supports copying response 
objects and setting response body data. - - """ - - def __init__(self, wrapped): - seek_wrapper.__init__(self, wrapped) - self._headers = self.wrapped.info() - - def __copy__(self): - cpy = seek_wrapper.__copy__(self) - # copy headers from delegate - cpy._headers = copy.copy(self.info()) - return cpy - - # Note that .info() and .geturl() (the only two urllib2 response methods - # that are not implemented by seek_wrapper) must be here explicitly rather - # than by seek_wrapper's __getattr__ delegation) so that the nasty - # dynamically-created HTTPError classes in get_seek_wrapper_class() get the - # wrapped object's implementation, and not HTTPError's. - - def info(self): - return self._headers - - def geturl(self): - return self.wrapped.geturl() - - def set_data(self, data): - self.seek(0) - self.read() - self.close() - cache = self._seek_wrapper__cache = StringIO() - cache.write(data) - self.seek(0) - - -class eoffile: - # file-like object that always claims to be at end-of-file... - def read(self, size=-1): return "" - def readline(self, size=-1): return "" - def __iter__(self): return self - def next(self): return "" - def close(self): pass - -class eofresponse(eoffile): - def __init__(self, url, headers, code, msg): - self._url = url - self._headers = headers - self.code = code - self.msg = msg - def geturl(self): return self._url - def info(self): return self._headers - - -class closeable_response: - """Avoids unnecessarily clobbering urllib.addinfourl methods on .close(). - - Only supports responses returned by mechanize.HTTPHandler. - - After .close(), the following methods are supported: - - .read() - .readline() - .info() - .geturl() - .__iter__() - .next() - .close() - - and the following attributes are supported: - - .code - .msg - - Also supports pickling (but the stdlib currently does something to prevent - it: http://python.org/sf/1144636). - - """ - # presence of this attr indicates is useable after .close() - closeable_response = None - - def __init__(self, fp, headers, url, code, msg): - self._set_fp(fp) - self._headers = headers - self._url = url - self.code = code - self.msg = msg - - def _set_fp(self, fp): - self.fp = fp - self.read = self.fp.read - self.readline = self.fp.readline - if hasattr(self.fp, "readlines"): self.readlines = self.fp.readlines - if hasattr(self.fp, "fileno"): - self.fileno = self.fp.fileno - else: - self.fileno = lambda: None - self.__iter__ = self.fp.__iter__ - self.next = self.fp.next - - def __repr__(self): - return '<%s at %s whose fp = %r>' % ( - self.__class__.__name__, hex(abs(id(self))), self.fp) - - def info(self): - return self._headers - - def geturl(self): - return self._url - - def close(self): - wrapped = self.fp - wrapped.close() - new_wrapped = eofresponse( - self._url, self._headers, self.code, self.msg) - self._set_fp(new_wrapped) - - def __getstate__(self): - # There are three obvious options here: - # 1. truncate - # 2. read to end - # 3. close socket, pickle state including read position, then open - # again on unpickle and use Range header - # XXXX um, 4. refuse to pickle unless .close()d. This is better, - # actually ("errors should never pass silently"). Pickling doesn't - # work anyway ATM, because of http://python.org/sf/1144636 so fix - # this later - - # 2 breaks pickle protocol, because one expects the original object - # to be left unscathed by pickling. 3 is too complicated and - # surprising (and too much work ;-) to happen in a sane __getstate__. - # So we do 1. 
- - state = self.__dict__.copy() - new_wrapped = eofresponse( - self._url, self._headers, self.code, self.msg) - state["wrapped"] = new_wrapped - return state - -def test_response(data='test data', headers=[], - url="http://example.com/", code=200, msg="OK"): - return make_response(data, headers, url, code, msg) - -def test_html_response(data='test data', headers=[], - url="http://example.com/", code=200, msg="OK"): - headers += [("Content-type", "text/html")] - return make_response(data, headers, url, code, msg) - -def make_response(data, headers, url, code, msg): - """Convenient factory for objects implementing response interface. - - data: string containing response body data - headers: sequence of (name, value) pairs - url: URL of response - code: integer response code (e.g. 200) - msg: string response code message (e.g. "OK") - - """ - mime_headers = make_headers(headers) - r = closeable_response(StringIO(data), mime_headers, url, code, msg) - return response_seek_wrapper(r) - - -def make_headers(headers): - """ - headers: sequence of (name, value) pairs - """ - hdr_text = [] - for name_value in headers: - hdr_text.append("%s: %s" % name_value) - return mimetools.Message(StringIO("\n".join(hdr_text))) - - -# Rest of this module is especially horrible, but needed, at least until fork -# urllib2. Even then, may want to preseve urllib2 compatibility. - -def get_seek_wrapper_class(response): - # in order to wrap response objects that are also exceptions, we must - # dynamically subclass the exception :-((( - if (isinstance(response, urllib2.HTTPError) and - not hasattr(response, "seek")): - if response.__class__.__module__ == "__builtin__": - exc_class_name = response.__class__.__name__ - else: - exc_class_name = "%s.%s" % ( - response.__class__.__module__, response.__class__.__name__) - - class httperror_seek_wrapper(response_seek_wrapper, response.__class__): - # this only derives from HTTPError in order to be a subclass -- - # the HTTPError behaviour comes from delegation - - _exc_class_name = exc_class_name - - def __init__(self, wrapped): - response_seek_wrapper.__init__(self, wrapped) - # be compatible with undocumented HTTPError attributes :-( - self.hdrs = wrapped.info() - self.filename = wrapped.geturl() - - def __repr__(self): - return ( - "<%s (%s instance) at %s " - "whose wrapped object = %r>" % ( - self.__class__.__name__, self._exc_class_name, - hex(abs(id(self))), self.wrapped) - ) - wrapper_class = httperror_seek_wrapper - else: - wrapper_class = response_seek_wrapper - return wrapper_class - -def seek_wrapped_response(response): - """Return a copy of response that supports seekable response interface. - - Accepts responses from both mechanize and urllib2 handlers. - - Copes with both ordinary response instances and HTTPError instances (which - can't be simply wrapped due to the requirement of preserving the exception - base class). - """ - if not hasattr(response, "seek"): - wrapper_class = get_seek_wrapper_class(response) - response = wrapper_class(response) - assert hasattr(response, "get_data") - return response - -def upgrade_response(response): - """Return a copy of response that supports Browser response interface. - - Browser response interface is that of "seekable responses" - (response_seek_wrapper), plus the requirement that responses must be - useable after .close() (closeable_response). - - Accepts responses from both mechanize and urllib2 handlers. 
- - Copes with both ordinary response instances and HTTPError instances (which - can't be simply wrapped due to the requirement of preserving the exception - base class). - """ - wrapper_class = get_seek_wrapper_class(response) - if hasattr(response, "closeable_response"): - if not hasattr(response, "seek"): - response = wrapper_class(response) - assert hasattr(response, "get_data") - return copy.copy(response) - - # a urllib2 handler constructed the response, i.e. the response is an - # urllib.addinfourl or a urllib2.HTTPError, instead of a - # _Util.closeable_response as returned by e.g. mechanize.HTTPHandler - try: - code = response.code - except AttributeError: - code = None - try: - msg = response.msg - except AttributeError: - msg = None - - # may have already-.read() data from .seek() cache - data = None - get_data = getattr(response, "get_data", None) - if get_data: - data = get_data() - - response = closeable_response( - response.fp, response.info(), response.geturl(), code, msg) - response = wrapper_class(response) - if data: - response.set_data(data) - return response diff --git a/plugin.video.alfa/lib/mechanize/_rfc3986.py b/plugin.video.alfa/lib/mechanize/_rfc3986.py deleted file mode 100755 index 0ba56fef..00000000 --- a/plugin.video.alfa/lib/mechanize/_rfc3986.py +++ /dev/null @@ -1,245 +0,0 @@ -"""RFC 3986 URI parsing and relative reference resolution / absolutization. - -(aka splitting and joining) - -Copyright 2006 John J. Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it under -the terms of the BSD or ZPL 2.1 licenses (see the file COPYING.txt -included with the distribution). - -""" - -# XXX Wow, this is ugly. Overly-direct translation of the RFC ATM. - -import re, urllib - -## def chr_range(a, b): -## return "".join(map(chr, range(ord(a), ord(b)+1))) - -## UNRESERVED_URI_CHARS = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ" -## "abcdefghijklmnopqrstuvwxyz" -## "0123456789" -## "-_.~") -## RESERVED_URI_CHARS = "!*'();:@&=+$,/?#[]" -## URI_CHARS = RESERVED_URI_CHARS+UNRESERVED_URI_CHARS+'%' -# this re matches any character that's not in URI_CHARS -BAD_URI_CHARS_RE = re.compile("[^A-Za-z0-9\-_.~!*'();:@&=+$,/?%#[\]]") - - -def clean_url(url, encoding): - # percent-encode illegal URI characters - # Trying to come up with test cases for this gave me a headache, revisit - # when do switch to unicode. - # Somebody else's comments (lost the attribution): -## - IE will return you the url in the encoding you send it -## - Mozilla/Firefox will send you latin-1 if there's no non latin-1 -## characters in your link. It will send you utf-8 however if there are... 
- if type(url) == type(""): - url = url.decode(encoding, "replace") - url = url.strip() - # for second param to urllib.quote(), we want URI_CHARS, minus the - # 'always_safe' characters that urllib.quote() never percent-encodes - return urllib.quote(url.encode(encoding), "!*'();:@&=+$,/?%#[]~") - -def is_clean_uri(uri): - """ - >>> is_clean_uri("ABC!") - True - >>> is_clean_uri(u"ABC!") - True - >>> is_clean_uri("ABC|") - False - >>> is_clean_uri(u"ABC|") - False - >>> is_clean_uri("http://example.com/0") - True - >>> is_clean_uri(u"http://example.com/0") - True - """ - # note module re treats bytestrings as through they were decoded as latin-1 - # so this function accepts both unicode and bytestrings - return not bool(BAD_URI_CHARS_RE.search(uri)) - - -SPLIT_MATCH = re.compile( - r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?").match -def urlsplit(absolute_uri): - """Return scheme, authority, path, query, fragment.""" - match = SPLIT_MATCH(absolute_uri) - if match: - g = match.groups() - return g[1], g[3], g[4], g[6], g[8] - -def urlunsplit(parts): - scheme, authority, path, query, fragment = parts - r = [] - append = r.append - if scheme is not None: - append(scheme) - append(":") - if authority is not None: - append("//") - append(authority) - append(path) - if query is not None: - append("?") - append(query) - if fragment is not None: - append("#") - append(fragment) - return "".join(r) - -def urljoin(base_uri, uri_reference): - """Join a base URI with a URI reference and return the resulting URI. - - See RFC 3986. - """ - return urlunsplit(urljoin_parts(urlsplit(base_uri), - urlsplit(uri_reference))) - -# oops, this doesn't do the same thing as the literal translation -# from the RFC below -## import posixpath -## def urljoin_parts(base_parts, reference_parts): -## scheme, authority, path, query, fragment = base_parts -## rscheme, rauthority, rpath, rquery, rfragment = reference_parts - -## # compute target URI path -## if rpath == "": -## tpath = path -## else: -## tpath = rpath -## if not tpath.startswith("/"): -## tpath = merge(authority, path, tpath) -## tpath = posixpath.normpath(tpath) - -## if rscheme is not None: -## return (rscheme, rauthority, tpath, rquery, rfragment) -## elif rauthority is not None: -## return (scheme, rauthority, tpath, rquery, rfragment) -## elif rpath == "": -## if rquery is not None: -## tquery = rquery -## else: -## tquery = query -## return (scheme, authority, tpath, tquery, rfragment) -## else: -## return (scheme, authority, tpath, rquery, rfragment) - -def urljoin_parts(base_parts, reference_parts): - scheme, authority, path, query, fragment = base_parts - rscheme, rauthority, rpath, rquery, rfragment = reference_parts - - if rscheme == scheme: - rscheme = None - - if rscheme is not None: - tscheme, tauthority, tpath, tquery = ( - rscheme, rauthority, remove_dot_segments(rpath), rquery) - else: - if rauthority is not None: - tauthority, tpath, tquery = ( - rauthority, remove_dot_segments(rpath), rquery) - else: - if rpath == "": - tpath = path - if rquery is not None: - tquery = rquery - else: - tquery = query - else: - if rpath.startswith("/"): - tpath = remove_dot_segments(rpath) - else: - tpath = merge(authority, path, rpath) - tpath = remove_dot_segments(tpath) - tquery = rquery - tauthority = authority - tscheme = scheme - tfragment = rfragment - return (tscheme, tauthority, tpath, tquery, tfragment) - -# um, something *vaguely* like this is what I want, but I have to generate -# lots of test cases first, if only to understand 
what it is that -# remove_dot_segments really does... -## def remove_dot_segments(path): -## if path == '': -## return '' -## comps = path.split('/') -## new_comps = [] -## for comp in comps: -## if comp in ['.', '']: -## if not new_comps or new_comps[-1]: -## new_comps.append('') -## continue -## if comp != '..': -## new_comps.append(comp) -## elif new_comps: -## new_comps.pop() -## return '/'.join(new_comps) - - -def remove_dot_segments(path): - r = [] - while path: - # A - if path.startswith("../"): - path = path[3:] - continue - if path.startswith("./"): - path = path[2:] - continue - # B - if path.startswith("/./"): - path = path[2:] - continue - if path == "/.": - path = "/" - continue - # C - if path.startswith("/../"): - path = path[3:] - if r: - r.pop() - continue - if path == "/..": - path = "/" - if r: - r.pop() - continue - # D - if path == ".": - path = path[1:] - continue - if path == "..": - path = path[2:] - continue - # E - start = 0 - if path.startswith("/"): - start = 1 - ii = path.find("/", start) - if ii < 0: - ii = None - r.append(path[:ii]) - if ii is None: - break - path = path[ii:] - return "".join(r) - -def merge(base_authority, base_path, ref_path): - # XXXX Oddly, the sample Perl implementation of this by Roy Fielding - # doesn't even take base_authority as a parameter, despite the wording in - # the RFC suggesting otherwise. Perhaps I'm missing some obvious identity. - #if base_authority is not None and base_path == "": - if base_path == "": - return "/" + ref_path - ii = base_path.rfind("/") - if ii >= 0: - return base_path[:ii+1] + ref_path - return ref_path - -if __name__ == "__main__": - import doctest - doctest.testmod() diff --git a/plugin.video.alfa/lib/mechanize/_sgmllib_copy.py b/plugin.video.alfa/lib/mechanize/_sgmllib_copy.py deleted file mode 100755 index b2ad1f77..00000000 --- a/plugin.video.alfa/lib/mechanize/_sgmllib_copy.py +++ /dev/null @@ -1,559 +0,0 @@ -# Taken from Python 2.6.4 and regexp module constants modified -"""A parser for SGML, using the derived class as a static DTD.""" - -# XXX This only supports those SGML features used by HTML. - -# XXX There should be a way to distinguish between PCDATA (parsed -# character data -- the normal case), RCDATA (replaceable character -# data -- only char and entity references and end tags are special) -# and CDATA (character data -- only end tags are special). RCDATA is -# not supported at all. 
- - -# from warnings import warnpy3k -# warnpy3k("the sgmllib module has been removed in Python 3.0", -# stacklevel=2) -# del warnpy3k - -import markupbase -import re - -__all__ = ["SGMLParser", "SGMLParseError"] - -# Regular expressions used for parsing - -interesting = re.compile('[&<]') -incomplete = re.compile('&([a-zA-Z][a-zA-Z0-9]*|#[0-9]*)?|' - '<([a-zA-Z][^<>]*|' - '/([a-zA-Z][^<>]*)?|' - '![^<>]*)?') - -entityref = re.compile('&([a-zA-Z][-.a-zA-Z0-9]*)[^a-zA-Z0-9]') -# hack to fix http://bugs.python.org/issue803422 -# charref = re.compile('&#([0-9]+)[^0-9]') -charref = re.compile("&#(x?[0-9a-fA-F]+)[^0-9a-fA-F]") - -starttagopen = re.compile('<[>a-zA-Z]') -shorttagopen = re.compile('<[a-zA-Z][-.a-zA-Z0-9]*/') -shorttag = re.compile('<([a-zA-Z][-.a-zA-Z0-9]*)/([^/]*)/') -piclose = re.compile('>') -endbracket = re.compile('[<>]') -# hack moved from _beautifulsoup.py (bundled BeautifulSoup version 2) -#This code makes Beautiful Soup able to parse XML with namespaces -# tagfind = re.compile('[a-zA-Z][-_.a-zA-Z0-9]*') -tagfind = re.compile('[a-zA-Z][-_.:a-zA-Z0-9]*') -attrfind = re.compile( - r'\s*([a-zA-Z_][-:.a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[][\-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~\'"@]*))?') - - -class SGMLParseError(RuntimeError): - """Exception raised for all parse errors.""" - pass - - -# SGML parser base class -- find tags and call handler functions. -# Usage: p = SGMLParser(); p.feed(data); ...; p.close(). -# The dtd is defined by deriving a class which defines methods -# with special names to handle tags: start_foo and end_foo to handle -# <foo> and </foo>, respectively, or do_foo to handle <foo> by itself. -# (Tags are converted to lower case for this purpose.) The data -# between tags is passed to the parser by calling self.handle_data() -# with some data as argument (the data may be split up in arbitrary -# chunks). Entity references are passed by calling -# self.handle_entityref() with the entity reference as argument. - -class SGMLParser(markupbase.ParserBase): - # Definition of entities -- derived classes may override - entity_or_charref = re.compile('&(?:' - '([a-zA-Z][-.a-zA-Z0-9]*)|#([0-9]+)' - ')(;?)') - - def __init__(self, verbose=0): - """Initialize and reset this instance.""" - self.verbose = verbose - self.reset() - - def reset(self): - """Reset this instance. Loses all unprocessed data.""" - self.__starttag_text = None - self.rawdata = '' - self.stack = [] - self.lasttag = '???' - self.nomoretags = 0 - self.literal = 0 - markupbase.ParserBase.reset(self) - - def setnomoretags(self): - """Enter literal mode (CDATA) till EOF. - - Intended for derived classes only. - """ - self.nomoretags = self.literal = 1 - - def setliteral(self, *args): - """Enter literal mode (CDATA). - - Intended for derived classes only. - """ - self.literal = 1 - - def feed(self, data): - """Feed some data to the parser. - - Call this as often as you want, with as little or as much text - as you want (may include '\n'). (This just saves the text, - all the processing is done by goahead().) - """ - - self.rawdata = self.rawdata + data - self.goahead(0) - - def close(self): - """Handle the remaining data.""" - self.goahead(1) - - def error(self, message): - raise SGMLParseError(message) - - # Internal -- handle data as far as reasonable. May leave state - # and data to be processed by a subsequent call. If 'end' is - # true, force handling all data as if followed by EOF marker. 
- def goahead(self, end): - rawdata = self.rawdata - i = 0 - n = len(rawdata) - while i < n: - if self.nomoretags: - self.handle_data(rawdata[i:n]) - i = n - break - match = interesting.search(rawdata, i) - if match: j = match.start() - else: j = n - if i < j: - self.handle_data(rawdata[i:j]) - i = j - if i == n: break - if rawdata[i] == '<': - if starttagopen.match(rawdata, i): - if self.literal: - self.handle_data(rawdata[i]) - i = i+1 - continue - k = self.parse_starttag(i) - if k < 0: break - i = k - continue - if rawdata.startswith("</", i): - k = self.parse_endtag(i) - if k < 0: break - i = k - self.literal = 0 - continue - if self.literal: - if n > (i + 1): - self.handle_data("<") - i = i+1 - else: - # incomplete - break - continue - if rawdata.startswith("<!--", i): - # Strictly speaking, a comment is --.*-- - # within a declaration tag <!...>. - # This should be removed, - # and comments handled only in parse_declaration. - k = self.parse_comment(i) - if k < 0: break - i = k - continue - if rawdata.startswith("<?", i): - k = self.parse_pi(i) - if k < 0: break - i = i+k - continue - if rawdata.startswith("<!", i): - # This is some sort of declaration; in "HTML as - # deployed," this should only be the document type - # declaration ("<!DOCTYPE html...>"). - k = self.parse_declaration(i) - if k < 0: break - i = k - continue - elif rawdata[i] == '&': - if self.literal: - self.handle_data(rawdata[i]) - i = i+1 - continue - match = charref.match(rawdata, i) - if match: - name = match.group(1) - self.handle_charref(name) - i = match.end(0) - if rawdata[i-1] != ';': i = i-1 - continue - match = entityref.match(rawdata, i) - if match: - name = match.group(1) - self.handle_entityref(name) - i = match.end(0) - if rawdata[i-1] != ';': i = i-1 - continue - else: - self.error('neither < nor & ??') - # We get here only if incomplete matches but - # nothing else - match = incomplete.match(rawdata, i) - if not match: - self.handle_data(rawdata[i]) - i = i+1 - continue - j = match.end(0) - if j == n: - break # Really incomplete - self.handle_data(rawdata[i:j]) - i = j - # end while - if end and i < n: - self.handle_data(rawdata[i:n]) - i = n - self.rawdata = rawdata[i:] - # XXX if end: check for empty stack - - # Extensions for the DOCTYPE scanner: - _decl_otherchars = '=' - - # Internal -- parse processing instr, return length or -1 if not terminated - def parse_pi(self, i): - rawdata = self.rawdata - if rawdata[i:i+2] != '<?': - self.error('unexpected call to parse_pi()') - match = piclose.search(rawdata, i+2) - if not match: - return -1 - j = match.start(0) - self.handle_pi(rawdata[i+2: j]) - j = match.end(0) - return j-i - - def get_starttag_text(self): - return self.__starttag_text - - # Internal -- handle starttag, return length or -1 if not terminated - def parse_starttag(self, i): - self.__starttag_text = None - start_pos = i - rawdata = self.rawdata - if shorttagopen.match(rawdata, i): - # SGML shorthand: <tag/data/ == <tag>data</tag> - # XXX Can data contain &... (entity or char refs)? - # XXX Can data contain < or > (tag characters)? - # XXX Can there be whitespace before the first /? 
- match = shorttag.match(rawdata, i) - if not match: - return -1 - tag, data = match.group(1, 2) - self.__starttag_text = '<%s/' % tag - tag = tag.lower() - k = match.end(0) - self.finish_shorttag(tag, data) - self.__starttag_text = rawdata[start_pos:match.end(1) + 1] - return k - # XXX The following should skip matching quotes (' or ") - # As a shortcut way to exit, this isn't so bad, but shouldn't - # be used to locate the actual end of the start tag since the - # < or > characters may be embedded in an attribute value. - match = endbracket.search(rawdata, i+1) - if not match: - return -1 - j = match.start(0) - # Now parse the data between i+1 and j into a tag and attrs - attrs = [] - if rawdata[i:i+2] == '<>': - # SGML shorthand: <> == <last open tag seen> - k = j - tag = self.lasttag - else: - match = tagfind.match(rawdata, i+1) - if not match: - self.error('unexpected call to parse_starttag') - k = match.end(0) - tag = rawdata[i+1:k].lower() - self.lasttag = tag - while k < j: - match = attrfind.match(rawdata, k) - if not match: break - attrname, rest, attrvalue = match.group(1, 2, 3) - if not rest: - attrvalue = attrname - else: - if (attrvalue[:1] == "'" == attrvalue[-1:] or - attrvalue[:1] == '"' == attrvalue[-1:]): - # strip quotes - attrvalue = attrvalue[1:-1] - attrvalue = self.entity_or_charref.sub( - self._convert_ref, attrvalue) - attrs.append((attrname.lower(), attrvalue)) - k = match.end(0) - if rawdata[j] == '>': - j = j+1 - self.__starttag_text = rawdata[start_pos:j] - self.finish_starttag(tag, attrs) - return j - - # Internal -- convert entity or character reference - def _convert_ref(self, match): - if match.group(2): - return self.convert_charref(match.group(2)) or \ - '&#%s%s' % match.groups()[1:] - elif match.group(3): - return self.convert_entityref(match.group(1)) or \ - '&%s;' % match.group(1) - else: - return '&%s' % match.group(1) - - # Internal -- parse endtag - def parse_endtag(self, i): - rawdata = self.rawdata - match = endbracket.search(rawdata, i+1) - if not match: - return -1 - j = match.start(0) - tag = rawdata[i+2:j].strip().lower() - if rawdata[j] == '>': - j = j+1 - self.finish_endtag(tag) - return j - - # Internal -- finish parsing of <tag/data/ (same as <tag>data</tag>) - def finish_shorttag(self, tag, data): - self.finish_starttag(tag, []) - self.handle_data(data) - self.finish_endtag(tag) - - # Internal -- finish processing of start tag - # Return -1 for unknown tag, 0 for open-only tag, 1 for balanced tag - def finish_starttag(self, tag, attrs): - try: - method = getattr(self, 'start_' + tag) - except AttributeError: - try: - method = getattr(self, 'do_' + tag) - except AttributeError: - self.unknown_starttag(tag, attrs) - return -1 - else: - self.handle_starttag(tag, method, attrs) - return 0 - else: - self.stack.append(tag) - self.handle_starttag(tag, method, attrs) - return 1 - - # Internal -- finish processing of end tag - def finish_endtag(self, tag): - if not tag: - found = len(self.stack) - 1 - if found < 0: - self.unknown_endtag(tag) - return - else: - if tag not in self.stack: - try: - method = getattr(self, 'end_' + tag) - except AttributeError: - self.unknown_endtag(tag) - else: - self.report_unbalanced(tag) - return - found = len(self.stack) - for i in range(found): - if self.stack[i] == tag: found = i - while len(self.stack) > found: - tag = self.stack[-1] - try: - method = getattr(self, 'end_' + tag) - except AttributeError: - method = None - if method: - self.handle_endtag(tag, method) - else: - self.unknown_endtag(tag) - del 
self.stack[-1] - - # Overridable -- handle start tag - def handle_starttag(self, tag, method, attrs): - method(attrs) - - # Overridable -- handle end tag - def handle_endtag(self, tag, method): - method() - - # Example -- report an unbalanced </...> tag. - def report_unbalanced(self, tag): - if self.verbose: - print '*** Unbalanced </' + tag + '>' - print '*** Stack:', self.stack - - def convert_charref(self, name): - """Convert character reference, may be overridden.""" - try: - n = int(name) - except ValueError: - return - if not 0 <= n <= 127: - return - return self.convert_codepoint(n) - - def convert_codepoint(self, codepoint): - return chr(codepoint) - - def handle_charref(self, name): - """Handle character reference, no need to override.""" - replacement = self.convert_charref(name) - if replacement is None: - self.unknown_charref(name) - else: - self.handle_data(replacement) - - # Definition of entities -- derived classes may override - entitydefs = \ - {'lt': '<', 'gt': '>', 'amp': '&', 'quot': '"', 'apos': '\''} - - def convert_entityref(self, name): - """Convert entity references. - - As an alternative to overriding this method; one can tailor the - results by setting up the self.entitydefs mapping appropriately. - """ - table = self.entitydefs - if name in table: - return table[name] - else: - return - - def handle_entityref(self, name): - """Handle entity references, no need to override.""" - replacement = self.convert_entityref(name) - if replacement is None: - self.unknown_entityref(name) - else: - self.handle_data(replacement) - - # Example -- handle data, should be overridden - def handle_data(self, data): - pass - - # Example -- handle comment, could be overridden - def handle_comment(self, data): - pass - - # Example -- handle declaration, could be overridden - def handle_decl(self, decl): - pass - - # Example -- handle processing instruction, could be overridden - def handle_pi(self, data): - pass - - # To be overridden -- handlers for unknown objects - def unknown_starttag(self, tag, attrs): pass - def unknown_endtag(self, tag): pass - def unknown_charref(self, ref): pass - def unknown_entityref(self, ref): pass - - -class TestSGMLParser(SGMLParser): - - def __init__(self, verbose=0): - self.testdata = "" - SGMLParser.__init__(self, verbose) - - def handle_data(self, data): - self.testdata = self.testdata + data - if len(repr(self.testdata)) >= 70: - self.flush() - - def flush(self): - data = self.testdata - if data: - self.testdata = "" - print 'data:', repr(data) - - def handle_comment(self, data): - self.flush() - r = repr(data) - if len(r) > 68: - r = r[:32] + '...' 
+ r[-32:] - print 'comment:', r - - def unknown_starttag(self, tag, attrs): - self.flush() - if not attrs: - print 'start tag: <' + tag + '>' - else: - print 'start tag: <' + tag, - for name, value in attrs: - print name + '=' + '"' + value + '"', - print '>' - - def unknown_endtag(self, tag): - self.flush() - print 'end tag: </' + tag + '>' - - def unknown_entityref(self, ref): - self.flush() - print '*** unknown entity ref: &' + ref + ';' - - def unknown_charref(self, ref): - self.flush() - print '*** unknown char ref: &#' + ref + ';' - - def unknown_decl(self, data): - self.flush() - print '*** unknown decl: [' + data + ']' - - def close(self): - SGMLParser.close(self) - self.flush() - - -def test(args = None): - import sys - - if args is None: - args = sys.argv[1:] - - if args and args[0] == '-s': - args = args[1:] - klass = SGMLParser - else: - klass = TestSGMLParser - - if args: - file = args[0] - else: - file = 'test.html' - - if file == '-': - f = sys.stdin - else: - try: - f = open(file, 'r') - except IOError, msg: - print file, ":", msg - sys.exit(1) - - data = f.read() - if f is not sys.stdin: - f.close() - - x = klass() - for c in data: - x.feed(c) - x.close() - - -if __name__ == '__main__': - test() diff --git a/plugin.video.alfa/lib/mechanize/_sockettimeout.py b/plugin.video.alfa/lib/mechanize/_sockettimeout.py deleted file mode 100755 index 20988408..00000000 --- a/plugin.video.alfa/lib/mechanize/_sockettimeout.py +++ /dev/null @@ -1,6 +0,0 @@ -import socket - -try: - _GLOBAL_DEFAULT_TIMEOUT = socket._GLOBAL_DEFAULT_TIMEOUT -except AttributeError: - _GLOBAL_DEFAULT_TIMEOUT = object() diff --git a/plugin.video.alfa/lib/mechanize/_testcase.py b/plugin.video.alfa/lib/mechanize/_testcase.py deleted file mode 100755 index 905239d5..00000000 --- a/plugin.video.alfa/lib/mechanize/_testcase.py +++ /dev/null @@ -1,162 +0,0 @@ -import os -import shutil -import subprocess -import tempfile -import unittest - - -class SetupStack(object): - - def __init__(self): - self._on_teardown = [] - - def add_teardown(self, teardown): - self._on_teardown.append(teardown) - - def tear_down(self): - for func in reversed(self._on_teardown): - func() - - -class TearDownConvenience(object): - - def __init__(self, setup_stack=None): - self._own_setup_stack = setup_stack is None - if setup_stack is None: - setup_stack = SetupStack() - self._setup_stack = setup_stack - - # only call this convenience method if no setup_stack was supplied to c'tor - def tear_down(self): - assert self._own_setup_stack - self._setup_stack.tear_down() - - -class TempDirMaker(TearDownConvenience): - - def make_temp_dir(self, dir_=None): - temp_dir = tempfile.mkdtemp(prefix="tmp-%s-" % self.__class__.__name__, - dir=dir_) - def tear_down(): - shutil.rmtree(temp_dir) - self._setup_stack.add_teardown(tear_down) - return temp_dir - - -class MonkeyPatcher(TearDownConvenience): - - Unset = object() - - def monkey_patch(self, obj, name, value): - orig_value = getattr(obj, name) - setattr(obj, name, value) - def reverse_patch(): - setattr(obj, name, orig_value) - self._setup_stack.add_teardown(reverse_patch) - - def _set_environ(self, env, name, value): - if value is self.Unset: - try: - del env[name] - except KeyError: - pass - else: - env[name] = value - - def monkey_patch_environ(self, name, value, env=os.environ): - orig_value = env.get(name, self.Unset) - self._set_environ(env, name, value) - def reverse_patch(): - self._set_environ(env, name, orig_value) - self._setup_stack.add_teardown(reverse_patch) - - -class 
FixtureFactory(object): - - def __init__(self): - self._setup_stack = SetupStack() - self._context_managers = {} - self._fixtures = {} - - def register_context_manager(self, name, context_manager): - self._context_managers[name] = context_manager - - def get_fixture(self, name, add_teardown): - context_manager = self._context_managers[name] - fixture = context_manager.__enter__() - add_teardown(lambda: context_manager.__exit__(None, None, None)) - return fixture - - def get_cached_fixture(self, name): - fixture = self._fixtures.get(name) - if fixture is None: - fixture = self.get_fixture(name, self._setup_stack.add_teardown) - self._fixtures[name] = fixture - return fixture - - def tear_down(self): - self._setup_stack.tear_down() - - -class TestCase(unittest.TestCase): - - def setUp(self): - self._setup_stack = SetupStack() - self._monkey_patcher = MonkeyPatcher(self._setup_stack) - - def tearDown(self): - self._setup_stack.tear_down() - - def register_context_manager(self, name, context_manager): - return self.fixture_factory.register_context_manager( - name, context_manager) - - def get_fixture(self, name): - return self.fixture_factory.get_fixture(name, self.add_teardown) - - def get_cached_fixture(self, name): - return self.fixture_factory.get_cached_fixture(name) - - def add_teardown(self, *args, **kwds): - self._setup_stack.add_teardown(*args, **kwds) - - def make_temp_dir(self, *args, **kwds): - return TempDirMaker(self._setup_stack).make_temp_dir(*args, **kwds) - - def monkey_patch(self, *args, **kwds): - return self._monkey_patcher.monkey_patch(*args, **kwds) - - def monkey_patch_environ(self, *args, **kwds): - return self._monkey_patcher.monkey_patch_environ(*args, **kwds) - - def assert_contains(self, container, containee): - self.assertTrue(containee in container, "%r not in %r" % - (containee, container)) - - def assert_less_than(self, got, expected): - self.assertTrue(got < expected, "%r >= %r" % - (got, expected)) - - -# http://lackingrhoticity.blogspot.com/2009/01/testing-using-golden-files-in-python.html - -class GoldenTestCase(TestCase): - - run_meld = False - - def assert_golden(self, dir_got, dir_expect): - assert os.path.exists(dir_expect), dir_expect - proc = subprocess.Popen(["diff", "--recursive", "-u", "-N", - "--exclude=.*", dir_expect, dir_got], - stdout=subprocess.PIPE) - stdout, stderr = proc.communicate() - if len(stdout) > 0: - if self.run_meld: - # Put expected output on the right because that is the - # side we usually edit. - subprocess.call(["meld", dir_got, dir_expect]) - raise AssertionError( - "Differences from golden files found.\n" - "Try running with --meld to update golden files.\n" - "%s" % stdout) - self.assertEquals(proc.wait(), 0) diff --git a/plugin.video.alfa/lib/mechanize/_urllib2.py b/plugin.video.alfa/lib/mechanize/_urllib2.py deleted file mode 100755 index 151b238b..00000000 --- a/plugin.video.alfa/lib/mechanize/_urllib2.py +++ /dev/null @@ -1,50 +0,0 @@ -# urllib2 work-alike interface -# ...from urllib2... 
-from urllib2 import \ - URLError, \ - HTTPError -# ...and from mechanize -from _auth import \ - HTTPProxyPasswordMgr, \ - HTTPSClientCertMgr -from _debug import \ - HTTPResponseDebugProcessor, \ - HTTPRedirectDebugProcessor -# crap ATM -## from _gzip import \ -## HTTPGzipProcessor -from _urllib2_fork import \ - AbstractBasicAuthHandler, \ - AbstractDigestAuthHandler, \ - BaseHandler, \ - CacheFTPHandler, \ - FileHandler, \ - FTPHandler, \ - HTTPBasicAuthHandler, \ - HTTPCookieProcessor, \ - HTTPDefaultErrorHandler, \ - HTTPDigestAuthHandler, \ - HTTPErrorProcessor, \ - HTTPHandler, \ - HTTPPasswordMgr, \ - HTTPPasswordMgrWithDefaultRealm, \ - HTTPRedirectHandler, \ - ProxyBasicAuthHandler, \ - ProxyDigestAuthHandler, \ - ProxyHandler, \ - UnknownHandler -from _http import \ - HTTPEquivProcessor, \ - HTTPRefererProcessor, \ - HTTPRefreshProcessor, \ - HTTPRobotRulesProcessor, \ - RobotExclusionError -import httplib -if hasattr(httplib, 'HTTPS'): - from _urllib2_fork import HTTPSHandler -del httplib -from _opener import OpenerDirector, \ - SeekableResponseOpener, \ - build_opener, install_opener, urlopen -from _request import \ - Request diff --git a/plugin.video.alfa/lib/mechanize/_urllib2_fork.py b/plugin.video.alfa/lib/mechanize/_urllib2_fork.py deleted file mode 100755 index 726dc035..00000000 --- a/plugin.video.alfa/lib/mechanize/_urllib2_fork.py +++ /dev/null @@ -1,1414 +0,0 @@ -"""Fork of urllib2. - -When reading this, don't assume that all code in here is reachable. Code in -the rest of mechanize may be used instead. - -Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009 Python -Software Foundation; All Rights Reserved - -Copyright 2002-2009 John J Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). - -""" - -# XXX issues: -# If an authentication error handler that tries to perform -# authentication for some reason but fails, how should the error be -# signalled? The client needs to know the HTTP error code. But if -# the handler knows that the problem was, e.g., that it didn't know -# that hash algo that requested in the challenge, it would be good to -# pass that information along to the client, too. -# ftp errors aren't handled cleanly -# check digest against correct (i.e. 
non-apache) implementation - -# Possible extensions: -# complex proxies XXX not sure what exactly was meant by this -# abstract factory for opener - -import copy -import base64 -import httplib -import mimetools -import logging -import os -import posixpath -import random -import re -import socket -import sys -import time -import urllib -import urlparse -import bisect - -try: - from cStringIO import StringIO -except ImportError: - from StringIO import StringIO - -try: - import hashlib -except ImportError: - # python 2.4 - import md5 - import sha - def sha1_digest(bytes): - return sha.new(bytes).hexdigest() - def md5_digest(bytes): - return md5.new(bytes).hexdigest() -else: - def sha1_digest(bytes): - return hashlib.sha1(bytes).hexdigest() - def md5_digest(bytes): - return hashlib.md5(bytes).hexdigest() - - -try: - socket._fileobject("fake socket", close=True) -except TypeError: - # python <= 2.4 - create_readline_wrapper = socket._fileobject -else: - def create_readline_wrapper(fh): - return socket._fileobject(fh, close=True) - - -# python 2.4 splithost has a bug in empty path component case -_hostprog = None -def splithost(url): - """splithost('//host[:port]/path') --> 'host[:port]', '/path'.""" - global _hostprog - if _hostprog is None: - import re - _hostprog = re.compile('^//([^/?]*)(.*)$') - - match = _hostprog.match(url) - if match: return match.group(1, 2) - return None, url - - -from urllib import (unwrap, unquote, splittype, quote, - addinfourl, splitport, - splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) - -# support for FileHandler, proxies via environment variables -from urllib import localhost, url2pathname, getproxies - -from urllib2 import HTTPError, URLError - -import _request -import _rfc3986 -import _sockettimeout - -from _clientcookie import CookieJar -from _response import closeable_response - - -# used in User-Agent header sent -__version__ = sys.version[:3] - -_opener = None -def urlopen(url, data=None, timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - global _opener - if _opener is None: - _opener = build_opener() - return _opener.open(url, data, timeout) - -def install_opener(opener): - global _opener - _opener = opener - -# copied from cookielib.py -_cut_port_re = re.compile(r":\d+$") -def request_host(request): - """Return request-host, as defined by RFC 2965. - - Variation from RFC: returned value is lowercased, for convenient - comparison. - - """ - url = request.get_full_url() - host = urlparse.urlparse(url)[1] - if host == "": - host = request.get_header("Host", "") - - # remove port, if present - host = _cut_port_re.sub("", host, 1) - return host.lower() - -class Request: - - def __init__(self, url, data=None, headers={}, - origin_req_host=None, unverifiable=False): - # unwrap('<URL:type://host/path>') --> 'type://host/path' - self.__original = unwrap(url) - self.type = None - # self.__r_type is what's left after doing the splittype - self.host = None - self.port = None - self._tunnel_host = None - self.data = data - self.headers = {} - for key, value in headers.items(): - self.add_header(key, value) - self.unredirected_hdrs = {} - if origin_req_host is None: - origin_req_host = request_host(self) - self.origin_req_host = origin_req_host - self.unverifiable = unverifiable - - def __getattr__(self, attr): - # XXX this is a fallback mechanism to guard against these - # methods getting called in a non-standard order. this may be - # too complicated and/or unnecessary. - # XXX should the __r_XXX attributes be public? 
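splithost() above works around an old splithost bug by separating '//host[:port]/path' into authority and path, and request_host() then strips any explicit port and lowercases the result for cookie matching (RFC 2965). A small sketch of that host normalisation using the same _cut_port_re pattern; the example URL is made up (Python 2 urlparse, as in the surrounding code):

    import re
    import urlparse

    _cut_port_re = re.compile(r":\d+$")

    def normalised_host(url):
        host = urlparse.urlparse(url)[1]        # authority, e.g. 'Example.COM:8080'
        host = _cut_port_re.sub("", host, 1)    # drop an explicit port, if any
        return host.lower()

    print(normalised_host('http://Example.COM:8080/path'))   # -> 'example.com'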
- if attr[:12] == '_Request__r_': - name = attr[12:] - if hasattr(Request, 'get_' + name): - getattr(self, 'get_' + name)() - return getattr(self, attr) - raise AttributeError, attr - - def get_method(self): - if self.has_data(): - return "POST" - else: - return "GET" - - # XXX these helper methods are lame - - def add_data(self, data): - self.data = data - - def has_data(self): - return self.data is not None - - def get_data(self): - return self.data - - def get_full_url(self): - return self.__original - - def get_type(self): - if self.type is None: - self.type, self.__r_type = splittype(self.__original) - if self.type is None: - raise ValueError, "unknown url type: %s" % self.__original - return self.type - - def get_host(self): - if self.host is None: - self.host, self.__r_host = splithost(self.__r_type) - if self.host: - self.host = unquote(self.host) - return self.host - - def get_selector(self): - scheme, authority, path, query, fragment = _rfc3986.urlsplit( - self.__r_host) - if path == "": - path = "/" # RFC 2616, section 3.2.2 - fragment = None # RFC 3986, section 3.5 - return _rfc3986.urlunsplit([scheme, authority, path, query, fragment]) - - def set_proxy(self, host, type): - orig_host = self.get_host() - if self.get_type() == 'https' and not self._tunnel_host: - self._tunnel_host = orig_host - else: - self.type = type - self.__r_host = self.__original - - self.host = host - - def has_proxy(self): - """Private method.""" - # has non-HTTPS proxy - return self.__r_host == self.__original - - def get_origin_req_host(self): - return self.origin_req_host - - def is_unverifiable(self): - return self.unverifiable - - def add_header(self, key, val): - # useful for something like authentication - self.headers[key.capitalize()] = val - - def add_unredirected_header(self, key, val): - # will not be added to a redirected request - self.unredirected_hdrs[key.capitalize()] = val - - def has_header(self, header_name): - return (header_name in self.headers or - header_name in self.unredirected_hdrs) - - def get_header(self, header_name, default=None): - return self.headers.get( - header_name, - self.unredirected_hdrs.get(header_name, default)) - - def header_items(self): - hdrs = self.unredirected_hdrs.copy() - hdrs.update(self.headers) - return hdrs.items() - -class OpenerDirector: - def __init__(self): - client_version = "Python-urllib/%s" % __version__ - self.addheaders = [('User-agent', client_version)] - # manage the individual handlers - self.handlers = [] - self.handle_open = {} - self.handle_error = {} - self.process_response = {} - self.process_request = {} - - def add_handler(self, handler): - if not hasattr(handler, "add_parent"): - raise TypeError("expected BaseHandler instance, got %r" % - type(handler)) - - added = False - for meth in dir(handler): - if meth in ["redirect_request", "do_open", "proxy_open"]: - # oops, coincidental match - continue - - i = meth.find("_") - protocol = meth[:i] - condition = meth[i+1:] - - if condition.startswith("error"): - j = condition.find("_") + i + 1 - kind = meth[j+1:] - try: - kind = int(kind) - except ValueError: - pass - lookup = self.handle_error.get(protocol, {}) - self.handle_error[protocol] = lookup - elif condition == "open": - kind = protocol - lookup = self.handle_open - elif condition == "response": - kind = protocol - lookup = self.process_response - elif condition == "request": - kind = protocol - lookup = self.process_request - else: - continue - - handlers = lookup.setdefault(kind, []) - if handlers: - bisect.insort(handlers, 
handler) - else: - handlers.append(handler) - added = True - - if added: - # the handlers must work in an specific order, the order - # is specified in a Handler attribute - bisect.insort(self.handlers, handler) - handler.add_parent(self) - - def close(self): - # Only exists for backwards compatibility. - pass - - def _call_chain(self, chain, kind, meth_name, *args): - # Handlers raise an exception if no one else should try to handle - # the request, or return None if they can't but another handler - # could. Otherwise, they return the response. - handlers = chain.get(kind, ()) - for handler in handlers: - func = getattr(handler, meth_name) - - result = func(*args) - if result is not None: - return result - - def _open(self, req, data=None): - result = self._call_chain(self.handle_open, 'default', - 'default_open', req) - if result: - return result - - protocol = req.get_type() - result = self._call_chain(self.handle_open, protocol, protocol + - '_open', req) - if result: - return result - - return self._call_chain(self.handle_open, 'unknown', - 'unknown_open', req) - - def error(self, proto, *args): - if proto in ('http', 'https'): - # XXX http[s] protocols are special-cased - dict = self.handle_error['http'] # https is not different than http - proto = args[2] # YUCK! - meth_name = 'http_error_%s' % proto - http_err = 1 - orig_args = args - else: - dict = self.handle_error - meth_name = proto + '_error' - http_err = 0 - args = (dict, proto, meth_name) + args - result = self._call_chain(*args) - if result: - return result - - if http_err: - args = (dict, 'default', 'http_error_default') + orig_args - return self._call_chain(*args) - -# XXX probably also want an abstract factory that knows when it makes -# sense to skip a superclass in favor of a subclass and when it might -# make sense to include both - -def build_opener(*handlers): - """Create an opener object from a list of handlers. - - The opener will use several default handlers, including support - for HTTP, FTP and when applicable, HTTPS. - - If any of the handlers passed as arguments are subclasses of the - default handlers, the default handlers will not be used. - """ - import types - def isclass(obj): - return isinstance(obj, (types.ClassType, type)) - - opener = OpenerDirector() - default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, - HTTPDefaultErrorHandler, HTTPRedirectHandler, - FTPHandler, FileHandler, HTTPErrorProcessor] - if hasattr(httplib, 'HTTPS'): - default_classes.append(HTTPSHandler) - skip = set() - for klass in default_classes: - for check in handlers: - if isclass(check): - if issubclass(check, klass): - skip.add(klass) - elif isinstance(check, klass): - skip.add(klass) - for klass in skip: - default_classes.remove(klass) - - for klass in default_classes: - opener.add_handler(klass()) - - for h in handlers: - if isclass(h): - h = h() - opener.add_handler(h) - return opener - -class BaseHandler: - handler_order = 500 - - def add_parent(self, parent): - self.parent = parent - - def close(self): - # Only exists for backwards compatibility - pass - - def __lt__(self, other): - if not hasattr(other, "handler_order"): - # Try to preserve the old behavior of having custom classes - # inserted after default ones (works only for custom user - # classes which are not aware of handler_order). - return True - return self.handler_order < other.handler_order - - -class HTTPErrorProcessor(BaseHandler): - """Process HTTP error responses. 
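add_handler() above files each handler under the protocol/condition encoded in its method names and keeps self.handlers sorted with bisect.insort; ordering comes from BaseHandler.__lt__, which compares the handler_order attribute (ProxyHandler uses 100, for instance, so proxies run before the default 500). A standalone sketch of that ordering; the class names are illustrative only, not part of the library:

    import bisect

    class SketchHandler(object):
        handler_order = 500
        def __lt__(self, other):
            if not hasattr(other, "handler_order"):
                return True
            return self.handler_order < other.handler_order

    class EarlyHandler(SketchHandler):
        handler_order = 100      # runs before the default 500

    handlers = []
    for h in (SketchHandler(), EarlyHandler(), SketchHandler()):
        bisect.insort(handlers, h)

    print([h.handler_order for h in handlers])   # -> [100, 500, 500]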
- - The purpose of this handler is to to allow other response processors a - look-in by removing the call to parent.error() from - AbstractHTTPHandler. - - For non-2xx error codes, this just passes the job on to the - Handler.<proto>_error_<code> methods, via the OpenerDirector.error method. - Eventually, HTTPDefaultErrorHandler will raise an HTTPError if no other - handler handles the error. - - """ - handler_order = 1000 # after all other processors - - def http_response(self, request, response): - code, msg, hdrs = response.code, response.msg, response.info() - - # According to RFC 2616, "2xx" code indicates that the client's - # request was successfully received, understood, and accepted. - if not (200 <= code < 300): - # hardcoded http is NOT a bug - response = self.parent.error( - 'http', request, response, code, msg, hdrs) - - return response - - https_response = http_response - -class HTTPDefaultErrorHandler(BaseHandler): - def http_error_default(self, req, fp, code, msg, hdrs): - # why these error methods took the code, msg, headers args in the first - # place rather than a response object, I don't know, but to avoid - # multiple wrapping, we're discarding them - - if isinstance(fp, HTTPError): - response = fp - else: - response = HTTPError( - req.get_full_url(), code, msg, hdrs, fp) - assert code == response.code - assert msg == response.msg - assert hdrs == response.hdrs - raise response - -class HTTPRedirectHandler(BaseHandler): - # maximum number of redirections to any single URL - # this is needed because of the state that cookies introduce - max_repeats = 4 - # maximum total number of redirections (regardless of URL) before - # assuming we're in a loop - max_redirections = 10 - - # Implementation notes: - - # To avoid the server sending us into an infinite loop, the request - # object needs to track what URLs we have already seen. Do this by - # adding a handler-specific attribute to the Request object. The value - # of the dict is used to count the number of times the same URL has - # been visited. This is needed because visiting the same URL twice - # does not necessarily imply a loop, thanks to state introduced by - # cookies. - - # Always unhandled redirection codes: - # 300 Multiple Choices: should not handle this here. - # 304 Not Modified: no need to handle here: only of interest to caches - # that do conditional GETs - # 305 Use Proxy: probably not worth dealing with here - # 306 Unused: what was this for in the previous versions of protocol?? - - def redirect_request(self, req, fp, code, msg, headers, newurl): - """Return a Request or None in response to a redirect. - - This is called by the http_error_30x methods when a - redirection response is received. If a redirection should - take place, return a new Request to allow http_error_30x to - perform the redirect. Otherwise, raise HTTPError if no-one - else should try to handle this url. Return None if you can't - but another Handler might. - """ - m = req.get_method() - if (code in (301, 302, 303, 307, "refresh") and m in ("GET", "HEAD") - or code in (301, 302, 303, "refresh") and m == "POST"): - # Strictly (according to RFC 2616), 301 or 302 in response - # to a POST MUST NOT cause a redirection without confirmation - # from the user (of urllib2, in this case). In practice, - # essentially all clients do redirect in this case, so we do - # the same. 
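The implementation notes above describe the loop detection used by http_error_302(): a per-request redirect_dict counts how often each target URL has been seen, and the chain is aborted when one URL repeats max_repeats (4) times or the dict grows to max_redirections (10) entries. A minimal sketch of that bookkeeping:

    MAX_REPEATS = 4        # per-URL limit (cookies can legitimately revisit a URL)
    MAX_REDIRECTIONS = 10  # overall limit for one logical request

    def record_redirect(visited, newurl):
        """Return True if following newurl would look like a redirect loop."""
        if (visited.get(newurl, 0) >= MAX_REPEATS or
                len(visited) >= MAX_REDIRECTIONS):
            return True
        visited[newurl] = visited.get(newurl, 0) + 1
        return False

    visited = {}
    for url in ['http://a/', 'http://b/', 'http://a/']:
        print("%s %s" % (url, record_redirect(visited, url)))   # all False: no loop yet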
- # TODO: really refresh redirections should be visiting; tricky to fix - new = _request.Request( - newurl, - headers=req.headers, - origin_req_host=req.get_origin_req_host(), - unverifiable=True, - visit=False, - timeout=req.timeout) - new._origin_req = getattr(req, "_origin_req", req) - return new - else: - raise HTTPError(req.get_full_url(), code, msg, headers, fp) - - def http_error_302(self, req, fp, code, msg, headers): - # Some servers (incorrectly) return multiple Location headers - # (so probably same goes for URI). Use first header. - if 'location' in headers: - newurl = headers.getheaders('location')[0] - elif 'uri' in headers: - newurl = headers.getheaders('uri')[0] - else: - return - newurl = _rfc3986.clean_url(newurl, "latin-1") - newurl = _rfc3986.urljoin(req.get_full_url(), newurl) - - # XXX Probably want to forget about the state of the current - # request, although that might interact poorly with other - # handlers that also use handler-specific request attributes - new = self.redirect_request(req, fp, code, msg, headers, newurl) - if new is None: - return - - # loop detection - # .redirect_dict has a key url if url was previously visited. - if hasattr(req, 'redirect_dict'): - visited = new.redirect_dict = req.redirect_dict - if (visited.get(newurl, 0) >= self.max_repeats or - len(visited) >= self.max_redirections): - raise HTTPError(req.get_full_url(), code, - self.inf_msg + msg, headers, fp) - else: - visited = new.redirect_dict = req.redirect_dict = {} - visited[newurl] = visited.get(newurl, 0) + 1 - - # Don't close the fp until we are sure that we won't use it - # with HTTPError. - fp.read() - fp.close() - - return self.parent.open(new) - - http_error_301 = http_error_303 = http_error_307 = http_error_302 - http_error_refresh = http_error_302 - - inf_msg = "The HTTP server returned a redirect error that would " \ - "lead to an infinite loop.\n" \ - "The last 30x error message was:\n" - - -def _parse_proxy(proxy): - """Return (scheme, user, password, host/port) given a URL or an authority. - - If a URL is supplied, it must have an authority (host:port) component. - According to RFC 3986, having an authority component means the URL must - have two slashes after the scheme: - - >>> _parse_proxy('file:/ftp.example.com/') - Traceback (most recent call last): - ValueError: proxy URL with no authority: 'file:/ftp.example.com/' - - The first three items of the returned tuple may be None. 
- - Examples of authority parsing: - - >>> _parse_proxy('proxy.example.com') - (None, None, None, 'proxy.example.com') - >>> _parse_proxy('proxy.example.com:3128') - (None, None, None, 'proxy.example.com:3128') - - The authority component may optionally include userinfo (assumed to be - username:password): - - >>> _parse_proxy('joe:password@proxy.example.com') - (None, 'joe', 'password', 'proxy.example.com') - >>> _parse_proxy('joe:password@proxy.example.com:3128') - (None, 'joe', 'password', 'proxy.example.com:3128') - - Same examples, but with URLs instead: - - >>> _parse_proxy('http://proxy.example.com/') - ('http', None, None, 'proxy.example.com') - >>> _parse_proxy('http://proxy.example.com:3128/') - ('http', None, None, 'proxy.example.com:3128') - >>> _parse_proxy('http://joe:password@proxy.example.com/') - ('http', 'joe', 'password', 'proxy.example.com') - >>> _parse_proxy('http://joe:password@proxy.example.com:3128') - ('http', 'joe', 'password', 'proxy.example.com:3128') - - Everything after the authority is ignored: - - >>> _parse_proxy('ftp://joe:password@proxy.example.com/rubbish:3128') - ('ftp', 'joe', 'password', 'proxy.example.com') - - Test for no trailing '/' case: - - >>> _parse_proxy('http://joe:password@proxy.example.com') - ('http', 'joe', 'password', 'proxy.example.com') - - """ - scheme, r_scheme = splittype(proxy) - if not r_scheme.startswith("/"): - # authority - scheme = None - authority = proxy - else: - # URL - if not r_scheme.startswith("//"): - raise ValueError("proxy URL with no authority: %r" % proxy) - # We have an authority, so for RFC 3986-compliant URLs (by ss 3. - # and 3.3.), path is empty or starts with '/' - end = r_scheme.find("/", 2) - if end == -1: - end = None - authority = r_scheme[2:end] - userinfo, hostport = splituser(authority) - if userinfo is not None: - user, password = splitpasswd(userinfo) - else: - user = password = None - return scheme, user, password, hostport - -class ProxyHandler(BaseHandler): - # Proxies must be in front - handler_order = 100 - - def __init__(self, proxies=None, proxy_bypass=None): - if proxies is None: - proxies = getproxies() - - assert hasattr(proxies, 'has_key'), "proxies must be a mapping" - self.proxies = proxies - for type, url in proxies.items(): - setattr(self, '%s_open' % type, - lambda r, proxy=url, type=type, meth=self.proxy_open: \ - meth(r, proxy, type)) - if proxy_bypass is None: - proxy_bypass = urllib.proxy_bypass - self._proxy_bypass = proxy_bypass - - def proxy_open(self, req, proxy, type): - orig_type = req.get_type() - proxy_type, user, password, hostport = _parse_proxy(proxy) - - if proxy_type is None: - proxy_type = orig_type - - if req.get_host() and self._proxy_bypass(req.get_host()): - return None - - if user and password: - user_pass = '%s:%s' % (unquote(user), unquote(password)) - creds = base64.b64encode(user_pass).strip() - req.add_header('Proxy-authorization', 'Basic ' + creds) - hostport = unquote(hostport) - req.set_proxy(hostport, proxy_type) - if orig_type == proxy_type or orig_type == 'https': - # let other handlers take care of it - return None - else: - # need to start over, because the other handlers don't - # grok the proxy's URL type - # e.g. 
if we have a constructor arg proxies like so: - # {'http': 'ftp://proxy.example.com'}, we may end up turning - # a request for http://acme.example.com/a into one for - # ftp://proxy.example.com/a - return self.parent.open(req) - - -class HTTPPasswordMgr: - - def __init__(self): - self.passwd = {} - - def add_password(self, realm, uri, user, passwd): - # uri could be a single URI or a sequence - if isinstance(uri, basestring): - uri = [uri] - if not realm in self.passwd: - self.passwd[realm] = {} - for default_port in True, False: - reduced_uri = tuple( - [self.reduce_uri(u, default_port) for u in uri]) - self.passwd[realm][reduced_uri] = (user, passwd) - - def find_user_password(self, realm, authuri): - domains = self.passwd.get(realm, {}) - for default_port in True, False: - reduced_authuri = self.reduce_uri(authuri, default_port) - for uris, authinfo in domains.iteritems(): - for uri in uris: - if self.is_suburi(uri, reduced_authuri): - return authinfo - return None, None - - def reduce_uri(self, uri, default_port=True): - """Accept authority or URI and extract only the authority and path.""" - # note HTTP URLs do not have a userinfo component - parts = urlparse.urlsplit(uri) - if parts[1]: - # URI - scheme = parts[0] - authority = parts[1] - path = parts[2] or '/' - else: - # host or host:port - scheme = None - authority = uri - path = '/' - host, port = splitport(authority) - if default_port and port is None and scheme is not None: - dport = {"http": 80, - "https": 443, - }.get(scheme) - if dport is not None: - authority = "%s:%d" % (host, dport) - return authority, path - - def is_suburi(self, base, test): - """Check if test is below base in a URI tree - - Both args must be URIs in reduced form. - """ - if base == test: - return True - if base[0] != test[0]: - return False - common = posixpath.commonprefix((base[1], test[1])) - if len(common) == len(base[1]): - return True - return False - - -class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): - - def find_user_password(self, realm, authuri): - user, password = HTTPPasswordMgr.find_user_password(self, realm, - authuri) - if user is not None: - return user, password - return HTTPPasswordMgr.find_user_password(self, None, authuri) - - -class AbstractBasicAuthHandler: - - # XXX this allows for multiple auth-schemes, but will stupidly pick - # the last one with a realm specified. - - # allow for double- and single-quoted realm values - # (single quotes are a violation of the RFC, but appear in the wild) - rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' - 'realm=(["\'])(.*?)\\2', re.I) - - # XXX could pre-emptively send auth info already accepted (RFC 2617, - # end of section 2, and section 1.2 immediately after "credentials" - # production). 
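The rx pattern above extracts the auth scheme and realm from a WWW-Authenticate or Proxy-Authenticate challenge, and retry_http_basic_auth() then answers it with base64-encoded "user:password" credentials. A short sketch of both steps; the challenge value and credentials are made-up examples (Python 2 byte strings):

    import re, base64

    rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+'
                    'realm=(["\'])(.*?)\\2', re.I)

    challenge = 'Basic realm="WallyWorld"'            # example header value
    scheme, quote, realm = rx.search(challenge).groups()
    print("%s / %s" % (scheme, realm))                # -> Basic / WallyWorld

    auth = 'Basic ' + base64.b64encode('joe:password').strip()
    print(auth)                                       # -> Basic am9lOnBhc3N3b3Jk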
- - def __init__(self, password_mgr=None): - if password_mgr is None: - password_mgr = HTTPPasswordMgr() - self.passwd = password_mgr - self.add_password = self.passwd.add_password - - def http_error_auth_reqed(self, authreq, host, req, headers): - # host may be an authority (without userinfo) or a URL with an - # authority - # XXX could be multiple headers - authreq = headers.get(authreq, None) - if authreq: - mo = AbstractBasicAuthHandler.rx.search(authreq) - if mo: - scheme, quote, realm = mo.groups() - if scheme.lower() == 'basic': - return self.retry_http_basic_auth(host, req, realm) - - def retry_http_basic_auth(self, host, req, realm): - user, pw = self.passwd.find_user_password(realm, host) - if pw is not None: - raw = "%s:%s" % (user, pw) - auth = 'Basic %s' % base64.b64encode(raw).strip() - if req.headers.get(self.auth_header, None) == auth: - return None - newreq = copy.copy(req) - newreq.add_header(self.auth_header, auth) - newreq.visit = False - return self.parent.open(newreq) - else: - return None - - -class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): - - auth_header = 'Authorization' - - def http_error_401(self, req, fp, code, msg, headers): - url = req.get_full_url() - return self.http_error_auth_reqed('www-authenticate', - url, req, headers) - - -class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): - - auth_header = 'Proxy-authorization' - - def http_error_407(self, req, fp, code, msg, headers): - # http_error_auth_reqed requires that there is no userinfo component in - # authority. Assume there isn't one, since urllib2 does not (and - # should not, RFC 3986 s. 3.2.1) support requests for URLs containing - # userinfo. - authority = req.get_host() - return self.http_error_auth_reqed('proxy-authenticate', - authority, req, headers) - - -def randombytes(n): - """Return n random bytes.""" - # Use /dev/urandom if it is available. Fall back to random module - # if not. It might be worthwhile to extend this function to use - # other platform-specific mechanisms for getting random bytes. - if os.path.exists("/dev/urandom"): - f = open("/dev/urandom") - s = f.read(n) - f.close() - return s - else: - L = [chr(random.randrange(0, 256)) for i in range(n)] - return "".join(L) - -class AbstractDigestAuthHandler: - # Digest authentication is specified in RFC 2617. - - # XXX The client does not inspect the Authentication-Info header - # in a successful response. - - # XXX It should be possible to test this implementation against - # a mock server that just generates a static set of challenges. - - # XXX qop="auth-int" supports is shaky - - def __init__(self, passwd=None): - if passwd is None: - passwd = HTTPPasswordMgr() - self.passwd = passwd - self.add_password = self.passwd.add_password - self.retried = 0 - self.nonce_count = 0 - self.last_nonce = None - - def reset_retry_count(self): - self.retried = 0 - - def http_error_auth_reqed(self, auth_header, host, req, headers): - authreq = headers.get(auth_header, None) - if self.retried > 5: - # Don't fail endlessly - if we failed once, we'll probably - # fail a second time. Hm. Unless the Password Manager is - # prompting for the information. Crap. 
This isn't great - # but it's better than the current 'repeat until recursion - # depth exceeded' approach <wink> - raise HTTPError(req.get_full_url(), 401, "digest auth failed", - headers, None) - else: - self.retried += 1 - if authreq: - scheme = authreq.split()[0] - if scheme.lower() == 'digest': - return self.retry_http_digest_auth(req, authreq) - - def retry_http_digest_auth(self, req, auth): - token, challenge = auth.split(' ', 1) - chal = parse_keqv_list(parse_http_list(challenge)) - auth = self.get_authorization(req, chal) - if auth: - auth_val = 'Digest %s' % auth - if req.headers.get(self.auth_header, None) == auth_val: - return None - newreq = copy.copy(req) - newreq.add_unredirected_header(self.auth_header, auth_val) - newreq.visit = False - return self.parent.open(newreq) - - def get_cnonce(self, nonce): - # The cnonce-value is an opaque - # quoted string value provided by the client and used by both client - # and server to avoid chosen plaintext attacks, to provide mutual - # authentication, and to provide some message integrity protection. - # This isn't a fabulous effort, but it's probably Good Enough. - dig = sha1_digest("%s:%s:%s:%s" % (self.nonce_count, nonce, - time.ctime(), randombytes(8))) - return dig[:16] - - def get_authorization(self, req, chal): - try: - realm = chal['realm'] - nonce = chal['nonce'] - qop = chal.get('qop') - algorithm = chal.get('algorithm', 'MD5') - # mod_digest doesn't send an opaque, even though it isn't - # supposed to be optional - opaque = chal.get('opaque', None) - except KeyError: - return None - - H, KD = self.get_algorithm_impls(algorithm) - if H is None: - return None - - user, pw = self.passwd.find_user_password(realm, req.get_full_url()) - if user is None: - return None - - # XXX not implemented yet - if req.has_data(): - entdig = self.get_entity_digest(req.get_data(), chal) - else: - entdig = None - - A1 = "%s:%s:%s" % (user, realm, pw) - A2 = "%s:%s" % (req.get_method(), - # XXX selector: what about proxies and full urls - req.get_selector()) - if qop == 'auth': - if nonce == self.last_nonce: - self.nonce_count += 1 - else: - self.nonce_count = 1 - self.last_nonce = nonce - - ncvalue = '%08x' % self.nonce_count - cnonce = self.get_cnonce(nonce) - noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) - respdig = KD(H(A1), noncebit) - elif qop is None: - respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) - else: - # XXX handle auth-int. - logger = logging.getLogger("mechanize.auth") - logger.info("digest auth auth-int qop is not supported, not " - "handling digest authentication") - return None - - # XXX should the partial digests be encoded too? 
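get_authorization() above assembles the RFC 2617 digest response: H(A1) from user, realm and password, H(A2) from the request method and selector, combined by KD() with the nonce, nonce count, cnonce and qop. A compact sketch of the qop="auth" case; all challenge values below are made-up examples:

    import hashlib

    def H(s):
        return hashlib.md5(s).hexdigest()

    def KD(secret, data):
        return H("%s:%s" % (secret, data))

    user, realm, password = 'joe', 'testrealm@host.com', 'secret'
    method, uri = 'GET', '/dir/index.html'
    nonce, ncvalue, cnonce, qop = 'dcd98b7102dd2f0e', '00000001', '0a4f113b', 'auth'

    A1 = "%s:%s:%s" % (user, realm, password)
    A2 = "%s:%s" % (method, uri)
    respdig = KD(H(A1), "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)))
    print(respdig)   # 32-character hex digest sent back in the Authorization header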
- - base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ - 'response="%s"' % (user, realm, nonce, req.get_selector(), - respdig) - if opaque: - base += ', opaque="%s"' % opaque - if entdig: - base += ', digest="%s"' % entdig - base += ', algorithm="%s"' % algorithm - if qop: - base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) - return base - - def get_algorithm_impls(self, algorithm): - # algorithm should be case-insensitive according to RFC2617 - algorithm = algorithm.upper() - if algorithm == 'MD5': - H = md5_digest - elif algorithm == 'SHA': - H = sha1_digest - # XXX MD5-sess - KD = lambda s, d: H("%s:%s" % (s, d)) - return H, KD - - def get_entity_digest(self, data, chal): - # XXX not implemented yet - return None - - -class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): - """An authentication protocol defined by RFC 2069 - - Digest authentication improves on basic authentication because it - does not transmit passwords in the clear. - """ - - auth_header = 'Authorization' - handler_order = 490 # before Basic auth - - def http_error_401(self, req, fp, code, msg, headers): - host = urlparse.urlparse(req.get_full_url())[1] - retry = self.http_error_auth_reqed('www-authenticate', - host, req, headers) - self.reset_retry_count() - return retry - - -class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): - - auth_header = 'Proxy-Authorization' - handler_order = 490 # before Basic auth - - def http_error_407(self, req, fp, code, msg, headers): - host = req.get_host() - retry = self.http_error_auth_reqed('proxy-authenticate', - host, req, headers) - self.reset_retry_count() - return retry - -class AbstractHTTPHandler(BaseHandler): - - def __init__(self, debuglevel=0): - self._debuglevel = debuglevel - - def set_http_debuglevel(self, level): - self._debuglevel = level - - def do_request_(self, request): - host = request.get_host() - if not host: - raise URLError('no host given') - - if request.has_data(): # POST - data = request.get_data() - if not request.has_header('Content-type'): - request.add_unredirected_header( - 'Content-type', - 'application/x-www-form-urlencoded') - if not request.has_header('Content-length'): - request.add_unredirected_header( - 'Content-length', '%d' % len(data)) - - sel_host = host - if request.has_proxy(): - scheme, sel = splittype(request.get_selector()) - sel_host, sel_path = splithost(sel) - - if not request.has_header('Host'): - request.add_unredirected_header('Host', sel_host) - for name, value in self.parent.addheaders: - name = name.capitalize() - if not request.has_header(name): - request.add_unredirected_header(name, value) - - return request - - def do_open(self, http_class, req): - """Return an addinfourl object for the request, using http_class. - - http_class must implement the HTTPConnection API from httplib. - The addinfourl return value is a file-like object. 
It also - has methods and attributes including: - - info(): return a mimetools.Message object for the headers - - geturl(): return the original request URL - - code: HTTP status code - """ - host_port = req.get_host() - if not host_port: - raise URLError('no host given') - - try: - h = http_class(host_port, timeout=req.timeout) - except TypeError: - # Python < 2.6, no per-connection timeout support - h = http_class(host_port) - h.set_debuglevel(self._debuglevel) - - headers = dict(req.headers) - headers.update(req.unredirected_hdrs) - # We want to make an HTTP/1.1 request, but the addinfourl - # class isn't prepared to deal with a persistent connection. - # It will try to read all remaining data from the socket, - # which will block while the server waits for the next request. - # So make sure the connection gets closed after the (only) - # request. - headers["Connection"] = "close" - headers = dict( - (name.title(), val) for name, val in headers.items()) - - if req._tunnel_host: - if not hasattr(h, "set_tunnel"): - if not hasattr(h, "_set_tunnel"): - raise URLError("HTTPS through proxy not supported " - "(Python >= 2.6.4 required)") - else: - # python 2.6 - set_tunnel = h._set_tunnel - else: - set_tunnel = h.set_tunnel - set_tunnel(req._tunnel_host) - - try: - h.request(req.get_method(), req.get_selector(), req.data, headers) - r = h.getresponse() - except socket.error, err: # XXX what error? - raise URLError(err) - - # Pick apart the HTTPResponse object to get the addinfourl - # object initialized properly. - - # Wrap the HTTPResponse object in socket's file object adapter - # for Windows. That adapter calls recv(), so delegate recv() - # to read(). This weird wrapping allows the returned object to - # have readline() and readlines() methods. - - # XXX It might be better to extract the read buffering code - # out of socket._fileobject() and into a base class. - - r.recv = r.read - fp = create_readline_wrapper(r) - - resp = closeable_response(fp, r.msg, req.get_full_url(), - r.status, r.reason) - return resp - - -class HTTPHandler(AbstractHTTPHandler): - - def http_open(self, req): - return self.do_open(httplib.HTTPConnection, req) - - http_request = AbstractHTTPHandler.do_request_ - -if hasattr(httplib, 'HTTPS'): - - class HTTPSConnectionFactory: - def __init__(self, key_file, cert_file): - self._key_file = key_file - self._cert_file = cert_file - def __call__(self, hostport): - return httplib.HTTPSConnection( - hostport, - key_file=self._key_file, cert_file=self._cert_file) - - class HTTPSHandler(AbstractHTTPHandler): - - def __init__(self, client_cert_manager=None): - AbstractHTTPHandler.__init__(self) - self.client_cert_manager = client_cert_manager - - def https_open(self, req): - if self.client_cert_manager is not None: - key_file, cert_file = self.client_cert_manager.find_key_cert( - req.get_full_url()) - conn_factory = HTTPSConnectionFactory(key_file, cert_file) - else: - conn_factory = httplib.HTTPSConnection - return self.do_open(conn_factory, req) - - https_request = AbstractHTTPHandler.do_request_ - -class HTTPCookieProcessor(BaseHandler): - """Handle HTTP cookies. 
- - Public attributes: - - cookiejar: CookieJar instance - - """ - def __init__(self, cookiejar=None): - if cookiejar is None: - cookiejar = CookieJar() - self.cookiejar = cookiejar - - def http_request(self, request): - self.cookiejar.add_cookie_header(request) - return request - - def http_response(self, request, response): - self.cookiejar.extract_cookies(response, request) - return response - - https_request = http_request - https_response = http_response - -class UnknownHandler(BaseHandler): - def unknown_open(self, req): - type = req.get_type() - raise URLError('unknown url type: %s' % type) - -def parse_keqv_list(l): - """Parse list of key=value strings where keys are not duplicated.""" - parsed = {} - for elt in l: - k, v = elt.split('=', 1) - if v[0] == '"' and v[-1] == '"': - v = v[1:-1] - parsed[k] = v - return parsed - -def parse_http_list(s): - """Parse lists as described by RFC 2068 Section 2. - - In particular, parse comma-separated lists where the elements of - the list may include quoted-strings. A quoted-string could - contain a comma. A non-quoted string could have quotes in the - middle. Neither commas nor quotes count if they are escaped. - Only double-quotes count, not single-quotes. - """ - res = [] - part = '' - - escape = quote = False - for cur in s: - if escape: - part += cur - escape = False - continue - if quote: - if cur == '\\': - escape = True - continue - elif cur == '"': - quote = False - part += cur - continue - - if cur == ',': - res.append(part) - part = '' - continue - - if cur == '"': - quote = True - - part += cur - - # append last part - if part: - res.append(part) - - return [part.strip() for part in res] - -class FileHandler(BaseHandler): - # Use local file or FTP depending on form of URL - def file_open(self, req): - url = req.get_selector() - if url[:2] == '//' and url[2:3] != '/': - req.type = 'ftp' - return self.parent.open(req) - else: - return self.open_local_file(req) - - # names for the localhost - names = None - def get_names(self): - if FileHandler.names is None: - try: - FileHandler.names = (socket.gethostbyname('localhost'), - socket.gethostbyname(socket.gethostname())) - except socket.gaierror: - FileHandler.names = (socket.gethostbyname('localhost'),) - return FileHandler.names - - # not entirely sure what the rules are here - def open_local_file(self, req): - try: - import email.utils as emailutils - except ImportError: - # python 2.4 - import email.Utils as emailutils - import mimetypes - host = req.get_host() - file = req.get_selector() - localfile = url2pathname(file) - try: - stats = os.stat(localfile) - size = stats.st_size - modified = emailutils.formatdate(stats.st_mtime, usegmt=True) - mtype = mimetypes.guess_type(file)[0] - headers = mimetools.Message(StringIO( - 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % - (mtype or 'text/plain', size, modified))) - if host: - host, port = splitport(host) - if not host or \ - (not port and socket.gethostbyname(host) in self.get_names()): - return addinfourl(open(localfile, 'rb'), - headers, 'file:'+file) - except OSError, msg: - # urllib2 users shouldn't expect OSErrors coming from urlopen() - raise URLError(msg) - raise URLError('file not on local host') - -class FTPHandler(BaseHandler): - def ftp_open(self, req): - import ftplib - import mimetypes - host = req.get_host() - if not host: - raise URLError('ftp error: no host given') - host, port = splitport(host) - if port is None: - port = ftplib.FTP_PORT - else: - port = int(port) - - # username/password handling - 
user, host = splituser(host) - if user: - user, passwd = splitpasswd(user) - else: - passwd = None - host = unquote(host) - user = unquote(user or '') - passwd = unquote(passwd or '') - - try: - host = socket.gethostbyname(host) - except socket.error, msg: - raise URLError(msg) - path, attrs = splitattr(req.get_selector()) - dirs = path.split('/') - dirs = map(unquote, dirs) - dirs, file = dirs[:-1], dirs[-1] - if dirs and not dirs[0]: - dirs = dirs[1:] - try: - fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) - type = file and 'I' or 'D' - for attr in attrs: - attr, value = splitvalue(attr) - if attr.lower() == 'type' and \ - value in ('a', 'A', 'i', 'I', 'd', 'D'): - type = value.upper() - fp, retrlen = fw.retrfile(file, type) - headers = "" - mtype = mimetypes.guess_type(req.get_full_url())[0] - if mtype: - headers += "Content-type: %s\n" % mtype - if retrlen is not None and retrlen >= 0: - headers += "Content-length: %d\n" % retrlen - sf = StringIO(headers) - headers = mimetools.Message(sf) - return addinfourl(fp, headers, req.get_full_url()) - except ftplib.all_errors, msg: - raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] - - def connect_ftp(self, user, passwd, host, port, dirs, timeout): - try: - fw = ftpwrapper(user, passwd, host, port, dirs, timeout) - except TypeError: - # Python < 2.6, no per-connection timeout support - fw = ftpwrapper(user, passwd, host, port, dirs) -## fw.ftp.set_debuglevel(1) - return fw - -class CacheFTPHandler(FTPHandler): - # XXX would be nice to have pluggable cache strategies - # XXX this stuff is definitely not thread safe - def __init__(self): - self.cache = {} - self.timeout = {} - self.soonest = 0 - self.delay = 60 - self.max_conns = 16 - - def setTimeout(self, t): - self.delay = t - - def setMaxConns(self, m): - self.max_conns = m - - def connect_ftp(self, user, passwd, host, port, dirs, timeout): - key = user, host, port, '/'.join(dirs), timeout - if key in self.cache: - self.timeout[key] = time.time() + self.delay - else: - self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) - self.timeout[key] = time.time() + self.delay - self.check_cache() - return self.cache[key] - - def check_cache(self): - # first check for old ones - t = time.time() - if self.soonest <= t: - for k, v in self.timeout.items(): - if v < t: - self.cache[k].close() - del self.cache[k] - del self.timeout[k] - self.soonest = min(self.timeout.values()) - - # then check the size - if len(self.cache) == self.max_conns: - for k, v in self.timeout.items(): - if v == self.soonest: - del self.cache[k] - del self.timeout[k] - break - self.soonest = min(self.timeout.values()) diff --git a/plugin.video.alfa/lib/mechanize/_useragent.py b/plugin.video.alfa/lib/mechanize/_useragent.py deleted file mode 100755 index 69d120fa..00000000 --- a/plugin.video.alfa/lib/mechanize/_useragent.py +++ /dev/null @@ -1,367 +0,0 @@ -"""Convenient HTTP UserAgent class. - -This is a subclass of urllib2.OpenerDirector. - - -Copyright 2003-2006 John J. Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it under -the terms of the BSD or ZPL 2.1 licenses (see the file COPYING.txt -included with the distribution). - -""" - -import warnings - -import _auth -import _gzip -import _opener -import _response -import _sockettimeout -import _urllib2 - - -class UserAgentBase(_opener.OpenerDirector): - """Convenient user-agent class. - - Do not use .add_handler() to add a handler for something already dealt with - by this code. 
- - The only reason at present for the distinction between UserAgent and - UserAgentBase is so that classes that depend on .seek()able responses - (e.g. mechanize.Browser) can inherit from UserAgentBase. The subclass - UserAgent exposes a .set_seekable_responses() method that allows switching - off the adding of a .seek() method to responses. - - Public attributes: - - addheaders: list of (name, value) pairs specifying headers to send with - every request, unless they are overridden in the Request instance. - - >>> ua = UserAgentBase() - >>> ua.addheaders = [ - ... ("User-agent", "Mozilla/5.0 (compatible)"), - ... ("From", "responsible.person@example.com")] - - """ - - handler_classes = { - # scheme handlers - "http": _urllib2.HTTPHandler, - # CacheFTPHandler is buggy, at least in 2.3, so we don't use it - "ftp": _urllib2.FTPHandler, - "file": _urllib2.FileHandler, - - # other handlers - "_unknown": _urllib2.UnknownHandler, - # HTTP{S,}Handler depend on HTTPErrorProcessor too - "_http_error": _urllib2.HTTPErrorProcessor, - "_http_default_error": _urllib2.HTTPDefaultErrorHandler, - - # feature handlers - "_basicauth": _urllib2.HTTPBasicAuthHandler, - "_digestauth": _urllib2.HTTPDigestAuthHandler, - "_redirect": _urllib2.HTTPRedirectHandler, - "_cookies": _urllib2.HTTPCookieProcessor, - "_refresh": _urllib2.HTTPRefreshProcessor, - "_equiv": _urllib2.HTTPEquivProcessor, - "_proxy": _urllib2.ProxyHandler, - "_proxy_basicauth": _urllib2.ProxyBasicAuthHandler, - "_proxy_digestauth": _urllib2.ProxyDigestAuthHandler, - "_robots": _urllib2.HTTPRobotRulesProcessor, - "_gzip": _gzip.HTTPGzipProcessor, # experimental! - - # debug handlers - "_debug_redirect": _urllib2.HTTPRedirectDebugProcessor, - "_debug_response_body": _urllib2.HTTPResponseDebugProcessor, - } - - default_schemes = ["http", "ftp", "file"] - default_others = ["_unknown", "_http_error", "_http_default_error"] - default_features = ["_redirect", "_cookies", - "_refresh", "_equiv", - "_basicauth", "_digestauth", - "_proxy", "_proxy_basicauth", "_proxy_digestauth", - "_robots", - ] - if hasattr(_urllib2, 'HTTPSHandler'): - handler_classes["https"] = _urllib2.HTTPSHandler - default_schemes.append("https") - - def __init__(self): - _opener.OpenerDirector.__init__(self) - - ua_handlers = self._ua_handlers = {} - for scheme in (self.default_schemes+ - self.default_others+ - self.default_features): - klass = self.handler_classes[scheme] - ua_handlers[scheme] = klass() - for handler in ua_handlers.itervalues(): - self.add_handler(handler) - - # Yuck. - # Ensure correct default constructor args were passed to - # HTTPRefreshProcessor and HTTPEquivProcessor. - if "_refresh" in ua_handlers: - self.set_handle_refresh(True) - if "_equiv" in ua_handlers: - self.set_handle_equiv(True) - # Ensure default password managers are installed. 
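With the default handlers wired up as above, typical use is to configure an instance and then open URLs. The sketch below sticks to methods documented in this module (set_handled_schemes, addheaders, set_handle_robots, add_password, open); the host and credentials are placeholders, and it assumes the mechanize package is importable:

    import mechanize

    ua = mechanize.UserAgent()
    ua.set_handled_schemes(["http", "ftp"])
    ua.addheaders = [("User-agent", "Mozilla/5.0 (compatible)")]
    ua.set_handle_robots(False)                        # skip robots.txt checks
    ua.add_password("http://example.com/admin/", "joe", "password")

    response = ua.open("http://example.com/")          # file-like response object
    print(response.read()[:60])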
- pm = ppm = None - if "_basicauth" in ua_handlers or "_digestauth" in ua_handlers: - pm = _urllib2.HTTPPasswordMgrWithDefaultRealm() - if ("_proxy_basicauth" in ua_handlers or - "_proxy_digestauth" in ua_handlers): - ppm = _auth.HTTPProxyPasswordMgr() - self.set_password_manager(pm) - self.set_proxy_password_manager(ppm) - # set default certificate manager - if "https" in ua_handlers: - cm = _urllib2.HTTPSClientCertMgr() - self.set_client_cert_manager(cm) - - def close(self): - _opener.OpenerDirector.close(self) - self._ua_handlers = None - - # XXX -## def set_timeout(self, timeout): -## self._timeout = timeout -## def set_http_connection_cache(self, conn_cache): -## self._http_conn_cache = conn_cache -## def set_ftp_connection_cache(self, conn_cache): -## # XXX ATM, FTP has cache as part of handler; should it be separate? -## self._ftp_conn_cache = conn_cache - - def set_handled_schemes(self, schemes): - """Set sequence of URL scheme (protocol) strings. - - For example: ua.set_handled_schemes(["http", "ftp"]) - - If this fails (with ValueError) because you've passed an unknown - scheme, the set of handled schemes will not be changed. - - """ - want = {} - for scheme in schemes: - if scheme.startswith("_"): - raise ValueError("not a scheme '%s'" % scheme) - if scheme not in self.handler_classes: - raise ValueError("unknown scheme '%s'") - want[scheme] = None - - # get rid of scheme handlers we don't want - for scheme, oldhandler in self._ua_handlers.items(): - if scheme.startswith("_"): continue # not a scheme handler - if scheme not in want: - self._replace_handler(scheme, None) - else: - del want[scheme] # already got it - # add the scheme handlers that are missing - for scheme in want.keys(): - self._set_handler(scheme, True) - - def set_cookiejar(self, cookiejar): - """Set a mechanize.CookieJar, or None.""" - self._set_handler("_cookies", obj=cookiejar) - - # XXX could use Greg Stein's httpx for some of this instead? - # or httplib2?? - def set_proxies(self, proxies=None, proxy_bypass=None): - """Configure proxy settings. - - proxies: dictionary mapping URL scheme to proxy specification. None - means use the default system-specific settings. - proxy_bypass: function taking hostname, returning whether proxy should - be used. None means use the default system-specific settings. - - The default is to try to obtain proxy settings from the system (see the - documentation for urllib.urlopen for information about the - system-specific methods used -- note that's urllib, not urllib2). - - To avoid all use of proxies, pass an empty proxies dict. - - >>> ua = UserAgentBase() - >>> def proxy_bypass(hostname): - ... return hostname == "noproxy.com" - >>> ua.set_proxies( - ... {"http": "joe:password@myproxy.example.com:3128", - ... "ftp": "proxy.example.com"}, - ... proxy_bypass) - - """ - self._set_handler("_proxy", True, - constructor_kwds=dict(proxies=proxies, - proxy_bypass=proxy_bypass)) - - def add_password(self, url, user, password, realm=None): - self._password_manager.add_password(realm, url, user, password) - def add_proxy_password(self, user, password, hostport=None, realm=None): - self._proxy_password_manager.add_password( - realm, hostport, user, password) - - def add_client_certificate(self, url, key_file, cert_file): - """Add an SSL client certificate, for HTTPS client auth. - - key_file and cert_file must be filenames of the key and certificate - files, in PEM format. You can use e.g. 
OpenSSL to convert a p12 (PKCS - 12) file to PEM format: - - openssl pkcs12 -clcerts -nokeys -in cert.p12 -out cert.pem - openssl pkcs12 -nocerts -in cert.p12 -out key.pem - - - Note that client certificate password input is very inflexible ATM. At - the moment this seems to be console only, which is presumably the - default behaviour of libopenssl. In future mechanize may support - third-party libraries that (I assume) allow more options here. - - """ - self._client_cert_manager.add_key_cert(url, key_file, cert_file) - - # the following are rarely useful -- use add_password / add_proxy_password - # instead - def set_password_manager(self, password_manager): - """Set a mechanize.HTTPPasswordMgrWithDefaultRealm, or None.""" - self._password_manager = password_manager - self._set_handler("_basicauth", obj=password_manager) - self._set_handler("_digestauth", obj=password_manager) - def set_proxy_password_manager(self, password_manager): - """Set a mechanize.HTTPProxyPasswordMgr, or None.""" - self._proxy_password_manager = password_manager - self._set_handler("_proxy_basicauth", obj=password_manager) - self._set_handler("_proxy_digestauth", obj=password_manager) - def set_client_cert_manager(self, cert_manager): - """Set a mechanize.HTTPClientCertMgr, or None.""" - self._client_cert_manager = cert_manager - handler = self._ua_handlers["https"] - handler.client_cert_manager = cert_manager - - # these methods all take a boolean parameter - def set_handle_robots(self, handle): - """Set whether to observe rules from robots.txt.""" - self._set_handler("_robots", handle) - def set_handle_redirect(self, handle): - """Set whether to handle HTTP 30x redirections.""" - self._set_handler("_redirect", handle) - def set_handle_refresh(self, handle, max_time=None, honor_time=True): - """Set whether to handle HTTP Refresh headers.""" - self._set_handler("_refresh", handle, constructor_kwds= - {"max_time": max_time, "honor_time": honor_time}) - def set_handle_equiv(self, handle, head_parser_class=None): - """Set whether to treat HTML http-equiv headers like HTTP headers. - - Response objects may be .seek()able if this is set (currently returned - responses are, raised HTTPError exception responses are not). - - """ - if head_parser_class is not None: - constructor_kwds = {"head_parser_class": head_parser_class} - else: - constructor_kwds={} - self._set_handler("_equiv", handle, constructor_kwds=constructor_kwds) - def set_handle_gzip(self, handle): - """Handle gzip transfer encoding. - - """ - if handle: - warnings.warn( - "gzip transfer encoding is experimental!", stacklevel=2) - self._set_handler("_gzip", handle) - def set_debug_redirects(self, handle): - """Log information about HTTP redirects (including refreshes). - - Logging is performed using module logging. The logger name is - "mechanize.http_redirects". To actually print some debug output, - eg: - - import sys, logging - logger = logging.getLogger("mechanize.http_redirects") - logger.addHandler(logging.StreamHandler(sys.stdout)) - logger.setLevel(logging.INFO) - - Other logger names relevant to this module: - - "mechanize.http_responses" - "mechanize.cookies" - - To turn on everything: - - import sys, logging - logger = logging.getLogger("mechanize") - logger.addHandler(logging.StreamHandler(sys.stdout)) - logger.setLevel(logging.INFO) - - """ - self._set_handler("_debug_redirect", handle) - def set_debug_responses(self, handle): - """Log HTTP response bodies. - - See docstring for .set_debug_redirects() for details of logging. 
- - Response objects may be .seek()able if this is set (currently returned - responses are, raised HTTPError exception responses are not). - - """ - self._set_handler("_debug_response_body", handle) - def set_debug_http(self, handle): - """Print HTTP headers to sys.stdout.""" - level = int(bool(handle)) - for scheme in "http", "https": - h = self._ua_handlers.get(scheme) - if h is not None: - h.set_http_debuglevel(level) - - def _set_handler(self, name, handle=None, obj=None, - constructor_args=(), constructor_kwds={}): - if handle is None: - handle = obj is not None - if handle: - handler_class = self.handler_classes[name] - if obj is not None: - newhandler = handler_class(obj) - else: - newhandler = handler_class( - *constructor_args, **constructor_kwds) - else: - newhandler = None - self._replace_handler(name, newhandler) - - def _replace_handler(self, name, newhandler=None): - # first, if handler was previously added, remove it - if name is not None: - handler = self._ua_handlers.get(name) - if handler: - try: - self.handlers.remove(handler) - except ValueError: - pass - # then add the replacement, if any - if newhandler is not None: - self.add_handler(newhandler) - self._ua_handlers[name] = newhandler - - -class UserAgent(UserAgentBase): - - def __init__(self): - UserAgentBase.__init__(self) - self._seekable = False - - def set_seekable_responses(self, handle): - """Make response objects .seek()able.""" - self._seekable = bool(handle) - - def open(self, fullurl, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - if self._seekable: - def bound_open(fullurl, data=None, - timeout=_sockettimeout._GLOBAL_DEFAULT_TIMEOUT): - return UserAgentBase.open(self, fullurl, data, timeout) - response = _opener.wrapped_open( - bound_open, _response.seek_wrapped_response, fullurl, data, - timeout) - else: - response = UserAgentBase.open(self, fullurl, data) - return response diff --git a/plugin.video.alfa/lib/mechanize/_util.py b/plugin.video.alfa/lib/mechanize/_util.py deleted file mode 100755 index 22f07ae8..00000000 --- a/plugin.video.alfa/lib/mechanize/_util.py +++ /dev/null @@ -1,305 +0,0 @@ -"""Utility functions and date/time routines. - - Copyright 2002-2006 John J Lee <jjl@pobox.com> - -This code is free software; you can redistribute it and/or modify it -under the terms of the BSD or ZPL 2.1 licenses (see the file -COPYING.txt included with the distribution). 
-""" - -import re -import time -import warnings - - -class ExperimentalWarning(UserWarning): - pass - -def experimental(message): - warnings.warn(message, ExperimentalWarning, stacklevel=3) -def hide_experimental_warnings(): - warnings.filterwarnings("ignore", category=ExperimentalWarning) -def reset_experimental_warnings(): - warnings.filterwarnings("default", category=ExperimentalWarning) - -def deprecation(message): - warnings.warn(message, DeprecationWarning, stacklevel=3) -def hide_deprecations(): - warnings.filterwarnings("ignore", category=DeprecationWarning) -def reset_deprecations(): - warnings.filterwarnings("default", category=DeprecationWarning) - - -def write_file(filename, data): - f = open(filename, "wb") - try: - f.write(data) - finally: - f.close() - - -def get1(sequence): - assert len(sequence) == 1 - return sequence[0] - - -def isstringlike(x): - try: x+"" - except: return False - else: return True - -## def caller(): -## try: -## raise SyntaxError -## except: -## import sys -## return sys.exc_traceback.tb_frame.f_back.f_back.f_code.co_name - - -from calendar import timegm - -# Date/time conversion routines for formats used by the HTTP protocol. - -EPOCH = 1970 -def my_timegm(tt): - year, month, mday, hour, min, sec = tt[:6] - if ((year >= EPOCH) and (1 <= month <= 12) and (1 <= mday <= 31) and - (0 <= hour <= 24) and (0 <= min <= 59) and (0 <= sec <= 61)): - return timegm(tt) - else: - return None - -days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] -months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", - "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"] -months_lower = [] -for month in months: months_lower.append(month.lower()) - - -def time2isoz(t=None): - """Return a string representing time in seconds since epoch, t. - - If the function is called without an argument, it will use the current - time. - - The format of the returned string is like "YYYY-MM-DD hh:mm:ssZ", - representing Universal Time (UTC, aka GMT). An example of this format is: - - 1994-11-24 08:49:37Z - - """ - if t is None: t = time.time() - year, mon, mday, hour, min, sec = time.gmtime(t)[:6] - return "%04d-%02d-%02d %02d:%02d:%02dZ" % ( - year, mon, mday, hour, min, sec) - -def time2netscape(t=None): - """Return a string representing time in seconds since epoch, t. - - If the function is called without an argument, it will use the current - time. 
- - The format of the returned string is like this: - - Wed, DD-Mon-YYYY HH:MM:SS GMT - - """ - if t is None: t = time.time() - year, mon, mday, hour, min, sec, wday = time.gmtime(t)[:7] - return "%s %02d-%s-%04d %02d:%02d:%02d GMT" % ( - days[wday], mday, months[mon-1], year, hour, min, sec) - - -UTC_ZONES = {"GMT": None, "UTC": None, "UT": None, "Z": None} - -timezone_re = re.compile(r"^([-+])?(\d\d?):?(\d\d)?$") -def offset_from_tz_string(tz): - offset = None - if UTC_ZONES.has_key(tz): - offset = 0 - else: - m = timezone_re.search(tz) - if m: - offset = 3600 * int(m.group(2)) - if m.group(3): - offset = offset + 60 * int(m.group(3)) - if m.group(1) == '-': - offset = -offset - return offset - -def _str2time(day, mon, yr, hr, min, sec, tz): - # translate month name to number - # month numbers start with 1 (January) - try: - mon = months_lower.index(mon.lower())+1 - except ValueError: - # maybe it's already a number - try: - imon = int(mon) - except ValueError: - return None - if 1 <= imon <= 12: - mon = imon - else: - return None - - # make sure clock elements are defined - if hr is None: hr = 0 - if min is None: min = 0 - if sec is None: sec = 0 - - yr = int(yr) - day = int(day) - hr = int(hr) - min = int(min) - sec = int(sec) - - if yr < 1000: - # find "obvious" year - cur_yr = time.localtime(time.time())[0] - m = cur_yr % 100 - tmp = yr - yr = yr + cur_yr - m - m = m - tmp - if abs(m) > 50: - if m > 0: yr = yr + 100 - else: yr = yr - 100 - - # convert UTC time tuple to seconds since epoch (not timezone-adjusted) - t = my_timegm((yr, mon, day, hr, min, sec, tz)) - - if t is not None: - # adjust time using timezone string, to get absolute time since epoch - if tz is None: - tz = "UTC" - tz = tz.upper() - offset = offset_from_tz_string(tz) - if offset is None: - return None - t = t - offset - - return t - - -strict_re = re.compile(r"^[SMTWF][a-z][a-z], (\d\d) ([JFMASOND][a-z][a-z]) " - r"(\d\d\d\d) (\d\d):(\d\d):(\d\d) GMT$") -wkday_re = re.compile( - r"^(?:Sun|Mon|Tue|Wed|Thu|Fri|Sat)[a-z]*,?\s*", re.I) -loose_http_re = re.compile( - r"""^ - (\d\d?) # day - (?:\s+|[-\/]) - (\w+) # month - (?:\s+|[-\/]) - (\d+) # year - (?: - (?:\s+|:) # separator before clock - (\d\d?):(\d\d) # hour:min - (?::(\d\d))? # optional seconds - )? # optional clock - \s* - ([-+]?\d{2,4}|(?![APap][Mm]\b)[A-Za-z]+)? # timezone - \s* - (?:\(\w+\))? # ASCII representation of timezone in parens. - \s*$""", re.X) -def http2time(text): - """Returns time in seconds since epoch of time represented by a string. - - Return value is an integer. - - None is returned if the format of str is unrecognized, the time is outside - the representable range, or the timezone string is not recognized. If the - string contains no timezone, UTC is assumed. - - The timezone in the string may be numerical (like "-0800" or "+0100") or a - string timezone (like "UTC", "GMT", "BST" or "EST"). Currently, only the - timezone strings equivalent to UTC (zero offset) are known to the function. - - The function loosely parses the following formats: - - Wed, 09 Feb 1994 22:23:32 GMT -- HTTP format - Tuesday, 08-Feb-94 14:15:29 GMT -- old rfc850 HTTP format - Tuesday, 08-Feb-1994 14:15:29 GMT -- broken rfc850 HTTP format - 09 Feb 1994 22:23:32 GMT -- HTTP format (no weekday) - 08-Feb-94 14:15:29 GMT -- rfc850 format (no weekday) - 08-Feb-1994 14:15:29 GMT -- broken rfc850 format (no weekday) - - The parser ignores leading and trailing whitespace. The time may be - absent. 
- - If the year is given with only 2 digits, the function will select the - century that makes the year closest to the current date. - - """ - # fast exit for strictly conforming string - m = strict_re.search(text) - if m: - g = m.groups() - mon = months_lower.index(g[1].lower()) + 1 - tt = (int(g[2]), mon, int(g[0]), - int(g[3]), int(g[4]), float(g[5])) - return my_timegm(tt) - - # No, we need some messy parsing... - - # clean up - text = text.lstrip() - text = wkday_re.sub("", text, 1) # Useless weekday - - # tz is time zone specifier string - day, mon, yr, hr, min, sec, tz = [None]*7 - - # loose regexp parse - m = loose_http_re.search(text) - if m is not None: - day, mon, yr, hr, min, sec, tz = m.groups() - else: - return None # bad format - - return _str2time(day, mon, yr, hr, min, sec, tz) - - -iso_re = re.compile( - """^ - (\d{4}) # year - [-\/]? - (\d\d?) # numerical month - [-\/]? - (\d\d?) # day - (?: - (?:\s+|[-:Tt]) # separator before clock - (\d\d?):?(\d\d) # hour:min - (?::?(\d\d(?:\.\d*)?))? # optional seconds (and fractional) - )? # optional clock - \s* - ([-+]?\d\d?:?(:?\d\d)? - |Z|z)? # timezone (Z is "zero meridian", i.e. GMT) - \s*$""", re.X) -def iso2time(text): - """ - As for http2time, but parses the ISO 8601 formats: - - 1994-02-03 14:15:29 -0100 -- ISO 8601 format - 1994-02-03 14:15:29 -- zone is optional - 1994-02-03 -- only date - 1994-02-03T14:15:29 -- Use T as separator - 19940203T141529Z -- ISO 8601 compact format - 19940203 -- only date - - """ - # clean up - text = text.lstrip() - - # tz is time zone specifier string - day, mon, yr, hr, min, sec, tz = [None]*7 - - # loose regexp parse - m = iso_re.search(text) - if m is not None: - # XXX there's an extra bit of the timezone I'm ignoring here: is - # this the right thing to do? 
- yr, mon, day, hr, min, sec, tz, _ = m.groups() - else: - return None # bad format - - return _str2time(day, mon, yr, hr, min, sec, tz) diff --git a/plugin.video.alfa/lib/mechanize/_version.py b/plugin.video.alfa/lib/mechanize/_version.py deleted file mode 100755 index 171de56f..00000000 --- a/plugin.video.alfa/lib/mechanize/_version.py +++ /dev/null @@ -1,2 +0,0 @@ -"0.2.5" -__version__ = (0, 2, 5, None, None) From 01675f66ad8bd3ce63a48e15d84b85a6cd3d7b35 Mon Sep 17 00:00:00 2001 From: Intel1 <luisriverap@hotmail.com> Date: Fri, 7 Sep 2018 11:33:06 -0500 Subject: [PATCH 16/34] repelis --- plugin.video.alfa/channels/repelis.py | 1 - 1 file changed, 1 deletion(-) diff --git a/plugin.video.alfa/channels/repelis.py b/plugin.video.alfa/channels/repelis.py index 2a6e756d..dff9c978 100644 --- a/plugin.video.alfa/channels/repelis.py +++ b/plugin.video.alfa/channels/repelis.py @@ -9,7 +9,6 @@ from channelselector import get_thumb from channels import autoplay from channels import filtertools from core import httptools -from core import jsontools from core import scrapertools from core import servertools from core import tmdb From acec5ff234a4fd5bcbfb66f53e3b019213273ba1 Mon Sep 17 00:00:00 2001 From: Intel1 <luisriverap@hotmail.com> Date: Tue, 11 Sep 2018 16:56:52 -0500 Subject: [PATCH 17/34] Eliminados seriecanal: web casi todo: solo para "donadores" tusfalise, vidspot: servidores no funcionan --- plugin.video.alfa/channels/seriecanal.json | 2 +- plugin.video.alfa/channels/seriecanal.py | 88 ++++++++-------------- plugin.video.alfa/servers/tusfiles.py | 53 ------------- plugin.video.alfa/servers/vidspot.json | 73 ------------------ plugin.video.alfa/servers/vidspot.py | 57 -------------- 5 files changed, 32 insertions(+), 241 deletions(-) delete mode 100755 plugin.video.alfa/servers/tusfiles.py delete mode 100755 plugin.video.alfa/servers/vidspot.json delete mode 100755 plugin.video.alfa/servers/vidspot.py diff --git a/plugin.video.alfa/channels/seriecanal.json b/plugin.video.alfa/channels/seriecanal.json index b3166f5f..e53459ae 100644 --- a/plugin.video.alfa/channels/seriecanal.json +++ b/plugin.video.alfa/channels/seriecanal.json @@ -1,7 +1,7 @@ { "id": "seriecanal", "name": "Seriecanal", - "active": true, + "active": false, "adult": false, "language": ["cast"], "thumbnail": "http://i.imgur.com/EwMK8Yd.png", diff --git a/plugin.video.alfa/channels/seriecanal.py b/plugin.video.alfa/channels/seriecanal.py index 0ac2bfb4..843966c8 100644 --- a/plugin.video.alfa/channels/seriecanal.py +++ b/plugin.video.alfa/channels/seriecanal.py @@ -4,12 +4,14 @@ import re import urllib import urlparse +from core import httptools from core import scrapertools from core import servertools +from core import tmdb from platformcode import config, logger __modo_grafico__ = config.get_setting('modo_grafico', "seriecanal") -__perfil__ = config.get_setting('perfil', "descargasmix") +__perfil__ = config.get_setting('perfil', "seriecanal") # Fijar perfil de color perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'], @@ -17,23 +19,21 @@ perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'], ['0xFF58D3F7', '0xFF2E9AFE', '0xFF2E64FE']] color1, color2, color3 = perfil[__perfil__] -URL_BASE = "http://www.seriecanal.com/" +host = "https://www.seriecanal.com/" def login(): logger.info() - data = scrapertools.downloadpage(URL_BASE) + data = httptools.downloadpage(host).data if "Cerrar Sesion" in data: return True, "" - usuario = config.get_setting("user", "seriecanal") password = config.get_setting("password", "seriecanal") if 
usuario == "" or password == "": return False, 'Regístrate en www.seriecanal.com e introduce tus datos en "Configurar Canal"' else: post = urllib.urlencode({'username': usuario, 'password': password}) - data = scrapertools.downloadpage("http://www.seriecanal.com/index.php?page=member&do=login&tarea=acceder", - post=post) + data = httptools.downloadpage(host + "index.php?page=member&do=login&tarea=acceder", post=post).data if "Bienvenid@, se ha identificado correctamente en nuestro sistema" in data: return True, "" else: @@ -44,18 +44,15 @@ def mainlist(item): logger.info() itemlist = [] item.text_color = color1 - result, message = login() if result: - itemlist.append(item.clone(action="series", title="Últimos episodios", url=URL_BASE)) + itemlist.append(item.clone(action="series", title="Últimos episodios", url=host)) itemlist.append(item.clone(action="genero", title="Series por género")) itemlist.append(item.clone(action="alfabetico", title="Series por orden alfabético")) itemlist.append(item.clone(action="search", title="Buscar...")) else: itemlist.append(item.clone(action="", title=message, text_color="red")) - itemlist.append(item.clone(action="configuracion", title="Configurar canal...", text_color="gold", folder=False)) - return itemlist @@ -68,7 +65,7 @@ def configuracion(item): def search(item, texto): logger.info() - item.url = "http://www.seriecanal.com/index.php?page=portada&do=category&method=post&category_id=0&order=" \ + item.url = host + "index.php?page=portada&do=category&method=post&category_id=0&order=" \ "C_Create&view=thumb&pgs=1&p2=1" try: post = "keyserie=" + texto @@ -85,27 +82,24 @@ def search(item, texto): def genero(item): logger.info() itemlist = [] - data = scrapertools.downloadpage(URL_BASE) + data = httptools.downloadpage(host).data data = scrapertools.find_single_match(data, '<ul class="tag-cloud">(.*?)</ul>') - matches = scrapertools.find_multiple_matches(data, '<a.*?href="([^"]+)">([^"]+)</a>') for scrapedurl, scrapedtitle in matches: scrapedtitle = scrapedtitle.capitalize() - url = urlparse.urljoin(URL_BASE, scrapedurl) + url = urlparse.urljoin(host, scrapedurl) itemlist.append(item.clone(action="series", title=scrapedtitle, url=url)) - return itemlist def alfabetico(item): logger.info() itemlist = [] - data = scrapertools.downloadpage(URL_BASE) + data = httptools.downloadpage(host).data data = scrapertools.find_single_match(data, '<ul class="pagination pagination-sm" style="margin:5px 0;">(.*?)</ul>') - matches = scrapertools.find_multiple_matches(data, '<a.*?href="([^"]+)">([^"]+)</a>') for scrapedurl, scrapedtitle in matches: - url = urlparse.urljoin(URL_BASE, scrapedurl) + url = urlparse.urljoin(host, scrapedurl) itemlist.append(item.clone(action="series", title=scrapedtitle, url=url)) return itemlist @@ -115,45 +109,38 @@ def series(item): itemlist = [] item.infoLabels = {} item.text_color = color2 - if item.extra != "": - data = scrapertools.downloadpage(item.url, post=item.extra) + data = httptools.downloadpage(item.url, post=item.extra).data else: - data = scrapertools.downloadpage(item.url) + data = httptools.downloadpage(item.url).data data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) patron = '<div class="item-inner" style="margin: 0 20px 0px 0\;"><img src="([^"]+)".*?' \ 'href="([^"]+)" title="Click para Acceder a la Ficha(?:\|([^"]+)|)".*?' \ '<strong>([^"]+)</strong></a>.*?<strong>([^"]+)</strong></p>.*?' 
\ '<p class="text-warning".*?\;">(.*?)</p>' - matches = scrapertools.find_multiple_matches(data, patron) - for scrapedthumbnail, scrapedurl, scrapedplot, scrapedtitle, scrapedtemp, scrapedepi in matches: title = scrapedtitle + " - " + scrapedtemp + " - " + scrapedepi - url = urlparse.urljoin(URL_BASE, scrapedurl) - temporada = scrapertools.find_single_match(scrapedtemp, "(\d+)") - new_item = item.clone() - new_item.contentType = "tvshow" + url = urlparse.urljoin(host, scrapedurl) + temporada = scrapertools.find_single_match(scrapedtemp, "\d+") + episode = scrapertools.find_single_match(scrapedepi, "\d+") + #item.contentType = "tvshow" if temporada != "": - new_item.infoLabels['season'] = temporada - new_item.contentType = "season" - - logger.debug("title=[" + title + "], url=[" + url + "], thumbnail=[" + scrapedthumbnail + "]") - itemlist.append(new_item.clone(action="findvideos", title=title, fulltitle=scrapedtitle, url=url, - thumbnail=scrapedthumbnail, plot=scrapedplot, contentTitle=scrapedtitle, - context=["buscar_trailer"], show=scrapedtitle)) - - try: - from core import tmdb - tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__) - except: - pass + item.infoLabels['season'] = temporada + #item.contentType = "season" + if episode != "": + item.infoLabels['episode'] = episode + #item.contentType = "episode" + itemlist.append(item.clone(action="findvideos", title=title, url=url, + contentSerieName=scrapedtitle, + context=["buscar_trailer"])) + tmdb.set_infoLabels(itemlist) # Extra marca siguiente página next_page = scrapertools.find_single_match(data, '<a href="([^"]+)" (?:onclick="return false;" |)title=' '"Página Siguiente"') if next_page != "/": - url = urlparse.urljoin(URL_BASE, next_page) + url = urlparse.urljoin(host, next_page) itemlist.append(item.clone(action="series", title=">> Siguiente", url=url, text_color=color3)) return itemlist @@ -163,10 +150,8 @@ def findvideos(item): logger.info() itemlist = [] item.text_color = color3 - - data = scrapertools.downloadpage(item.url) + data = httptools.downloadpage(item.url).data data = scrapertools.decodeHtmlentities(data) - # Busca en la seccion descarga/torrent data_download = scrapertools.find_single_match(data, '<th>Episodio - Enlaces de Descarga</th>(.*?)</table>') patron = '<p class="item_name".*?<a href="([^"]+)".*?>([^"]+)</a>' @@ -178,18 +163,15 @@ def findvideos(item): else: scrapedtitle = "[Torrent] " + scrapedepi scrapedtitle = scrapertools.htmlclean(scrapedtitle) - new_item.infoLabels['episode'] = scrapertools.find_single_match(scrapedtitle, "Episodio (\d+)") logger.debug("title=[" + scrapedtitle + "], url=[" + scrapedurl + "]") itemlist.append(new_item.clone(action="play", title=scrapedtitle, url=scrapedurl, server="torrent", contentType="episode")) - # Busca en la seccion online data_online = scrapertools.find_single_match(data, "<th>Enlaces de Visionado Online</th>(.*?)</table>") patron = '<a href="([^"]+)\\n.*?src="([^"]+)".*?' 
\ 'title="Enlace de Visionado Online">([^"]+)</a>' matches = scrapertools.find_multiple_matches(data_online, patron) - for scrapedurl, scrapedthumb, scrapedtitle in matches: # Deshecha enlaces de trailers scrapedtitle = scrapertools.htmlclean(scrapedtitle) @@ -200,7 +182,6 @@ def findvideos(item): new_item.infoLabels['episode'] = scrapertools.find_single_match(scrapedtitle, "Episodio (\d+)") itemlist.append(new_item.clone(action="play", title=title, url=scrapedurl, contentType="episode")) - # Comprueba si hay otras temporadas if not "No hay disponible ninguna Temporada adicional" in data: data_temp = scrapertools.find_single_match(data, '<div class="panel panel-success">(.*?)</table>') @@ -210,7 +191,7 @@ def findvideos(item): matches = scrapertools.find_multiple_matches(data_temp, patron) for scrapedurl, scrapedtitle in matches: new_item = item.clone() - url = urlparse.urljoin(URL_BASE, scrapedurl) + url = urlparse.urljoin(host, scrapedurl) scrapedtitle = scrapedtitle.capitalize() temporada = scrapertools.find_single_match(scrapedtitle, "Temporada (\d+)") if temporada != "": @@ -218,13 +199,7 @@ def findvideos(item): new_item.infoLabels['episode'] = "" itemlist.append(new_item.clone(action="findvideos", title=scrapedtitle, url=url, text_color="red", contentType="season")) - - try: - from core import tmdb - tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__) - except: - pass - + tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__) new_item = item.clone() if config.is_xbmc(): new_item.contextual = True @@ -236,7 +211,6 @@ def findvideos(item): def play(item): logger.info() itemlist = [] - if item.extra == "torrent": itemlist.append(item.clone()) else: diff --git a/plugin.video.alfa/servers/tusfiles.py b/plugin.video.alfa/servers/tusfiles.py deleted file mode 100755 index 9b389558..00000000 --- a/plugin.video.alfa/servers/tusfiles.py +++ /dev/null @@ -1,53 +0,0 @@ -# -*- coding: utf-8 -*- - -from core import httptools -from core import scrapertools -from platformcode import logger - - -def test_video_exists(page_url): - logger.info("(page_url='%s')" % page_url) - - if "tusfiles.net" in page_url: - data = httptools.downloadpage(page_url).data - - if "File Not Found" in data: - return False, "[Tusfiles] El archivo no existe o ha sido borrado" - if "download is no longer available" in data: - return False, "[Tusfiles] El archivo ya no está disponible" - - return True, "" - - -def get_video_url(page_url, premium=False, user="", password="", video_password=""): - logger.info("page_url='%s'" % page_url) - - # Saca el código del vídeo - data = httptools.downloadpage(page_url).data.replace("\\", "") - video_urls = [] - - if "tusfiles.org" in page_url: - matches = scrapertools.find_multiple_matches(data, - '"label"\s*:\s*(.*?),"type"\s*:\s*"([^"]+)","file"\s*:\s*"([^"]+)"') - for calidad, tipo, video_url in matches: - tipo = tipo.replace("video/", "") - video_urls.append([".%s %sp [tusfiles]" % (tipo, calidad), video_url]) - - video_urls.sort(key=lambda it: int(it[0].split("p ", 1)[0].rsplit(" ")[1])) - else: - matches = scrapertools.find_multiple_matches(data, '<source src="([^"]+)" type="([^"]+)"') - for video_url, tipo in matches: - tipo = tipo.replace("video/", "") - video_urls.append([".%s [tusfiles]" % tipo, video_url]) - - id = scrapertools.find_single_match(data, 'name="id" value="([^"]+)"') - rand = scrapertools.find_single_match(data, 'name="rand" value="([^"]+)"') - if id and rand: - post = "op=download2&id=%s&rand=%s&referer=&method_free=&method_premium=" % (id, rand) - location 
= httptools.downloadpage(page_url, post, follow_redirects=False, only_headers=True).headers.get( - "location") - if location: - ext = location[-4:] - video_urls.append(["%s [tusfiles]" % ext, location]) - - return video_urls diff --git a/plugin.video.alfa/servers/vidspot.json b/plugin.video.alfa/servers/vidspot.json deleted file mode 100755 index e19002e8..00000000 --- a/plugin.video.alfa/servers/vidspot.json +++ /dev/null @@ -1,73 +0,0 @@ -{ - "active": true, - "find_videos": { - "ignore_urls": [ - "http://vidspot.net/embed-theme.html", - "http://vidspot.net/embed-jquery.html", - "http://vidspot.net/embed-s.html", - "http://vidspot.net/embed-images.html", - "http://vidspot.net/embed-faq.html", - "http://vidspot.net/embed-embed.html", - "http://vidspot.net/embed-ri.html", - "http://vidspot.net/embed-d.html", - "http://vidspot.net/embed-css.html", - "http://vidspot.net/embed-js.html", - "http://vidspot.net/embed-player.html", - "http://vidspot.net/embed-cgi.html", - "http://vidspot.net/embed-i.html", - "http://vidspot.net/images", - "http://vidspot.net/theme", - "http://vidspot.net/xupload", - "http://vidspot.net/s", - "http://vidspot.net/js", - "http://vidspot.net/jquery", - "http://vidspot.net/login", - "http://vidspot.net/make", - "http://vidspot.net/i", - "http://vidspot.net/faq", - "http://vidspot.net/tos", - "http://vidspot.net/premium", - "http://vidspot.net/checkfiles", - "http://vidspot.net/privacy", - "http://vidspot.net/refund", - "http://vidspot.net/links", - "http://vidspot.net/contact" - ], - "patterns": [ - { - "pattern": "vidspot.(?:net/|php\\?id=)(?:embed-)?([a-z0-9]+)", - "url": "http://vidspot.net/\\1" - } - ] - }, - "free": true, - "id": "vidspot", - "name": "vidspot", - "settings": [ - { - "default": false, - "enabled": true, - "id": "black_list", - "label": "@60654", - "type": "bool", - "visible": true - }, - { - "default": 0, - "enabled": true, - "id": "favorites_servers_list", - "label": "@60655", - "lvalues": [ - "No", - "1", - "2", - "3", - "4", - "5" - ], - "type": "list", - "visible": false - } - ], - "thumbnail": "server_vidspot.png" -} \ No newline at end of file diff --git a/plugin.video.alfa/servers/vidspot.py b/plugin.video.alfa/servers/vidspot.py deleted file mode 100755 index e5dd133e..00000000 --- a/plugin.video.alfa/servers/vidspot.py +++ /dev/null @@ -1,57 +0,0 @@ -# -*- coding: utf-8 -*- - -from core import scrapertools -from platformcode import logger - - -def test_video_exists(page_url): - logger.info("(page_url='%s')" % page_url) - - # No existe / borrado: http://vidspot.net/8jcgbrzhujri - data = scrapertools.cache_page("http://anonymouse.org/cgi-bin/anon-www.cgi/" + page_url) - if "File Not Found" in data or "Archivo no encontrado" in data or '<b class="err">Deleted' in data \ - or '<b class="err">Removed' in data or '<font class="err">No such' in data: - return False, "No existe o ha sido borrado de vidspot" - - return True, "" - - -def get_video_url(page_url, premium=False, user="", password="", video_password=""): - logger.info("url=%s" % page_url) - - # Normaliza la URL - videoid = scrapertools.get_match(page_url, "http://vidspot.net/([a-z0-9A-Z]+)") - page_url = "http://vidspot.net/embed-%s-728x400.html" % videoid - data = scrapertools.cachePage(page_url) - if "Access denied" in data: - geobloqueo = True - else: - geobloqueo = False - - if geobloqueo: - url = "http://www.videoproxy.co/hide.php" - post = "go=%s" % page_url - location = scrapertools.get_header_from_response(url, post=post, header_to_get="location") - url = 
"http://www.videoproxy.co/%s" % location - data = scrapertools.cachePage(url) - - # Extrae la URL - media_url = scrapertools.find_single_match(data, '"file" : "([^"]+)",') - - video_urls = [] - - if media_url != "": - if geobloqueo: - url = "http://www.videoproxy.co/hide.php" - post = "go=%s" % media_url - location = scrapertools.get_header_from_response(url, post=post, header_to_get="location") - media_url = "http://www.videoproxy.co/%s&direct=false" % location - else: - media_url += "&direct=false" - - video_urls.append([scrapertools.get_filename_from_url(media_url)[-4:] + " [vidspot]", media_url]) - - for video_url in video_urls: - logger.info("%s - %s" % (video_url[0], video_url[1])) - - return video_urls From baa2bb87f9be5754d0e9e05e7235b933e6a52378 Mon Sep 17 00:00:00 2001 From: Intel1 <luisriverap@hotmail.com> Date: Tue, 11 Sep 2018 17:01:58 -0500 Subject: [PATCH 18/34] Varios 2 danimados: agregado buscador del canal sipeliculas: fix play megadrive: nuevo server --- plugin.video.alfa/channels/danimados.json | 10 ++ plugin.video.alfa/channels/danimados.py | 101 ++++++++++-------- plugin.video.alfa/channels/sipeliculas.py | 53 +++------ .../servers/{tusfiles.json => megadrive.json} | 17 ++- plugin.video.alfa/servers/megadrive.py | 27 +++++ 5 files changed, 118 insertions(+), 90 deletions(-) rename plugin.video.alfa/servers/{tusfiles.json => megadrive.json} (68%) mode change 100755 => 100644 create mode 100644 plugin.video.alfa/servers/megadrive.py diff --git a/plugin.video.alfa/channels/danimados.json b/plugin.video.alfa/channels/danimados.json index 0bd6230e..44d5a628 100644 --- a/plugin.video.alfa/channels/danimados.json +++ b/plugin.video.alfa/channels/danimados.json @@ -8,5 +8,15 @@ "banner": "https://imgur.com/xG5xqBq.png", "categories": [ "tvshow" + ], + "settings": [ + { + "id": "include_in_global_search", + "type": "bool", + "label": "Incluir en busqueda global", + "default": true, + "enabled": true, + "visible": true + } ] } diff --git a/plugin.video.alfa/channels/danimados.py b/plugin.video.alfa/channels/danimados.py index 32891a44..38c6e878 100644 --- a/plugin.video.alfa/channels/danimados.py +++ b/plugin.video.alfa/channels/danimados.py @@ -1,6 +1,7 @@ # -*- coding: utf-8 -*- import re +import base64 from channelselector import get_thumb from core import httptools @@ -22,48 +23,64 @@ list_quality = ['default'] def mainlist(item): logger.info() - thumb_series = get_thumb("channels_tvshow.png") autoplay.init(item.channel, list_servers, list_quality) - itemlist = list() - itemlist.append(Item(channel=item.channel, action="mainpage", title="Categorías", url=host, thumbnail=thumb_series)) - itemlist.append(Item(channel=item.channel, action="mainpage", title="Más Populares", url=host, - thumbnail=thumb_series)) itemlist.append(Item(channel=item.channel, action="lista", title="Peliculas Animadas", url=host+"peliculas/", thumbnail=thumb_series)) + itemlist.append(Item(channel=item.channel, action="search", title="Buscar", url=host + "?s=", + thumbnail=thumb_series)) autoplay.show_option(item.channel, itemlist) return itemlist -""" def search(item, texto): logger.info() texto = texto.replace(" ","+") - item.url = item.url+texto + item.url = host + "?s=" + texto if texto!='': - return lista(item) -""" + return sub_search(item) + + +def sub_search(item): + logger.info() + itemlist = [] + data = httptools.downloadpage(item.url).data + patron = 'class="thumbnail animation-.*?href="([^"]+).*?' + patron += 'img src="([^"]+).*?' + patron += 'alt="([^"]+).*?' 
+ patron += 'class="year">(\d{4})' + matches = scrapertools.find_multiple_matches(data, patron) + for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedyear in matches: + item.action = "findvideos" + item.contentTitle = scrapedtitle + item.contentSerieName = "" + if "serie" in scrapedurl: + item.action = "episodios" + item.contentTitle = "" + item.contentSerieName = scrapedtitle + title = scrapedtitle + if scrapedyear: + item.infoLabels['year'] = int(scrapedyear) + title += " (%s)" %item.infoLabels['year'] + itemlist.append(item.clone(thumbnail = scrapedthumbnail, + title = title, + url = scrapedurl + )) + tmdb.set_infoLabels(itemlist) + return itemlist def mainpage(item): logger.info() - itemlist = [] - data1 = httptools.downloadpage(item.url).data data1 = re.sub(r"\n|\r|\t|\s{2}| ", "", data1) - if item.title=="Más Populares": - patron_sec='<a class="lglossary" data-type.+?>(.+?)<\/ul>' - patron='<img .+? src="([^"]+)".+?<a href="([^"]+)".+?>([^"]+)<\/a>' #scrapedthumbnail, #scrapedurl, #scrapedtitle - if item.title=="Categorías": - patron_sec='<ul id="main_header".+?>(.+?)<\/ul><\/div>' - patron='<a href="([^"]+)">([^"]+)<\/a>'#scrapedurl, #scrapedtitle - + patron_sec='<ul id="main_header".+?>(.+?)<\/ul><\/div>' + patron='<a href="([^"]+)">([^"]+)<\/a>'#scrapedurl, #scrapedtitle data = scrapertools.find_single_match(data1, patron_sec) - matches = scrapertools.find_multiple_matches(data, patron) if item.title=="Géneros" or item.title=="Categorías": for scrapedurl, scrapedtitle in matches: @@ -82,11 +99,10 @@ def mainpage(item): return itemlist return itemlist + def lista(item): logger.info() - itemlist = [] - data = httptools.downloadpage(item.url).data data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) if item.title=="Peliculas Animadas": @@ -114,8 +130,8 @@ def lista(item): def episodios(item): logger.info() - itemlist = [] + infoLabels = {} data = httptools.downloadpage(item.url).data data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) data_lista = scrapertools.find_single_match(data, @@ -123,51 +139,52 @@ def episodios(item): show = item.title patron_caps = '<img alt=".+?" src="([^"]+)"><\/a><\/div><div class=".+?">([^"]+)<\/div>.+?' patron_caps += '<a .+? 
href="([^"]+)">([^"]+)<\/a>' - #scrapedthumbnail,#scrapedtempepi, #scrapedurl, #scrapedtitle matches = scrapertools.find_multiple_matches(data_lista, patron_caps) for scrapedthumbnail, scrapedtempepi, scrapedurl, scrapedtitle in matches: tempepi=scrapedtempepi.split(" - ") if tempepi[0]=='Pel': tempepi[0]=0 title="{0}x{1} - ({2})".format(tempepi[0], tempepi[1].zfill(2), scrapedtitle) - itemlist.append(Item(channel=item.channel, thumbnail=scrapedthumbnail, - action="findvideos", title=title, url=scrapedurl, show=show)) - + item.infoLabels["season"] = tempepi[0] + item.infoLabels["episode"] = tempepi[1] + itemlist.append(item.clone(thumbnail=scrapedthumbnail, + action="findvideos", title=title, url=scrapedurl)) if config.get_videolibrary_support() and len(itemlist) > 0: itemlist.append(Item(channel=item.channel, title="[COLOR yellow]Añadir " + show + " a la videoteca[/COLOR]", url=item.url, action="add_serie_to_library", extra="episodios", show=show)) - - return itemlist def findvideos(item): logger.info() - import base64 - itemlist = [] - data = httptools.downloadpage(item.url).data data = re.sub(r"\n|\r|\t|\s{2}| ", "", data) data1 = scrapertools.find_single_match(data, '<div id="playex" .+?>(.+?)<\/nav>?\s<\/div><\/div>') patron = "changeLink\('([^']+)'\)" matches = re.compile(patron, re.DOTALL).findall(data1) - for url64 in matches: - url =base64.b64decode(url64) - if 'danimados' in url: - new_data = httptools.downloadpage('https:'+url.replace('stream', 'stream_iframe')).data - url = scrapertools.find_single_match(new_data, '<source src="([^"]+)"') - - itemlist.append(item.clone(title='%s',url=url, action="play")) - + url1 =base64.b64decode(url64) + if 'danimados' in url1: + new_data = httptools.downloadpage('https:'+url1.replace('stream', 'stream_iframe')).data + logger.info("Intel33 %s" %new_data) + url = scrapertools.find_single_match(new_data, "sources: \[\{file:'([^']+)") + if "zkstream" in url: + url1 = httptools.downloadpage(url, follow_redirects=False, only_headers=True).headers.get("location", "") + else: + url1 = url + itemlist.append(item.clone(title='%s',url=url1, action="play")) + tmdb.set_infoLabels(itemlist) itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize()) - if config.get_videolibrary_support() and len(itemlist) > 0 and item.contentType=="movie" and item.contentChannel!='videolibrary': itemlist.append( item.clone(channel=item.channel, title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]', url=item.url, - action="add_pelicula_to_library", contentTitle=item.show)) - + action="add_pelicula_to_library")) autoplay.start(itemlist, item) return itemlist + + +def play(item): + item.thumbnail = item.contentThumbnail + return [item] diff --git a/plugin.video.alfa/channels/sipeliculas.py b/plugin.video.alfa/channels/sipeliculas.py index 7b4ca689..66167233 100755 --- a/plugin.video.alfa/channels/sipeliculas.py +++ b/plugin.video.alfa/channels/sipeliculas.py @@ -1,8 +1,5 @@ # -*- coding: utf-8 -*- -import re -import urlparse - from core import httptools from core import scrapertools from core import servertools @@ -12,10 +9,8 @@ from platformcode import logger host = 'http://www.sipeliculas.com' - def mainlist(item): logger.info() - itemlist = [] itemlist.append(item.clone(title="Novedades", action="lista", url=host + "/cartelera/")) itemlist.append(item.clone(title="Actualizadas", action="lista", url=host + "/peliculas-actualizadas/")) @@ -24,7 +19,6 @@ def mainlist(item): itemlist.append(item.clone(title="Año", 
action="menuseccion", url=host, extra="/estrenos-gratis/")) itemlist.append(item.clone(title="Alfabetico", action="alfabetica", url=host + '/mirar/')) itemlist.append(item.clone(title="Buscar", action="search", url=host + "/ver/")) - return itemlist @@ -33,7 +27,6 @@ def alfabetica(item): itemlist = [] for letra in "1abcdefghijklmnopqrstuvwxyz": itemlist.append(item.clone(title=letra.upper(), url=item.url + letra, action="lista")) - return itemlist @@ -42,7 +35,6 @@ def menuseccion(item): itemlist = [] seccion = item.extra data = httptools.downloadpage(item.url).data - if seccion == '/online/': data = scrapertools.find_single_match(data, '<h2 class="[^"]+"><i class="[^"]+"></i>Películas por géneros<u class="[^"]+"></u></h2>(.*?)<ul class="abc">') @@ -50,8 +42,7 @@ def menuseccion(item): elif seccion == '/estrenos-gratis/': data = scrapertools.find_single_match(data, '<ul class="lista-anio" id="lista-anio">(.*?)</ul>') patron = '<li ><a href="([^"]+)" title="[^"]+">([^<]+)</a></li>' - - matches = re.compile(patron, re.DOTALL).findall(data) + matches = scrapertools.find_multiple_matches(data, patron) for scrapedurl, extra in matches: itemlist.append(Item(channel=item.channel, action='lista', title=extra, url=scrapedurl)) return itemlist @@ -64,22 +55,19 @@ def lista(item): listado = scrapertools.find_single_match(data, '<div id="sipeliculas" class="borde"><div class="izquierda">(.*?)<div class="derecha"><h2') patron = '<a class="i" href="(.*?)".*?src="(.*?)".*?title=.*?>(.*?)<.*?span>(.*?)<.*?<p><span>(.*?)<' - - matches = re.compile(patron, re.DOTALL).findall(listado) - + matches = scrapertools.find_multiple_matches(listado, patron) for scrapedurl, scrapedthumbnail, scrapedtitle, year, plot in matches: - itemlist.append(Item(channel=item.channel, action='findvideos', title=scrapedtitle, url=scrapedurl, - thumbnail=scrapedthumbnail, plot=plot, contentTitle=scrapedtitle, extra=item.extra, + itemlist.append(Item(channel=item.channel, action='findvideos', title=scrapedtitle + " (%s)" %year, url=scrapedurl, + thumbnail=scrapedthumbnail, contentTitle=scrapedtitle, extra=item.extra, infoLabels ={'year':year})) - tmdb.set_infoLabels_itemlist(itemlist, seekTmdb=True) # Paginacion if itemlist != []: patron = '<li[^<]+<a href="([^"]+)" title="[^"]+">Siguiente[^<]+</a></li>' - matches = re.compile(patron, re.DOTALL).findall(data) + matches = scrapertools.find_multiple_matches(data, patron) if matches: itemlist.append( - item.clone(title="Pagina Siguiente", action='lista', url=urlparse.urljoin(host, matches[0]))) + item.clone(title="Pagina Siguiente", action='lista', url=host + "/" + matches[0])) return itemlist @@ -97,11 +85,10 @@ def findvideos(item): logger.info() itemlist = [] data = httptools.downloadpage(item.url).data - listado1 = scrapertools.find_single_match(data, '<div class="links" id="ver-mas-opciones"><h2 class="h2"><i class="[^"]+"></i>[^<]+</h2><ul class="opciones">(.*?)</ul>') patron1 = '<li ><a id="([^"]+)" rel="nofollow" href="([^"]+)" title="[^"]+" alt="([^"]+)"><span class="opcion"><i class="[^"]+"></i><u>[^<]+</u>[^<]+</span><span class="ico"><img src="[^"]+" alt="[^"]+"/>[^<]+</span><span>([^"]+)</span><span>([^"]+)</span></a></li>' - matches = matches = re.compile(patron1, re.DOTALL).findall(listado1) + matches = matches = scrapertools.find_multiple_matches(listado1, patron1) for vidId, vidUrl, vidServer, language, quality in matches: server = servertools.get_server_name(vidServer) if 'Sub' in language: @@ -109,39 +96,32 @@ def findvideos(item): 
itemlist.append(Item(channel=item.channel, action='play', url=vidUrl, extra=vidId, title='Ver en ' + vidServer + ' | ' + language + ' | ' + quality, thumbnail=item.thumbnail, server=server, language=language, quality=quality )) - listado2 = scrapertools.find_single_match(data, '<ul class="opciones-tab">(.*?)</ul>') patron2 = '<li ><a id="([^"]+)" rel="nofollow" href="([^"]+)" title="[^"]+" alt="([^"]+)"><img src="[^"]+" alt="[^"]+"/>[^<]+</a></li>' - matches = matches = re.compile(patron2, re.DOTALL).findall(listado2) + matches = matches = scrapertools.find_multiple_matches(listado2, patron2) for vidId, vidUrl, vidServer in matches: server = servertools.get_server_name(vidServer) itemlist.append(Item(channel=item.channel, action='play', url=vidUrl, extra=vidId, title='Ver en ' + vidServer, thumbnail=item.thumbnail, server=server)) - for videoitem in itemlist: videoitem.fulltitle = item.title videoitem.folder = False - return itemlist def play(item): logger.info() itemlist = [] - - video = httptools.downloadpage(host + '/ajax.public.php', 'acc=ver_opc&f=' + item.extra).data - logger.info("video=" + video) - enlaces = servertools.findvideos(video) - if enlaces: - logger.info("server=" + enlaces[0][2]) - thumbnail = servertools.guess_server_thumbnail(video) - # Añade al listado de XBMC + data = httptools.downloadpage(item.url).data + video = scrapertools.find_single_match(data, '</div><iframe src="([^"]+)') + if video: itemlist.append( - Item(channel=item.channel, action="play", title=item.title, fulltitle=item.fulltitle, url=enlaces[0][1], - server=enlaces[0][2], thumbnail=thumbnail, folder=False)) - + item.clone(action="play", url=video, folder=False, server="")) + itemlist = servertools.get_servers_itemlist(itemlist) + itemlist[0].thumbnail = item.contentThumbnail return itemlist + def newest(categoria): logger.info() itemlist = [] @@ -155,16 +135,13 @@ def newest(categoria): item.url = host + "/online/terror/" else: return [] - itemlist = lista(item) if itemlist[-1].title == "» Siguiente »": itemlist.pop() - # Se captura la excepción, para no interrumpir al canal novedades si un canal falla except: import sys for line in sys.exc_info(): logger.error("{0}".format(line)) return [] - return itemlist diff --git a/plugin.video.alfa/servers/tusfiles.json b/plugin.video.alfa/servers/megadrive.json old mode 100755 new mode 100644 similarity index 68% rename from plugin.video.alfa/servers/tusfiles.json rename to plugin.video.alfa/servers/megadrive.json index a6fda066..da5d8b62 --- a/plugin.video.alfa/servers/tusfiles.json +++ b/plugin.video.alfa/servers/megadrive.json @@ -4,18 +4,14 @@ "ignore_urls": [], "patterns": [ { - "pattern": "http://tusfiles.org/\\?([A-z0-9]+)", - "url": "http://tusfiles.org/?\\1/" - }, - { - "pattern": "tusfiles.net/(?:embed-|)([A-z0-9]+)", - "url": "http://tusfiles.net/\\1" + "pattern": "megadrive.co/embed/([A-z0-9]+)", + "url": "https://megadrive.co/embed/\\1" } ] }, "free": true, - "id": "tusfiles", - "name": "tusfiles", + "id": "megadrive", + "name": "megadrive", "settings": [ { "default": false, @@ -41,5 +37,6 @@ "type": "list", "visible": false } - ] -} \ No newline at end of file + ], + "thumbnail": "https://s8.postimg.cc/kr5olxmad/megadrive1.png" +} diff --git a/plugin.video.alfa/servers/megadrive.py b/plugin.video.alfa/servers/megadrive.py new file mode 100644 index 00000000..d9b82c33 --- /dev/null +++ b/plugin.video.alfa/servers/megadrive.py @@ -0,0 +1,27 @@ +# -*- coding: utf-8 -*- + +from core import httptools +from core import scrapertools +from platformcode 
import logger + + +def test_video_exists(page_url): + logger.info("(page_url='%s')" % page_url) + data = httptools.downloadpage(page_url).data + if "no longer exists" in data or "to copyright issues" in data: + return False, "[Megadrive] El video ha sido borrado" + if "please+try+again+later." in data: + return False, "[Megadrive] Error de Megadrive, no se puede generar el enlace al video" + if "File has been removed due to inactivity" in data: + return False, "[Megadrive] El archivo ha sido removido por inactividad" + return True, "" + + +def get_video_url(page_url, user="", password="", video_password=""): + logger.info("(page_url='%s')" % page_url) + data = httptools.downloadpage(page_url).data + video_urls = [] + videourl = scrapertools.find_single_match(data, "<source.*?src='([^']+)") + video_urls.append([".MP4 [megadrive]", videourl]) + + return video_urls From 3a76265a2d677364ddd84ee735573d3d33d6ab4e Mon Sep 17 00:00:00 2001 From: pipcat <pip@pipcat.com> Date: Wed, 12 Sep 2018 09:34:58 +0200 Subject: [PATCH 19/34] =?UTF-8?q?Di=C3=A1logo=20selecci=C3=B3n=20de=20cana?= =?UTF-8?q?les=20a=20buscar?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- plugin.video.alfa/channels/peliculasgratis.py | 1 + plugin.video.alfa/channels/search.py | 88 ++++++++++++++++++- plugin.video.alfa/channels/seriecanal.py | 2 +- plugin.video.alfa/core/tmdb.py | 2 +- 4 files changed, 87 insertions(+), 6 deletions(-) diff --git a/plugin.video.alfa/channels/peliculasgratis.py b/plugin.video.alfa/channels/peliculasgratis.py index ebc98e85..696ae554 100644 --- a/plugin.video.alfa/channels/peliculasgratis.py +++ b/plugin.video.alfa/channels/peliculasgratis.py @@ -89,6 +89,7 @@ def search(item, texto): logger.info() texto = texto.replace(" ", "+") item.url = host + "/search/%s" % texto + if item.contentType == '': item.contentType = 'movie' try: return scraper(item) # Se captura la excepción, para no interrumpir al buscador global si un canal falla diff --git a/plugin.video.alfa/channels/search.py b/plugin.video.alfa/channels/search.py index 73e3498a..5daecf53 100644 --- a/plugin.video.alfa/channels/search.py +++ b/plugin.video.alfa/channels/search.py @@ -143,6 +143,85 @@ def settings(item): def setting_channel(item): + if config.get_platform(True)['num_version'] >= 17.0: # A partir de Kodi 16 se puede usar multiselect, y de 17 con preselect + return setting_channel_new(item) + else: + return setting_channel_old(item) + +def setting_channel_new(item): + import channelselector, xbmcgui + from core import channeltools + + # Cargar lista de opciones (canales activos del usuario y que permitan búsqueda global) + # ------------------------ + lista = []; ids = []; lista_lang = [] + channels_list = channelselector.filterchannels('all') + for channel in channels_list: + channel_parameters = channeltools.get_channel_parameters(channel.channel) + + # No incluir si en la configuracion del canal no existe "include_in_global_search" + if not channel_parameters['include_in_global_search']: + continue + + lbl = '%s' % channel_parameters['language'] + lbl += ' %s' % ', '.join(config.get_localized_category(categ) for categ in channel_parameters['categories']) + + it = xbmcgui.ListItem(channel.title, lbl) + it.setArt({ 'thumb': channel.thumbnail, 'fanart': channel.fanart }) + lista.append(it) + ids.append(channel.channel) + lista_lang.append(channel_parameters['language']) + + # Diálogo para pre-seleccionar + # ---------------------------- + preselecciones_std = ['Modificar selección 
actual', 'Modificar partiendo de Todos', 'Modificar partiendo de Ninguno', 'Modificar partiendo de Castellano', 'Modificar partiendo de Latino'] + if item.action == 'setting_channel': + # Configuración de los canales incluídos en la búsqueda + preselecciones = preselecciones_std + presel_values = [1, 2, 3, 4, 5] + else: + # Llamada desde "buscar en otros canales" (se puede saltar la selección e ir directo a la búsqueda) + preselecciones = ['Buscar con la selección actual'] + preselecciones_std + presel_values = [0, 1, 2, 3, 4, 5] + + ret = platformtools.dialog_select(config.get_localized_string(59994), preselecciones) + if ret == -1: return False # pedido cancel + if presel_values[ret] == 0: return True # continuar sin modificar + elif presel_values[ret] == 3: preselect = [] + elif presel_values[ret] == 2: preselect = range(len(ids)) + elif presel_values[ret] in [4, 5]: + busca = 'cast' if presel_values[ret] == 4 else 'lat' + preselect = [] + for i, lg in enumerate(lista_lang): + if busca in lg or '*' in lg: + preselect.append(i) + else: + preselect = [] + for i, canal in enumerate(ids): + channel_status = config.get_setting('include_in_global_search', canal) + if channel_status: + preselect.append(i) + + # Diálogo para seleccionar + # ------------------------ + ret = xbmcgui.Dialog().multiselect(config.get_localized_string(59994), lista, preselect=preselect, useDetails=True) + if ret == None: return False # pedido cancel + seleccionados = [ids[i] for i in ret] + + # Guardar cambios en canales para la búsqueda + # ------------------------------------------- + for canal in ids: + channel_status = config.get_setting('include_in_global_search', canal) + if channel_status is None: channel_status = True + + if channel_status and canal not in seleccionados: + config.set_setting('include_in_global_search', False, canal) + elif not channel_status and canal in seleccionados: + config.set_setting('include_in_global_search', True, canal) + + return True + +def setting_channel_old(item): channels_path = os.path.join(config.get_runtime_path(), "channels", '*.json') channel_language = config.get_setting("channel_language", default="all") @@ -204,6 +283,7 @@ def save_settings(item, dict_values): config.set_setting("include_in_global_search", dict_values[v], v) progreso.close() + return True def cb_custom_button(item, dict_values): @@ -354,8 +434,8 @@ def do_search(item, categories=None): categories = ["Películas"] setting_item = Item(channel=item.channel, title=config.get_localized_string(59994), folder=False, thumbnail=get_thumb("search.png")) - setting_channel(setting_item) - + if not setting_channel(setting_item): + return False if categories is None: categories = [] @@ -474,8 +554,8 @@ def do_search(item, categories=None): # es compatible tanto con versiones antiguas de python como nuevas if multithread: pendent = [a for a in threads if a.isAlive()] - t = float(100) / len(pendent) - while pendent: + if len(pendent) > 0: t = float(100) / len(pendent) + while len(pendent) > 0: index = (len(threads) - len(pendent)) + 1 percentage = int(math.ceil(index * t)) diff --git a/plugin.video.alfa/channels/seriecanal.py b/plugin.video.alfa/channels/seriecanal.py index 0ac2bfb4..34c7adf9 100644 --- a/plugin.video.alfa/channels/seriecanal.py +++ b/plugin.video.alfa/channels/seriecanal.py @@ -9,7 +9,7 @@ from core import servertools from platformcode import config, logger __modo_grafico__ = config.get_setting('modo_grafico', "seriecanal") -__perfil__ = config.get_setting('perfil', "descargasmix") +__perfil__ = 
config.get_setting('perfil', "seriecanal") # Fijar perfil de color perfil = [['0xFFFFE6CC', '0xFFFFCE9C', '0xFF994D00'], diff --git a/plugin.video.alfa/core/tmdb.py b/plugin.video.alfa/core/tmdb.py index b8c1ccb0..72c709e5 100644 --- a/plugin.video.alfa/core/tmdb.py +++ b/plugin.video.alfa/core/tmdb.py @@ -319,7 +319,7 @@ def set_infoLabels_item(item, seekTmdb=True, idioma_busqueda='es', lock=None): __leer_datos(otmdb_global) - if lock: + if lock and lock.locked(): lock.release() if item.infoLabels['episode']: From 4c949d5d89b858d405acee8974caa439b5d7f8a4 Mon Sep 17 00:00:00 2001 From: Kingbox <37674310+lopezvg@users.noreply.github.com> Date: Wed, 12 Sep 2018 11:51:44 +0200 Subject: [PATCH 20/34] =?UTF-8?q?Generictools:=20c=C3=A1lculo=20tama=C3=B1?= =?UTF-8?q?o=20.torrent?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- plugin.video.alfa/channels/divxtotal.py | 26 +++- plugin.video.alfa/channels/elitetorrent.py | 31 +++-- plugin.video.alfa/channels/estrenosgo.py | 29 +++- plugin.video.alfa/channels/grantorrent.py | 16 ++- plugin.video.alfa/channels/mejortorrent1.py | 16 ++- plugin.video.alfa/channels/newpct1.py | 42 ++++-- plugin.video.alfa/lib/generictools.py | 141 ++++++++++++++++++-- 7 files changed, 258 insertions(+), 43 deletions(-) diff --git a/plugin.video.alfa/channels/divxtotal.py b/plugin.video.alfa/channels/divxtotal.py index 8e1ba213..1a0deaf5 100644 --- a/plugin.video.alfa/channels/divxtotal.py +++ b/plugin.video.alfa/channels/divxtotal.py @@ -519,17 +519,35 @@ def findvideos(item): item, itemlist = generictools.post_tmdb_findvideos(item, itemlist) #Ahora tratamos los enlaces .torrent - for scrapedurl in matches: #leemos los torrents con la diferentes calidades + for scrapedurl in matches: #leemos los torrents con la diferentes calidades #Generamos una copia de Item para trabajar sobre ella item_local = item.clone() + #Buscamos si ya tiene tamaño, si no, los buscamos en el archivo .torrent + size = scrapertools.find_single_match(item_local.quality, '\s\[(\d+,?\d*?\s\w\s?[b|B])\]') + if not size: + size = generictools.get_torrent_size(item_local.url) #Buscamos el tamaño en el .torrent + if size: + item_local.title = re.sub(r'\s\[\d+,?\d*?\s\w[b|B]\]', '', item_local.title) #Quitamos size de título, si lo traía + item_local.title = '%s [%s]' % (item_local.title, size) #Agregamos size al final del título + size = size.replace('GB', 'G B').replace('Gb', 'G b').replace('MB', 'M B').replace('Mb', 'M b') + item_local.quality = re.sub(r'\s\[\d+,?\d*?\s\w\s?[b|B]\]', '', item_local.quality) #Quitamos size de calidad, si lo traía + item_local.quality = '%s [%s]' % (item_local.quality, size) #Agregamos size al final de la calidad + #Ahora pintamos el link del Torrent item_local.url = scrapedurl if host not in item_local.url and host.replace('https', 'http') not in item_local.url : item_local.url = host + item_local.url - item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (item_local.quality, str(item_local.language)) #Preparamos título de Torrent - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title) #Quitamos etiquetas vacías - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title) #Quitamos colores vacíos + item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (item_local.quality, str(item_local.language)) + + #Preparamos 
título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.quality) + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.alive = "??" #Calidad del link sin verificar item_local.action = "play" #Visualizar vídeo item_local.server = "torrent" #Seridor Torrent diff --git a/plugin.video.alfa/channels/elitetorrent.py b/plugin.video.alfa/channels/elitetorrent.py index 6842c037..1a052a4a 100644 --- a/plugin.video.alfa/channels/elitetorrent.py +++ b/plugin.video.alfa/channels/elitetorrent.py @@ -171,8 +171,11 @@ def listado(item): #Limpiamos el título de la basura innecesaria title = title.replace("Dual", "").replace("dual", "").replace("Subtitulada", "").replace("subtitulada", "").replace("Subt", "").replace("subt", "").replace("Sub", "").replace("sub", "").replace("(Proper)", "").replace("(proper)", "").replace("Proper", "").replace("proper", "").replace("#", "").replace("(Latino)", "").replace("Latino", "") - title = title.replace("- HDRip", "").replace("(HDRip)", "").replace("- Hdrip", "").replace("(microHD)", "").replace("(DVDRip)", "").replace("(HDRip)", "").replace("(BR-LINE)", "").replace("(HDTS-SCREENER)", "").replace("(BDRip)", "").replace("(BR-Screener)", "").replace("(DVDScreener)", "").replace("TS-Screener", "").replace(" TS", "").replace(" Ts", "") + title = title.replace("- HDRip", "").replace("(HDRip)", "").replace("- Hdrip", "").replace("(microHD)", "").replace("(DVDRip)", "").replace("(HDRip)", "").replace("(BR-LINE)", "").replace("(HDTS-SCREENER)", "").replace("(BDRip)", "").replace("(BR-Screener)", "").replace("(DVDScreener)", "").replace("TS-Screener", "").replace(" TS", "").replace(" Ts", "").replace("temporada", "").replace("Temporada", "").replace("capitulo", "").replace("Capitulo", "") + + title = re.sub(r'(?:\d+)?x.?\s?\d+', '', title) title = re.sub(r'\??\s?\d*?\&.*', '', title).title().strip() + item_local.from_title = title #Guardamos esta etiqueta para posible desambiguación de título if item_local.extra == "peliculas": #preparamos Item para películas @@ -190,16 +193,17 @@ def listado(item): item_local.contentType = "episode" item_local.extra = "series" epi_mult = scrapertools.find_single_match(item_local.url, r'cap.*?-\d+-al-(\d+)') - item_local.contentSeason = scrapertools.find_single_match(item_local.url, r'temp.*?-(\d+)') + item_local.contentSeason = scrapertools.find_single_match(item_local.url, r'temporada-(\d+)') item_local.contentEpisodeNumber = scrapertools.find_single_match(item_local.url, r'cap.*?-(\d+)') if not item_local.contentSeason: item_local.contentSeason = scrapertools.find_single_match(item_local.url, r'-(\d+)[x|X]\d+') if not item_local.contentEpisodeNumber: item_local.contentEpisodeNumber = scrapertools.find_single_match(item_local.url, r'-\d+[x|X](\d+)') - if item_local.contentSeason < 1: - item_local.contentSeason = 1 + if not item_local.contentSeason or item_local.contentSeason < 1: + item_local.contentSeason = 0 if item_local.contentEpisodeNumber < 1: 
item_local.contentEpisodeNumber = 1 + item_local.contentSerieName = title if epi_mult: title = "%sx%s al %s -" % (item_local.contentSeason, str(item_local.contentEpisodeNumber).zfill(2), str(epi_mult).zfill(2)) #Creamos un título con el rango de episodios @@ -269,11 +273,11 @@ def findvideos(item): #data = unicode(data, "utf-8", errors="replace") #Añadimos el tamaño para todos - size = scrapertools.find_single_match(item.quality, '\s\[(\d+,?\d*?\s\w[b|B]s)\]') + size = scrapertools.find_single_match(item.quality, '\s\[(\d+,?\d*?\s\w\s?[b|B]s)\]') if size: item.title = re.sub('\s\[\d+,?\d*?\s\w[b|B]s\]', '', item.title) #Quitamos size de título, si lo traía item.title = '%s [%s]' % (item.title, size) #Agregamos size al final del título - item.quality = re.sub('\s\[\d+,?\d*?\s\w[b|B]s\]', '', item.quality) #Quitamos size de calidad, si lo traía + item.quality = re.sub('\s\[\d+,?\d*?\s\w\s?[b|B]s\]', '', item.quality) #Quitamos size de calidad, si lo traía patron_t = '<div class="enlace_descarga".*?<a href="(.*?\.torrent)"' link_torrent = scrapertools.find_single_match(data, patron_t) @@ -299,9 +303,11 @@ def findvideos(item): #Llamamos al método para crear el título general del vídeo, con toda la información obtenida de TMDB item, itemlist = generictools.post_tmdb_findvideos(item, itemlist) + if not size: + size = generictools.get_torrent_size(link_torrent) #Buscamos el tamaño en el .torrent if size: item.quality = '%s [%s]' % (item.quality, size) #Agregamos size al final de calidad - item.quality = item.quality.replace("G", "G ").replace("M", "M ") #Se evita la palabra reservada en Unify + item.quality = item.quality.replace("GB", "G B").replace("MB", "M B") #Se evita la palabra reservada en Unify #Generamos una copia de Item para trabajar sobre ella item_local = item.clone() @@ -313,8 +319,15 @@ def findvideos(item): item_local.quality += "[Torrent]" item_local.url = link_torrent item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (item_local.quality, str(item_local.language)) #Preparamos título de Torrent - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title) #Quitamos etiquetas vacías - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title) #Quitamos colores vacíos + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.quality) + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.alive = "??" 
#Calidad del link sin verificar item_local.action = "play" #Visualizar vídeo item_local.server = "torrent" #Seridor Torrent diff --git a/plugin.video.alfa/channels/estrenosgo.py b/plugin.video.alfa/channels/estrenosgo.py index 72cfa220..52b00848 100644 --- a/plugin.video.alfa/channels/estrenosgo.py +++ b/plugin.video.alfa/channels/estrenosgo.py @@ -682,7 +682,7 @@ def findvideos(item): #Ahora tratamos los enlaces .torrent itemlist_alt = [] #Usamos una lista intermedia para poder ordenar los episodios if matches_torrent: - for scrapedurl, scrapedquality, scrapedlang in matches_torrent: #leemos los torrents con la diferentes calidades + for scrapedurl, scrapedquality, scrapedlang in matches_torrent: #leemos los torrents con la diferentes calidades #Generamos una copia de Item para trabajar sobre ella item_local = item.clone() @@ -756,9 +756,19 @@ def findvideos(item): #Ahora pintamos el link del Torrent item_local.url = host + scrapedtorrent - item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (quality, str(item_local.language)) #Preparamos título de Torrent - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title) #Quitamos etiquetas vacías - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title) #Quitamos colores vacíos + size = generictools.get_torrent_size(item_local.url) #Buscamos el tamaño en el .torrent + if size: + quality += ' [%s]' % size + item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (quality, str(item_local.language)) + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', quality) + quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', quality) + quality = quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.alive = "??" 
#Calidad del link sin verificar item_local.action = "play" #Visualizar vídeo item_local.server = "torrent" #Seridor Torrent @@ -896,8 +906,15 @@ def findvideos(item): #Ahora pintamos el link Directo item_local.url = enlace - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title) #Quitamos etiquetas vacías - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title) #Quitamos colores vacíos + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', quality) + quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', quality) + quality = quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.action = "play" #Visualizar vídeo item_local.server = servidor #Seridor Directo diff --git a/plugin.video.alfa/channels/grantorrent.py b/plugin.video.alfa/channels/grantorrent.py index 1e25ceef..072f81fb 100644 --- a/plugin.video.alfa/channels/grantorrent.py +++ b/plugin.video.alfa/channels/grantorrent.py @@ -474,14 +474,17 @@ def findvideos(item): #Añadimos la duración, que estará en item.quility if scrapertools.find_single_match(item.quality, '(\[\d+:\d+)') and not scrapertools.find_single_match(item_local.quality, '(\[\d+:\d+)'): item_local.quality = '%s [/COLOR][COLOR white][%s h]' % (item_local.quality, scrapertools.find_single_match(item.quality, '(\d+:\d+)')) + #if size and item_local.contentType != "episode": + if not size: + size = generictools.get_torrent_size(scrapedurl) #Buscamos el tamaño en el .torrent if size: size = size.replace(".", ",").replace("B,", " B").replace("b,", " b") if '[/COLOR][COLOR white]' in item_local.quality: item_local.quality = '%s [%s]' % (item_local.quality, size) else: item_local.quality = '%s [/COLOR][COLOR white][%s]' % (item_local.quality, size) - if item_local.action == 'show_result': #Viene de una búsqueda global + if item_local.action == 'show_result': #Viene de una búsqueda global channel = item_local.channel.capitalize() if item_local.from_channel: channel = item_local.from_channel.capitalize() @@ -491,8 +494,15 @@ def findvideos(item): if scrapedurl: item_local.url = scrapedurl item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (item_local.quality, str(item_local.language)) #Preparamos título de Torrent - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title).strip() #Quitamos etiquetas vacías - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title).strip() #Quitamos colores vacíos + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', 
item_local.quality) + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.alive = "??" #Calidad del link sin verificar item_local.action = "play" #Visualizar vídeo item_local.server = "torrent" #Seridor Torrent diff --git a/plugin.video.alfa/channels/mejortorrent1.py b/plugin.video.alfa/channels/mejortorrent1.py index cc9966c0..feffe91b 100644 --- a/plugin.video.alfa/channels/mejortorrent1.py +++ b/plugin.video.alfa/channels/mejortorrent1.py @@ -845,18 +845,21 @@ def findvideos(item): # Poner la calidad, si es necesario if not item_local.quality: + item_local.quality = '' if scrapertools.find_single_match(data, '<b>Formato:<\/b>&\w+;\s?([^<]+)<br>'): item_local.quality = scrapertools.find_single_match(data, '<b>Formato:<\/b>&\w+;\s?([^<]+)<br>') elif "hdtv" in item_local.url.lower() or "720p" in item_local.url.lower() or "1080p" in item_local.url.lower() or "4k" in item_local.url.lower(): item_local.quality = scrapertools.find_single_match(item_local.url, '.*?_([H|7|1|4].*?)\.torrent') item_local.quality = item_local.quality.replace("_", " ") - + # Extrae el tamaño del vídeo if scrapertools.find_single_match(data, '<b>Tama.*?:<\/b>&\w+;\s?([^<]+B)<?'): size = scrapertools.find_single_match(data, '<b>Tama.*?:<\/b>&\w+;\s?([^<]+B)<?') else: size = scrapertools.find_single_match(item_local.url, '(\d{1,3},\d{1,2}?\w+)\.torrent') size = size.upper().replace(".", ",").replace("G", " G ").replace("M", " M ") #sustituimos . por , porque Unify lo borra + if not size: + size = generictools.get_torrent_size(item_local.url) #Buscamos el tamaño en el .torrent if size: item_local.title = re.sub('\s\[\d+,?\d*?\s\w[b|B]\]', '', item_local.title) #Quitamos size de título, si lo traía item_local.title = '%s [%s]' % (item_local.title, size) #Agregamos size al final del título @@ -866,8 +869,15 @@ def findvideos(item): #Ahora pintamos el link del Torrent, si lo hay if item_local.url: # Hay Torrent ? item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (item_local.quality, str(item_local.language)) #Preparamos título de Torrent - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title) #Quitamos etiquetas vacías - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title) #Quitamos colores vacíos + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.quality) + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.alive = "??" 
#Calidad del link sin verificar item_local.action = "play" #Visualizar vídeo item_local.server = "torrent" #Seridor Torrent diff --git a/plugin.video.alfa/channels/newpct1.py b/plugin.video.alfa/channels/newpct1.py index d7e9cfe9..354f1b67 100644 --- a/plugin.video.alfa/channels/newpct1.py +++ b/plugin.video.alfa/channels/newpct1.py @@ -1368,12 +1368,14 @@ def findvideos(item): size = scrapertools.find_single_match(data, '<div class="fichas-box"><div class="entry-right"><div style="[^"]+"><span class="[^"]+"><strong>Size:<\/strong>?\s(\d+?\.?\d*?\s\w[b|B])<\/span>') size = size.replace(".", ",") #sustituimos . por , porque Unify lo borra if not size: - size = scrapertools.find_single_match(item.quality, '\s\[(\d+,?\d*?\s\w[b|B])\]') + size = scrapertools.find_single_match(item.quality, '\s\[(\d+,?\d*?\s\w\s?[b|B])\]') + if not size: + size = generictools.get_torrent_size(item.url) #Buscamos el tamaño en el .torrent if size: item.title = re.sub(r'\s\[\d+,?\d*?\s\w[b|B]\]', '', item.title) #Quitamos size de título, si lo traía item.title = '%s [%s]' % (item.title, size) #Agregamos size al final del título size = size.replace('GB', 'G B').replace('Gb', 'G b').replace('MB', 'M B').replace('Mb', 'M b') - item.quality = re.sub(r'\s\[\d+,?\d*?\s\w[b|B]\]', '', item.quality) #Quitamos size de calidad, si lo traía + item.quality = re.sub(r'\s\[\d+,?\d*?\s\w\s?[b|B]\]', '', item.quality) #Quitamos size de calidad, si lo traía #Llamamos al método para crear el título general del vídeo, con toda la información obtenida de TMDB item, itemlist = generictools.post_tmdb_findvideos(item, itemlist) @@ -1399,8 +1401,15 @@ def findvideos(item): else: quality = item_local.quality item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (quality, str(item_local.language)) #Preparamos título de Torrent - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title).strip() #Quitamos etiquetas vacías - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title).strip() #Quitamos colores vacíos + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', quality) + quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', quality) + quality = quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.alive = "??" 
#Calidad del link sin verificar item_local.action = "play" #Visualizar vídeo item_local.server = "torrent" #Servidor @@ -1485,9 +1494,15 @@ def findvideos(item): item_local.action = "play" item_local.server = servidor item_local.url = enlace - item_local.title = item_local.title.replace("[]", "").strip() - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title).strip() - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', item_local.title).strip() + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.quality) + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + itemlist.append(item_local.clone()) except: @@ -1582,9 +1597,16 @@ def findvideos(item): item_local.action = "play" item_local.server = servidor item_local.url = enlace - item_local.title = parte_title.replace("[]", "").strip() - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title).strip() - item_local.title = re.sub(r'\[COLOR \w+\]-\[\/COLOR\]', '', item_local.title).strip() + item_local.title = parte_title.strip() + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.quality) + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + itemlist.append(item_local.clone()) except: diff --git a/plugin.video.alfa/lib/generictools.py b/plugin.video.alfa/lib/generictools.py index fa6bb57c..24d7d5c9 100644 --- a/plugin.video.alfa/lib/generictools.py +++ b/plugin.video.alfa/lib/generictools.py @@ -8,6 +8,7 @@ # ------------------------------------------------------------ import re +import os import sys import urllib import urlparse @@ -236,8 +237,7 @@ def post_tmdb_listado(item, itemlist): del item.channel_alt if item.url_alt: del item.url_alt - if item.extra2: - del item.extra2 + #Ajustamos el nombre de la categoría if not item.category_new: item.category_new = '' @@ -389,8 +389,8 @@ def post_tmdb_listado(item, itemlist): if item_local.infoLabels['episodio_titulo']: item_local.infoLabels['episodio_titulo'] = item_local.infoLabels['episodio_titulo'].replace(" []", "").strip() title = title.replace("--", "").replace(" []", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() - title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', title).strip() - title = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', title).strip() + title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', 
title).strip() + title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', title).strip() if item.category_new == "newest": #Viene de Novedades. Marcamos el título con el nombre del canal title += ' -%s-' % scrapertools.find_single_match(item_local.url, 'http.?\:\/\/(?:www.)?(\w+)\.\w+\/').capitalize() @@ -766,6 +766,7 @@ def post_tmdb_episodios(item, itemlist): #Si no está el título del episodio, pero sí está en "title", lo rescatamos if not item_local.infoLabels['episodio_titulo'] and item_local.infoLabels['title'].lower() != item_local.infoLabels['tvshowtitle'].lower(): item_local.infoLabels['episodio_titulo'] = item_local.infoLabels['title'] + item_local.infoLabels['episodio_titulo'] = item_local.infoLabels['episodio_titulo'].replace('GB', 'G B').replace('MB', 'M B') #Preparamos el título para que sea compatible con Añadir Serie a Videoteca if "Temporada" in item_local.title: #Compatibilizamos "Temporada" con Unify @@ -792,8 +793,8 @@ def post_tmdb_episodios(item, itemlist): item_local.infoLabels['episodio_titulo'] = item_local.infoLabels['episodio_titulo'].replace(" []", "").strip() item_local.infoLabels['title'] = item_local.infoLabels['title'].replace(" []", "").strip() item_local.title = item_local.title.replace(" []", "").strip() - item_local.title = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', item_local.title).strip() - item_local.title = re.sub(r'\s\[COLOR \w+\]-\[\/COLOR\]', '', item_local.title).strip() + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?-?\s?\]?\]\[\/COLOR\]', '', item_local.title).strip() + item_local.title = re.sub(r'\s?\[COLOR \w+\]-?\s?\[\/COLOR\]', '', item_local.title).strip() #Si la información de num. total de episodios de TMDB no es correcta, tratamos de calcularla if num_episodios < item_local.contentEpisodeNumber: @@ -1054,8 +1055,8 @@ def post_tmdb_findvideos(item, itemlist): title_gen = item.title #Limpiamos etiquetas vacías - title_gen = re.sub(r'\s\[COLOR \w+\]\[\[?\]?\]\[\/COLOR\]', '', title_gen).strip() #Quitamos etiquetas vacías - title_gen = re.sub(r'\s\[COLOR \w+\]\[\/COLOR\]', '', title_gen).strip() #Quitamos colores vacíos + title_gen = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', title_gen).strip() #Quitamos etiquetas vacías + title_gen = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', title_gen).strip() #Quitamos colores vacíos title_gen = title_gen.replace(" []", "").strip() #Quitamos etiquetas vacías title_videoteca = title_gen #Salvamos el título para Videoteca @@ -1103,7 +1104,131 @@ def post_tmdb_findvideos(item, itemlist): return (item, itemlist) + +def get_torrent_size(url): + logger.info() + + """ + + Módulo extraido del antiguo canal ZenTorrent + + Calcula el tamaño de los archivos que contienen un .torrent. Descarga el archivo .torrent en una carpeta, + lo lee y descodifica. 
Si contiene múltiples archivos, suma el tamaño de todos ellos + + Llamada: generictools.get_torrent_size(url) + Entrada: url: url del archivo .torrent + Salida: size: str con el tamaño y tipo de medida ( MB, GB, etc) + + """ + + def convert_size(size): + import math + if (size == 0): + return '0B' + size_name = ("B", "KB", "M B", "G B", "TB", "PB", "EB", "ZB", "YB") + i = int(math.floor(math.log(size, 1024))) + p = math.pow(1024, i) + s = round(size / p, 2) + return '%s %s' % (s, size_name[i]) + + def decode(text): + try: + src = tokenize(text) + data = decode_item(src.next, src.next()) + for token in src: # look for more tokens + raise SyntaxError("trailing junk") + except (AttributeError, ValueError, StopIteration): + try: + data = data + except: + data = src + return data + + def tokenize(text, match=re.compile("([idel])|(\d+):|(-?\d+)").match): + i = 0 + while i < len(text): + m = match(text, i) + s = m.group(m.lastindex) + i = m.end() + if m.lastindex == 2: + yield "s" + yield text[i:i + int(s)] + i = i + int(s) + else: + yield s + + def decode_item(next, token): + if token == "i": + # integer: "i" value "e" + data = int(next()) + if next() != "e": + raise ValueError + elif token == "s": + # string: "s" value (virtual tokens) + data = next() + elif token == "l" or token == "d": + # container: "l" (or "d") values "e" + data = [] + tok = next() + while tok != "e": + data.append(decode_item(next, tok)) + tok = next() + if token == "d": + data = dict(zip(data[0::2], data[1::2])) + else: + raise ValueError + return data + + + #Móludo principal + size = "" + try: + torrents_path = config.get_videolibrary_path() + '/torrents' #path para dejar el .torrent + + if not os.path.exists(torrents_path): + os.mkdir(torrents_path) #si no está la carpeta la creamos + + urllib.URLopener.version = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36 SE 2.X MetaSr 1.0' + urllib.urlretrieve(url, torrents_path + "/generictools.torrent") #desacargamos el .torrent a la carpeta + torrent_file = open(torrents_path + "/generictools.torrent", "rb").read() #leemos el .torrent + + if "used CloudFlare" in torrent_file: #Si tiene CloudFlare, usamos este proceso + try: + urllib.urlretrieve("http://anonymouse.org/cgi-bin/anon-www.cgi/" + url.strip(), + torrents_path + "/generictools.torrent") + torrent_file = open(torrents_path + "/generictools.torrent", "rb").read() + except: + torrent_file = "" + + torrent = decode(torrent_file) #decodificamos el .torrent + + #si sólo tiene un archivo, tomamos la longitud y la convertimos a una unidad legible, si no dará error + try: + sizet = torrent["info"]['length'] + size = convert_size(sizet) + except: + pass + + #si tiene múltiples archivos sumamos la longitud de todos + if not size: + check_video = scrapertools.find_multiple_matches(str(torrent["info"]["files"]), "'length': (\d+)}") + sizet = sum([int(i) for i in check_video]) + size = convert_size(sizet) + + except: + logger.error('ERROR al buscar el tamaño de un .Torrent: ' + url) + + try: + os.remove(torrents_path + "/generictools.torrent") #borramos el .torrent + except: + pass + + #logger.debug(url + ' / ' + size) + + return size + + def get_field_from_kodi_DB(item, from_fields='*', files='file'): logger.info() """ From ce3690f06b046435bf3af0b070d6ad02951184f2 Mon Sep 17 00:00:00 2001 From: Kingbox <37674310+lopezvg@users.noreply.github.com> Date: Wed, 12 Sep 2018 11:54:44 +0200 Subject: [PATCH 21/34] =?UTF-8?q?ZonaTorrent:=20canal=20redise=C3=B1ado?= MIME-Version: 1.0 
Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Películas por géneros, calidades, idiomas y alfabeto - Series completas agrupadas por temporadas - Búsquedas - Videoteca de Series - Control de página inteligente, en función de los items y del tiempo de proceso Mejoras de títulos, con calidades --- plugin.video.alfa/channels/zonatorrent.json | 67 +- plugin.video.alfa/channels/zonatorrent.py | 1096 ++++++++++++++--- .../media/channels/thumb/zonatorrent.png | Bin 0 -> 36220 bytes 3 files changed, 948 insertions(+), 215 deletions(-) create mode 100644 plugin.video.alfa/resources/media/channels/thumb/zonatorrent.png diff --git a/plugin.video.alfa/channels/zonatorrent.json b/plugin.video.alfa/channels/zonatorrent.json index 31f31200..ce829a98 100644 --- a/plugin.video.alfa/channels/zonatorrent.json +++ b/plugin.video.alfa/channels/zonatorrent.json @@ -5,28 +5,59 @@ "adult": false, "language": ["cast", "lat"], "banner": "", - "thumbnail": "https://zonatorrent.org/wp-content/uploads/2017/04/zonatorrent-New-Logo.png", + "thumbnail": "zonatorrent.png", "version": 1, "categories": [ - "torrent", - "movie" + "torrent", + "movie", + "tvshow", + "vos" ], "settings": [ - { - "id": "include_in_global_search", - "type": "bool", - "label": "Incluir en busqueda global", - "default": true, - "enabled": true, - "visible": true - }, - { - "id": "modo_grafico", - "type": "bool", - "label": "Buscar información extra", - "default": true, - "enabled": true, - "visible": true + { + "id": "include_in_global_search", + "type": "bool", + "label": "Incluir en busqueda global", + "default": true, + "enabled": true, + "visible": true + }, + { + "id": "modo_grafico", + "type": "bool", + "label": "Buscar información extra", + "default": true, + "enabled": true, + "visible": true + }, + { + "id": "timeout_downloadpage", + "type": "list", + "label": "Timeout (segs.) 
en descarga de páginas o verificación de servidores", + "default": 5, + "enabled": true, + "visible": true, + "lvalues": [ + "None", + "1", + "2", + "3", + "4", + "5", + "6", + "7", + "8", + "9", + "10" + ] + }, + { + "id": "seleccionar_ult_temporadda_activa", + "type": "bool", + "label": "Seleccionar para Videoteca si estará activa solo la última Temporada", + "default": true, + "enabled": true, + "visible": true }, { "id": "include_in_newest_peliculas", diff --git a/plugin.video.alfa/channels/zonatorrent.py b/plugin.video.alfa/channels/zonatorrent.py index d3a85b28..6bee0e95 100644 --- a/plugin.video.alfa/channels/zonatorrent.py +++ b/plugin.video.alfa/channels/zonatorrent.py @@ -1,197 +1,899 @@ -# -*- coding: utf-8 -*- -# -*- Channel TioTorrent -*- -# -*- Created for Alfa-addon -*- -# -*- By the Alfa Develop Group -*- - -import re - -from channelselector import get_thumb -from core import httptools -from core import scrapertools -from core import servertools -from core import tmdb -from core.item import Item -from platformcode import logger - -__channel__ = "zonatorrent" - -HOST = 'https://zonatorrent.org' - -try: - __modo_grafico__ = config.get_setting('modo_grafico', __channel__) -except: - __modo_grafico__ = True - - -def mainlist(item): - logger.info() - - itemlist = list() - itemlist.append(Item(channel=item.channel, title="Últimas Películas", action="listado", url=HOST, page=False)) - itemlist.append(Item(channel=item.channel, title="Alfabético", action="alfabetico")) - itemlist.append(Item(channel=item.channel, title="Géneros", action="generos", url=HOST)) - itemlist.append(Item(channel=item.channel, title="Más vistas", action="listado", url=HOST + "/peliculas-mas-vistas/")) - itemlist.append(Item(channel=item.channel, title="Más votadas", action="listado", url=HOST + "/peliculas-mas-votadas/")) - itemlist.append(Item(channel=item.channel, title="Castellano", action="listado", url=HOST + "/?s=spanish", - page=True)) - itemlist.append(Item(channel=item.channel, title="Latino", action="listado", url=HOST + "/?s=latino", page=True)) - itemlist.append(Item(channel=item.channel, title="Subtitulado", action="listado", url=HOST + "/?s=Subtitulado", - page=True)) - itemlist.append(Item(channel=item.channel, title="Con Torrent", action="listado", url=HOST + "/?s=torrent", - page=True)) - itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=HOST + "/?s=", - page=False)) - - return itemlist - - -def alfabetico(item): - logger.info() - - itemlist = [] - - for letra in "#ABCDEFGHIJKLMNOPQRSTUVWXYZ": - itemlist.append(Item(channel=item.channel, action="listado", title=letra, page=True, - url=HOST + "/letters/%s/" % letra.replace("#", "0-9"))) - - return itemlist - - -def generos(item): - logger.info() - - itemlist = [] - - data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url).data) - data = scrapertools.find_single_match(data, '<a href="#">Generos</a><ulclass="sub-menu">(.*?)</ul>') - matches = scrapertools.find_multiple_matches(data, '<a href="([^"]+)">(.*?)</a>') - - for url, title in matches: - itemlist.append(Item(channel=item.channel, action="listado", title=title, url=url, page=True)) - - return itemlist - - -def search(item, texto): - logger.info() - item.url = item.url + texto.replace(" ", "+") - - try: - itemlist = listado(item) - except: - import sys - for line in sys.exc_info(): - logger.error("%s" % line) - return [] - - return itemlist - - -def listado(item): - logger.info() - - itemlist = [] - - data = 
re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url).data) - - pattern = '<a href="(?P<url>[^"]+)"><div[^>]+><figure[^>]+><img[^>]+src="(?P<thumb>[^"]+)"[^>]+></figure></div>' \ - '<h2 class="Title">(?P<title>.*?)</h2>.*?<span class="Time[^>]+>(?P<duration>.*?)</span><span ' \ - 'class="Date[^>]+>(?P<year>.*?)</span><span class="Qlty">(?P<quality>.*?)</span></p><div ' \ - 'class="Description"><p>.*?\:\s*(?P<plot>.*?)</p>' - matches = re.compile(pattern, re.DOTALL).findall(data) - - for url, thumb, title, duration, year, quality, plot in matches: - #title = title.strip().replace("Spanish Online Torrent", "").replace("Latino Online Torrent", "").replace(r'\d{4}','') - title = re.sub('Online|Spanish|Latino|Torrent|\d{4}','',title) - infoLabels = {"year": year} - - aux = scrapertools.find_single_match(duration, "(\d+)h\s*(\d+)m") - duration = "%s" % ((int(aux[0]) * 3600) + (int(aux[1]) * 60)) - infoLabels["duration"] = duration - - itemlist.append(Item(channel=item.channel, action="findvideos", title=title, url=url, thumbnail=thumb, - contentTitle=title, plot=plot, infoLabels=infoLabels)) - tmdb.set_infoLabels_itemlist(itemlist, __modo_grafico__) - if item.page: - pattern = "<span class='page-numbers current'>[^<]+</span><a class='page-numbers' href='([^']+)'" - url = scrapertools.find_single_match(data, pattern) - - itemlist.append(Item(channel=item.channel, action="listado", title=">> Página siguiente", url=url, page=True, - thumbnail=get_thumb("next.png"))) - - return itemlist - - -def findvideos(item): - logger.info() - - itemlist = [] - language = '' - quality = '' - data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url).data) - data = re.sub(r""", '"', data) - data = re.sub(r"<", '<', data) - - titles = re.compile('data-TPlayerNv="Opt\d+">.*? 
<span>(.*?)</span></li>', re.DOTALL).findall(data) - urls = re.compile('id="Opt\d+"><iframe[^>]+src="([^"]+)"', re.DOTALL).findall(data) - - if len(titles) == len(urls): - for i in range(0, len(titles)): - if i > 0: - logger.debug('titles: %s' % titles[i].strip()) - language, quality = titles[i].split(' - ') - title = "%s" % titles[i].strip() - else: - title = titles[0] - - if "goo.gl" in urls[i]: - urls[i] = httptools.downloadpage(urls[i], follow_redirects=False, only_headers=True)\ - .headers.get("location", "") - videourl = servertools.findvideos(urls[i]) - if len(videourl) > 0: - server = videourl[0][0].capitalize() - title = '%s %s' % (server, title) - itemlist.append(Item(channel=item.channel, action="play", title=title, url=videourl[0][1], - server=server, thumbnail=videourl[0][3], fulltitle=item.title, - language=language, quality=quality )) - - pattern = '<a[^>]+href="([^"]+)"[^<]+</a></td><td><span><img[^>]+>(.*?)</span></td><td><span><img[^>]+>(.*?)' \ - '</span></td><td><span>(.*?)</span>' - torrents = re.compile(pattern, re.DOTALL).findall(data) - - if len(torrents) > 0: - for url, text, lang, quality in torrents: - title = "%s %s - %s" % (text, lang, quality) - itemlist.append(Item(channel=item.channel, action="play", title=title, url=url, server="torrent", - fulltitle=item.title, thumbnail=get_thumb("channels_torrent.png"))) - - return itemlist - -def newest(categoria): - logger.info() - itemlist = [] - item = Item() - try: - if categoria == 'peliculas': - item.url = HOST - elif categoria == 'infantiles': - item.url = HOST + "/animacion" - elif categoria == 'terror': - item.url = HOST + "/terror/" - elif categoria == 'torrent': - item.url = HOST + "/?s=torrent" - else: - return [] - - itemlist = listado(item) - if itemlist[-1].title == ">> Página siguiente": - itemlist.pop() - - # Se captura la excepción, para no interrumpir al canal novedades si un canal falla - except: - import sys - for line in sys.exc_info(): - logger.error("{0}".format(line)) - return [] - - return itemlist +# -*- coding: utf-8 -*- + +import re +import sys +import urllib +import urlparse +import time + +from channelselector import get_thumb +from core import httptools +from core import scrapertools +from core import servertools +from core.item import Item +from platformcode import config, logger +from core import tmdb +from lib import generictools + + +host = 'https://zonatorrent.tv/' +channel = "zonatorrent" + +categoria = channel.capitalize() +__modo_grafico__ = config.get_setting('modo_grafico', channel) +modo_ultima_temp = config.get_setting('seleccionar_ult_temporadda_activa', channel) #Actualización sólo últ. Temporada? 
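# A hedged aside, outside the patch proper: the names read with config.get_setting()
# in this header must match the "id" fields of the "settings" entries added to
# zonatorrent.json in this same commit ('modo_grafico', 'seleccionar_ult_temporadda_activa',
# 'timeout_downloadpage'); note the double-d spelling 'temporadda' appears in both files,
# so the lookup still resolves.  A minimal defensive sketch, assuming only the
# config.get_setting(name, channel) helper imported above and the default of 5
# declared in the JSON (the patch itself assigns the value directly, as below):
#
#     timeout = config.get_setting('timeout_downloadpage', channel)
#     try:
#         timeout = int(timeout)          # in case the "list" setting comes back as a string
#     except (TypeError, ValueError):
#         timeout = 5                     # fall back to the default declared in the JSON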
+timeout = config.get_setting('timeout_downloadpage', channel) + + +def mainlist(item): + logger.info() + itemlist = [] + + thumb_pelis = get_thumb("channels_movie.png") + thumb_series = get_thumb("channels_tvshow.png") + thumb_buscar = get_thumb("search.png") + thumb_separador = get_thumb("next.png") + + + itemlist.append(Item(channel=item.channel, title="Películas", action="submenu", url=host, thumbnail=thumb_pelis, extra="peliculas")) + + itemlist.append(Item(channel=item.channel, url=host, title="Series", action="submenu", thumbnail=thumb_series, extra="series")) + + itemlist.append(Item(channel=item.channel, title="Buscar...", action="search", url=host + "?s=", thumbnail=thumb_buscar, extra="search")) + + return itemlist + + +def submenu(item): + logger.info() + itemlist = [] + + thumb_cartelera = get_thumb("now_playing.png") + thumb_pelis_az = get_thumb("channels_movie_az.png") + thumb_pelis = get_thumb("channels_movie.png") + thumb_pelis_hd = get_thumb("channels_movie_hd.png") + thumb_pelis_vos = get_thumb("channels_vos.png") + thumb_popular = get_thumb("popular.png") + thumb_generos = get_thumb("genres.png") + thumb_spanish = get_thumb("channels_spanish.png") + thumb_latino = get_thumb("channels_latino.png") + thumb_torrent = get_thumb("channels_torrent.png") + thumb_series = get_thumb("channels_tvshow.png") + thumb_series_az = get_thumb("channels_tvshow_az.png") + + + if item.extra != "series": + item.url_plus = "movies" + itemlist.append(item.clone(title="Últimas Películas", action="listado", url=host + "estrenos-de-cine-2", url_plus=item.url_plus, thumbnail=thumb_cartelera)) + itemlist.append(item.clone(title="Alfabético", action="alfabeto", url=host + "letters/%s", thumbnail=thumb_pelis_az, extra2 = 'alfabeto')) + itemlist.append(item.clone(title="Géneros", action="categorias", url=host + item.url_plus, url_plus=item.url_plus, extra2= "generos", thumbnail=thumb_generos)) + itemlist.append(item.clone(title="Calidades", action="categorias", url=host + item.url_plus, url_plus=item.url_plus, extra2= "calidades", thumbnail=thumb_pelis_hd)) + itemlist.append(item.clone(title="Más vistas", action="listado", url=host + "/peliculas-mas-vistas-2/", url_plus=item.url_plus, thumbnail=thumb_popular, extra2="popular")) + itemlist.append(item.clone(title="Más votadas", action="listado", url=host + "/peliculas-mas-votadas/", url_plus=item.url_plus, thumbnail=thumb_popular, extra2="popular")) + itemlist.append(item.clone(title="Castellano", action="listado", url=host + "?s=spanish", url_plus=item.url_plus, thumbnail=thumb_spanish, extra2="CAST")) + itemlist.append(item.clone(title="Latino", action="listado", url=host + "?s=latino", url_plus=item.url_plus, thumbnail=thumb_latino, lextra2="LAT")) + itemlist.append(item.clone(title="Subtitulado", action="listado", url=host + "?s=Subtitulado", url_plus=item.url_plus, thumbnail=thumb_pelis_vos, extra2="VOSE")) + + else: + item.url_plus = "serie-tv" + itemlist.append(item.clone(title="Series completas", action="listado", url=item.url + item.url_plus, url_plus=item.url_plus, thumbnail=thumb_series, extra="series")) + itemlist.append(item.clone(title="Alfabético A-Z", action="alfabeto", url=item.url + "letters/%s", url_plus=item.url_plus, thumbnail=thumb_series_az, extra="series", extra2 = 'alfabeto')) + itemlist.append(item.clone(title="Más vistas", action="listado", url=host + "/peliculas-mas-vistas-2/", url_plus=item.url_plus, thumbnail=thumb_popular, extra2="popular")) + itemlist.append(item.clone(title="Más votadas", action="listado", url=host + 
"/peliculas-mas-votadas/", url_plus=item.url_plus, thumbnail=thumb_popular, extra2="popular")) + itemlist.append(item.clone(title="Castellano", action="listado", url=host + "?s=spanish", url_plus=item.url_plus, thumbnail=thumb_spanish, extra2="CAST")) + itemlist.append(item.clone(title="Latino", action="listado", url=host + "?s=latino", url_plus=item.url_plus, thumbnail=thumb_latino, extra2="LAT")) + itemlist.append(item.clone(title="Subtitulado", action="listado", url=host + "?s=Subtitulado", url_plus=item.url_plus, thumbnail=thumb_pelis_vos, extra2="VOSE")) + + return itemlist + + +def categorias(item): + logger.info() + + itemlist = [] + + data = '' + try: + data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url, timeout=timeout).data) + data = unicode(data, "utf-8", errors="replace").encode("utf-8") + except: + pass + + patron = '<div id="categories-2" class="Wdgt widget_categories"><div class="Title widget-title">Categorías</div><ul>(.*?)<\/ul><\/div>' + #Verificamos si se ha cargado una página, y si además tiene la estructura correcta + if not data or not scrapertools.find_single_match(data, patron): + item = generictools.web_intervenida(item, data) #Verificamos que no haya sido clausurada + if item.intervencion: #Sí ha sido clausurada judicialmente + for clone_inter, autoridad in item.intervencion: + thumb_intervenido = get_thumb(autoridad) + itemlist.append(item.clone(action='', title="[COLOR yellow]" + clone_inter.capitalize() + ': [/COLOR]' + intervenido_judicial + '. Reportar el problema en el foro', thumbnail=thumb_intervenido)) + return itemlist #Salimos + + logger.error("ERROR 01: SUBMENU: La Web no responde o ha cambiado de URL: " + item.url + data) + if not data: #Si no ha logrado encontrar nada, salimos + itemlist.append(item.clone(action='', title=item.category + ': ERROR 01: SUBMENU: La Web no responde o ha cambiado de URL. Si la Web está activa, reportar el error con el log')) + return itemlist #si no hay más datos, algo no funciona, pintamos lo que tenemos + + data = scrapertools.find_single_match(data, patron) + patron = '<li class="[^>]+><a href="([^"]+)"\s?(?:title="[^"]+")?>(.*?)<\/a><\/li>' + matches = re.compile(patron, re.DOTALL).findall(data) + + if not matches: + logger.error("ERROR 02: SUBMENU: Ha cambiado la estructura de la Web " + " / PATRON: " + patron + " / DATA: " + data) + itemlist.append(item.clone(action='', title=item.category + ': ERROR 02: SUBMENU: Ha cambiado la estructura de la Web. 
Reportar el error con el log')) + return itemlist #si no hay más datos, algo no funciona, pintamos lo que tenemos + + #logger.debug(item.url_plus) + #logger.debug(matches) + + for scrapedurl, scrapedtitle in matches: + + #Preguntamos por las entradas que corresponden al "extra2" + if item.extra2 == 'calidades': + if scrapedtitle.lower() in ['dvd full', 'tshq', 'bdrip', 'dvdscreener', 'brscreener r6', 'brscreener', 'webscreener', 'dvd', 'hdrip', 'screener', 'screeer', 'webrip', 'brrip', 'dvb', 'dvdrip', 'dvdsc', 'dvdsc - r6', 'hdts', 'hdtv', 'kvcd', 'line', 'ppv', 'telesync', 'ts hq', 'ts hq proper', '480p', '720p', 'ac3', 'bluray', 'camrip', 'ddc', 'hdtv - screener', 'tc screener', 'ts screener', 'ts screener alto', 'ts screener medio', 'vhs screener']: + itemlist.append(item.clone(action="listado", title=scrapedtitle.capitalize().strip(), url=scrapedurl)) + + else: + if scrapedtitle.lower() not in ['estrenos de cine', 'serie tv', 'dvd full', 'tshq', 'bdrip', 'dvdscreener', 'brscreener r6', 'brscreener', 'webscreener', 'dvd', 'hdrip', 'screener', 'screeer', 'webrip', 'brrip', 'dvb', 'dvdrip', 'dvdsc', 'dvdsc - r6', 'hdts', 'hdtv', 'kvcd', 'line', 'ppv', 'telesync', 'ts hq', 'ts hq proper', '480p', '720p', 'ac3', 'bluray', 'camrip', 'ddc', 'hdtv - screener', 'tc screener', 'ts screener', 'ts screener alto', 'ts screener medio', 'vhs screener']: + itemlist.append(item.clone(action="listado", title=scrapedtitle.capitalize().strip(), url=scrapedurl)) + + return itemlist + + +def alfabeto(item): + logger.info() + itemlist = [] + + itemlist.append(item.clone(action="listado", title="0-9", url=item.url % "0-9")) + + for letra in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']: + itemlist.append(item.clone(action="listado", title=letra, url=item.url % letra.lower())) + + return itemlist + + +def listado(item): + logger.info() + itemlist = [] + item.category = categoria + + #logger.debug(item) + + curr_page = 1 # Página inicial + last_page = 99999 # Última página inicial + if item.curr_page: + curr_page = int(item.curr_page) # Si viene de una pasada anterior, lo usamos + del item.curr_page # ... y lo borramos + if item.last_page: + last_page = int(item.last_page) # Si viene de una pasada anterior, lo usamos + del item.last_page # ... y lo borramos + + cnt_tot = 40 # Poner el num. máximo de items por página + cnt_title = 0 # Contador de líneas insertadas en Itemlist + inicio = time.time() # Controlaremos que el proceso no exceda de un tiempo razonable + fin = inicio + 10 # Después de este tiempo pintamos (segundos) + timeout_search = timeout # Timeout para descargas + if item.extra == 'search': + timeout_search = timeout * 2 # Timeout un poco más largo para las búsquedas + if timeout_search < 5: + timeout_search = 5 # Timeout un poco más largo para las búsquedas + + #Sistema de paginado para evitar páginas vacías o semi-vacías en casos de búsquedas con series con muchos episodios + title_lista = [] # Guarda la lista de series que ya están en Itemlist, para no duplicar lineas + if item.title_lista: # Si viene de una pasada anterior, la lista ya estará guardada + title_lista.extend(item.title_lista) # Se usa la lista de páginas anteriores en Item + del item.title_lista # ... limpiamos + + if not item.extra2: # Si viene de Catálogo o de Alfabeto + item.extra2 = '' + + next_page_url = item.url + #Máximo num. de líneas permitidas por TMDB. 
Máx de 10 segundos por Itemlist para no degradar el rendimiento + while cnt_title <= cnt_tot * 0.45 and curr_page <= last_page and fin > time.time(): + + # Descarga la página + data = '' + try: + data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)| ", "", httptools.downloadpage(next_page_url, timeout=timeout_search).data) + data = unicode(data, "utf-8", errors="replace").encode("utf-8") + except: + pass + + if not data: #Si la web está caída salimos sin dar error + logger.error("ERROR 01: LISTADO: La Web no responde o ha cambiado de URL: " + item.url + " / DATA: " + data) + itemlist.append(item.clone(action='', title=item.channel.capitalize() + ': ERROR 01: LISTADO:. La Web no responde o ha cambiado de URL. Si la Web está activa, reportar el error con el log')) + break #si no hay más datos, algo no funciona, pintamos lo que tenemos + + #Patrón para todo, menos para Alfabeto + patron = '<li class="TPostMv"><article id="[^"]+" class="[^"]+"><a href="(?P<url>[^"]+)".*?><div[^>]+><figure[^>]+><img[^>]+src="(?P<thumb>[^"]+)"[^>]+><\/figure>(?:<span class="TpTv BgA">(.*?)<\/span>)?<\/div><h2 class="Title">(?P<title>.*?)<\/h2>.*?<span class="Time[^>]+>(?P<duration>.*?)<\/span><span class="Date[^>]+>(?P<year>.*?)<\/span>(?:<span class="Qlty">(?P<quality>.*?)<\/span>)?<\/p><div class="Description">.*?<\/div><\/div><\/article><\/li>' + + #Si viene de Alfabeto, ponemos un patrón especializado + if item.extra2 == 'alfabeto': + patron = '<td class="MvTbImg"><a href="(?P<url>[^"]+)".*?src="(?P<thumb>[^"]+)"[^>]+>(?:<span class="TpTv BgA">(.*?)<\/span>)?<\/a><\/td>[^>]+>[^>]+><strong>(?P<title>.*?)<\/strong><\/a><\/td><td>(?P<year>.*?)<\/td><td><p class="Info"><span class="Qlty">(?P<quality>.*?)<\/span><\/p><\/td><td>(?P<duration>.*?)<\/td>' + + matches = re.compile(patron, re.DOTALL).findall(data) + if not matches and not 'Lo sentimos, no tenemos nada que mostrar' in data: #error + item = generictools.web_intervenida(item, data) #Verificamos que no haya sido clausurada + if item.intervencion: #Sí ha sido clausurada judicialmente + item, itemlist = generictools.post_tmdb_episodios(item, itemlist) #Llamamos al método para el pintado del error + return itemlist #Salimos + + logger.error("ERROR 02: LISTADO: Ha cambiado la estructura de la Web " + " / PATRON: " + patron + " / DATA: " + data) + itemlist.append(item.clone(action='', title=item.channel.capitalize() + ': ERROR 02: LISTADO: Ha cambiado la estructura de la Web. 
Reportar el error con el log')) + break #si no hay más datos, algo no funciona, pintamos lo que tenemos + + #logger.debug("PATRON: " + patron) + #logger.debug(matches) + #logger.debug(data) + + #Buscamos la url de paginado y la última página + if item.extra2 == 'alfabeto': #patrón especial + patron = "<div class='wp-pagenavi'><span class='pages'>Pagina \d+ of (\d+)<\/span><span class='current'>(\d+)<\/span>" + patron += '<a class="page larger" title="[^"]+" href="([^"]+)">' + else: + patron = '<div class="tr-pagnav wp-pagenavi">' + patron += "<span aria-current='page' class='page-numbers current'>(\d+)<\/span>.*?<a class='page-numbers' href='[^+]+'>(\d+)<\/a>" + patron += '<a class="next page-numbers" href="([^"]+)">Siguiente' + + if last_page == 99999: #Si es el valor inicial, buscamos + try: + if item.extra2 == 'alfabeto': #patrón especial + last_page, curr_page, next_page_url = scrapertools.find_single_match(data, patron) + else: + curr_page, last_page, next_page_url = scrapertools.find_single_match(data, patron) + curr_page = int(curr_page) + last_page = int(last_page) + except: #Si no lo encuentra, lo ponemos a 1 + #logger.error('ERROR 03: LISTADO: Al obtener la paginación: ' + patron) + curr_page = 1 + last_page = 0 + next_page_url = item.url + '/page/1' + #logger.debug('curr_page: ' + str(curr_page) + ' / last_page: ' + str(last_page) + ' / url: ' + next_page_url) + if last_page > 1: + curr_page += 1 #Apunto ya a la página siguiente + next_page_url = re.sub(r'\/page\/\d+', '/page/%s' % curr_page, next_page_url) + + #Empezamos el procesado de matches + for scrapedurl, scrapedthumb, scrapedtype, scrapedtitle, scrapedduration, scrapedyear, scrapedquality in matches: + if item.extra2 == 'alfabeto': #Cambia el orden de tres parámetros + duration = scrapedquality + year = scrapedduration + quality = scrapedyear + else: #lo estándar + duration = scrapedduration + year = scrapedyear + quality = scrapedquality + + #estandarizamos la duración + if 'h' not in duration: + duration = '0:' + duration.replace('m', '') + else: + duration = duration.replace('h ', ':').replace('m', '') + duration = re.sub(r',.*?\]', ']', duration) + if '0:0' in duration or ',' in duration: + duration = '' + else: + try: + hora, minuto = duration.split(':') + duration = '%s:%s h' % (str(hora).zfill(2), str(minuto).zfill(2)) + except: + duration = '' + + #Algunos enlaces no filtran tipos, lo hago aquí + if item.extra2 in ['alfabeto', 'CAST', 'LAT', 'VOSE', 'popular'] or item.category_new == 'newest': + if item.extra == 'peliculas' and 'tv' in scrapedtype.lower(): + continue + elif item.extra == 'series' and not 'tv' in scrapedtype.lower(): + continue + + title = scrapedtitle + title = title.replace("á", "a").replace("é", "e").replace("í", "i").replace("ó", "o").replace("ú", "u").replace("ü", "u").replace("�", "ñ").replace("ñ", "ñ").replace("ã", "a").replace("&etilde;", "e").replace("ĩ", "i").replace("õ", "o").replace("ũ", "u").replace("ñ", "ñ").replace("’", "'") + + cnt_title += 1 + + item_local = item.clone() #Creamos copia de Item para trabajar + if item_local.tipo: #... 
y limpiamos + del item_local.tipo + if item_local.totalItems: + del item_local.totalItems + if item_local.post_num: + del item_local.post_num + if item_local.intervencion: + del item_local.intervencion + if item_local.viewmode: + del item_local.viewmode + item_local.text_bold = True + del item_local.text_bold + item_local.text_color = True + del item_local.text_color + if item_local.url_plus: + del item_local.url_plus + + title_subs = [] #creamos una lista para guardar info importante + item_local.language = [] #iniciamos Lenguaje + item_local.quality = quality #guardamos la calidad, si la hay + item_local.url = scrapedurl #guardamos el thumb + item_local.thumbnail = scrapedthumb #guardamos el thumb + item_local.context = "['buscar_trailer']" + + item_local.contentType = "movie" #por defecto, son películas + item_local.action = "findvideos" + + #Analizamos los formatos de series + if '-serie-tv-' in scrapedurl or item_local.extra == 'series' or 'tv' in scrapedtype.lower(): + item_local.contentType = "tvshow" + item_local.action = "episodios" + item_local.season_colapse = True #Muestra las series agrupadas por temporadas + + #Buscamos calidades adicionales + if "3d" in title.lower() and not "3d" in item_local.quality.lower(): + if item_local.quality: + item_local.quality += " 3D" + else: + item_local.quality = "3D" + title = re.sub('3D', '', title, flags=re.IGNORECASE) + title = title.replace('[]', '') + if item_local.quality: + item_local.quality += ' %s' % scrapertools.find_single_match(title, '\[(.*?)\]') + else: + item_local.quality = '%s' % scrapertools.find_single_match(title, '\[(.*?)\]') + + #Detectamos idiomas + if 'LAT' in item.extra2: + item_local.language += ['LAT'] + elif 'VOSE' in item.extra2: + item_local.language += ['VOSE'] + if item_local.extra2: del item_local.extra2 + + if ("latino" in scrapedurl.lower() or "latino" in title.lower()) and "LAT" not in item_local.language: + item_local.language += ['LAT'] + elif ('subtitulado' in scrapedurl.lower() or 'subtitulado' in title.lower() or 'vose' in title.lower()) and "VOSE" not in item_local.language: + item_local.language += ['VOSE'] + elif ('version-original' in scrapedurl.lower() or 'version original' in title.lower() or 'vo' in title.lower()) and "VO" not in item_local.language: + item_local.language += ['VO'] + + if item_local.language == []: + item_local.language = ['CAST'] + + #Detectamos info interesante a guardar para después de TMDB + if scrapertools.find_single_match(title, '[m|M].*?serie'): + title = re.sub(r'[m|M]iniserie', '', title) + title_subs += ["Miniserie"] + if scrapertools.find_single_match(title, '[s|S]aga'): + title = re.sub(r'[s|S]aga', '', title) + title_subs += ["Saga"] + if scrapertools.find_single_match(title, '[c|C]olecc'): + title = re.sub(r'[c|C]olecc...', '', title) + title_subs += ["Colección"] + + if "duolog" in title.lower(): + title_subs += ["[Saga]"] + title = title.replace(" Duologia", "").replace(" duologia", "").replace(" Duolog", "").replace(" duolog", "") + if "trilog" in title.lower(): + title_subs += ["[Saga]"] + title = title.replace(" Trilogia", "").replace(" trilogia", "").replace(" Trilog", "").replace(" trilog", "") + if "extendida" in title.lower() or "v.e." in title.lower()or "v e " in title.lower(): + title_subs += ["[V. Extendida]"] + title = title.replace("Version Extendida", "").replace("(Version Extendida)", "").replace("V. 
Extendida", "").replace("VExtendida", "").replace("V Extendida", "").replace("V.Extendida", "").replace("V Extendida", "").replace("V.E.", "").replace("V E ", "").replace("V:Extendida", "") + + #Analizamos el año. Si no está claro ponemos '-' + try: + yeat_int = int(year) + if yeat_int >= 1970 and yeat_int <= 2040: + item_local.infoLabels["year"] = yeat_int + else: + item_local.infoLabels["year"] = '-' + except: + item_local.infoLabels["year"] = '-' + + #Empezamos a limpiar el título en varias pasadas + patron = '\s?-?\s?(line)?\s?-\s?$' + regex = re.compile(patron, re.I) + title = regex.sub("", title) + title = re.sub(r'\(\d{4}\s*?\)', '', title) + title = re.sub(r'\[\d{4}\s*?\]', '', title) + title = re.sub(r'[s|S]erie', '', title) + title = re.sub(r'- $', '', title) + + #Limpiamos el título de la basura innecesaria + title = re.sub(r'TV|Online|Spanish|Torrent|en Espa\xc3\xb1ol|Español|Latino|Subtitulado|Blurayrip|Bluray rip|\[.*?\]|R2 Pal|\xe3\x80\x90 Descargar Torrent \xe3\x80\x91|Completa|Temporada|Descargar|Torren', '', title, flags=re.IGNORECASE) + + title = title.replace("Dual", "").replace("dual", "").replace("Subtitulada", "").replace("subtitulada", "").replace("Subt", "").replace("subt", "").replace("(Proper)", "").replace("(proper)", "").replace("Proper", "").replace("proper", "").replace("#", "").replace("(Latino)", "").replace("Latino", "").replace("LATINO", "").replace("Spanish", "").replace("Trailer", "").replace("Audio", "") + title = title.replace("HDTV-Screener", "").replace("DVDSCR", "").replace("TS ALTA", "").replace("- HDRip", "").replace("(HDRip)", "").replace("- Hdrip", "").replace("(microHD)", "").replace("(DVDRip)", "").replace("HDRip", "").replace("(BR-LINE)", "").replace("(HDTS-SCREENER)", "").replace("(BDRip)", "").replace("(BR-Screener)", "").replace("(DVDScreener)", "").replace("TS-Screener", "").replace(" TS", "").replace(" Ts", "").replace(" 480p", "").replace(" 480P", "").replace(" 720p", "").replace(" 720P", "").replace(" 1080p", "").replace(" 1080P", "").replace("DVDRip", "").replace(" Dvd", "").replace(" DVD", "").replace(" V.O", "").replace(" Unrated", "").replace(" UNRATED", "").replace(" unrated", "").replace("screener", "").replace("TS-SCREENER", "").replace("TSScreener", "").replace("HQ", "").replace("AC3 5.1", "").replace("Telesync", "").replace("Line Dubbed", "").replace("line Dubbed", "").replace("LineDuB", "").replace("Line", "").replace("XviD", "").replace("xvid", "").replace("XVID", "").replace("Mic Dubbed", "").replace("HD", "").replace("V2", "").replace("CAM", "").replace("VHS.SCR", "").replace("Dvd5", "").replace("DVD5", "").replace("Iso", "").replace("ISO", "").replace("Reparado", "").replace("reparado", "").replace("DVD9", "").replace("Dvd9", "") + + #Terminamos de limpiar el título + title = re.sub(r'\??\s?\d*?\&.*', '', title) + title = re.sub(r'[\(|\[]\s+[\)|\]]', '', title) + title = title.replace('()', '').replace('[]', '').strip().lower().title() + + #Limpiamos el año del título, siempre que no sea todo el título o una cifra de más dígitos + if not scrapertools.find_single_match(title, '\d{5}'): + title_alt = title + title_alt = re.sub(r'[\[|\(]?\d{4}[\)|\]]?', '', title_alt).strip() + if title_alt: + title = title_alt + + item_local.from_title = title.strip().lower().title() #Guardamos esta etiqueta para posible desambiguación de título + + #Salvamos el título según el tipo de contenido + if item_local.contentType == "movie": + item_local.contentTitle = title.strip().lower().title() + else: + item_local.contentSerieName = 
title.strip().lower().title()
+
+        item_local.title = title.strip().lower().title()
+
+        #Añadimos la duración a la Calidad
+        if duration:
+            if item_local.quality:
+                item_local.quality += ' [%s]' % duration
+            else:
+                item_local.quality = '[%s]' % duration
+
+        #Guarda la variable temporal que almacena la info adicional del título a ser restaurada después de TMDB
+        item_local.title_subs = title_subs
+
+        itemlist.append(item_local.clone())        #Pintar pantalla
+
+        #logger.debug(item_local)
+
+    #Pasamos a TMDB la lista completa Itemlist
+    tmdb.set_infoLabels(itemlist, __modo_grafico__)
+
+    #Llamamos al método para el maquillaje de los títulos obtenidos desde TMDB
+    item, itemlist = generictools.post_tmdb_listado(item, itemlist)
+
+    # Si es necesario añadir paginación
+    if curr_page <= last_page:
+        if last_page > 1:
+            title = '%s de %s' % (curr_page - 1, last_page)
+        else:
+            title = '%s' % (curr_page - 1)
+
+        itemlist.append(Item(channel=item.channel, action="listado", title=">> Página siguiente " + title, title_lista=title_lista, url=next_page_url, extra=item.extra, extra2=item.extra2, last_page=str(last_page), curr_page=str(curr_page)))
+
+    return itemlist
+
+
+def findvideos(item):
+    logger.info()
+    itemlist = []
+    matches = []
+    item.category = categoria
+
+    item.extra2 = 'xyz'
+    del item.extra2
+
+    #logger.debug(item)
+
+    #Bajamos los datos de la página
+    data = ''
+    patron = '<a[^>]+href="([^"]+)"[^<]+</a></td><td><span><img[^>]+>(.*?)</span></td><td><span><img[^>]+>(.*?)</span></td><td><span>(.*?)</span>'
+    try:
+        data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)", "", httptools.downloadpage(item.url, timeout=timeout).data)
+        data = unicode(data, "utf-8", errors="replace").encode("utf-8")
+        data = re.sub(r"&quot;", '"', data)
+        data = re.sub(r"&lt;", '<', data)
+    except:
+        pass
+
+    if not data:
+        logger.error("ERROR 01: FINDVIDEOS: La Web no responde o la URL es erronea: " + item.url)
+        itemlist.append(item.clone(action='', title=item.channel.capitalize() + ': ERROR 01: FINDVIDEOS: La Web no responde o la URL es erronea. Si la Web está activa, reportar el error con el log'))
+        return itemlist        #si no hay más datos, algo no funciona, pintamos lo que tenemos
+
+    matches = re.compile(patron, re.DOTALL).findall(data)
+    if not matches and not scrapertools.find_single_match(data, 'data-TPlayerNv="Opt\d+">.*? <span>(.*?)</span></li>'):        #error
+        logger.error("ERROR 02: FINDVIDEOS: No hay enlaces o ha cambiado la estructura de la Web " + " / PATRON: " + patron + " / DATA: " + data)
+        itemlist.append(item.clone(action='', title=item.channel.capitalize() + ': ERROR 02: FINDVIDEOS: No hay enlaces o ha cambiado la estructura de la Web. 
Verificar en la Web esto último y reportar el error con el log')) + return itemlist #si no hay más datos, algo no funciona, pintamos lo que tenemos + + #logger.debug("PATRON: " + patron) + #logger.debug(matches) + #logger.debug(data) + + #Llamamos al método para crear el título general del vídeo, con toda la información obtenida de TMDB + item, itemlist = generictools.post_tmdb_findvideos(item, itemlist) + + #Ahora tratamos los enlaces .torrent + for scrapedurl, scrapedserver, language, quality in matches: #leemos los torrents con la diferentes calidades + #Generamos una copia de Item para trabajar sobre ella + item_local = item.clone() + + if 'torrent' not in scrapedserver.lower(): #Si es un servidor Directo, lo dejamos para luego + continue + + item_local.url = scrapedurl + if '.io/' in item_local.url: + item_local.url = re.sub(r'http.?:\/\/\w+\.\w+\/', host, item_local.url) #Aseguramos el dominio del canal + + #Detectamos idiomas + if ("latino" in scrapedurl.lower() or "latino" in language.lower()) and "LAT" not in item_local.language: + item_local.language += ['LAT'] + elif ('subtitulado' in scrapedurl.lower() or 'subtitulado' in language.lower() or 'vose' in language.lower()) and "VOSE" not in item_local.language: + item_local.language += ['VOSE'] + elif ('version-original' in scrapedurl.lower() or 'version original' in language.lower() or 'vo' in language.lower()) and "VO" not in item_local.language: + item_local.language += ['VO'] + + if item_local.language == []: + item_local.language = ['CAST'] + + #Añadimos la calidad y copiamos la duración + item_local.quality = quality + if scrapertools.find_single_match(item.quality, '(\[\d+:\d+\ h])'): + item_local.quality += ' [/COLOR][COLOR white]%s' % scrapertools.find_single_match(item.quality, '(\[\d+:\d+\ h])') + + #Buscamos si ya tiene tamaño, si no, los buscamos en el archivo .torrent + size = scrapertools.find_single_match(item_local.quality, '\s\[(\d+,?\d*?\s\w\s?[b|B])\]') + if not size: + size = generictools.get_torrent_size(item_local.url) #Buscamos el tamaño en el .torrent + if size: + item_local.title = re.sub(r'\s\[\d+,?\d*?\s\w[b|B]\]', '', item_local.title) #Quitamos size de título, si lo traía + item_local.title = '%s [%s]' % (item_local.title, size) #Agregamos size al final del título + size = size.replace('GB', 'G B').replace('Gb', 'G b').replace('MB', 'M B').replace('Mb', 'M b') + item_local.quality = re.sub(r'\s\[\d+,?\d*?\s\w\s?[b|B]\]', '', item_local.quality) #Quitamos size de calidad, si lo traía + item_local.quality = '%s [%s]' % (item_local.quality, size) #Agregamos size al final de la calidad + + #Ahora pintamos el link del Torrent + item_local.title = '[COLOR yellow][?][/COLOR] [COLOR yellow][Torrent][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (item_local.quality, str(item_local.language)) + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.quality).strip() + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() 
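Those two regexes are what keep the Kodi labels readable when quality or language turns out empty: they drop any [COLOR ...] wrapper whose body is blank before the leftover "[]"/"()" pairs are stripped. A minimal standalone sketch of that same cleanup for reference only (not part of the patch; the helper name and sample string below are illustrative):

import re

def limpiar_etiquetas_vacias(texto):
    # Drop color wrappers whose body ended up empty, e.g. "[COLOR limegreen][][/COLOR]"
    texto = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', texto)
    # Drop color wrappers that only contain whitespace, e.g. "[COLOR yellow] [/COLOR]"
    texto = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', texto)
    # Finally remove the leftover empty bracket pairs, as the channel does
    return texto.replace("--", "").replace("[]", "").replace("()", "").strip()

# Example: a torrent entry where no quality was detected
print(limpiar_etiquetas_vacias('[COLOR yellow][Torrent][/COLOR] [COLOR limegreen][][/COLOR] [COLOR red][\'CAST\'][/COLOR]'))
# -> [COLOR yellow][Torrent][/COLOR] [COLOR red]['CAST'][/COLOR]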
+ + item_local.alive = "??" #Calidad del link sin verificar + item_local.action = "play" #Visualizar vídeo + item_local.server = "torrent" #Servidor Torrent + + itemlist.append(item_local.clone()) #Pintar pantalla + + #logger.debug("TORRENT: " + scrapedurl + " / title gen/torr: " + item.title + " / " + item_local.title + " / calidad: " + item_local.quality + " / content: " + item_local.contentTitle + " / " + item_local.contentSerieName) + #logger.debug(item_local) + + #Ahora tratamos los Servidores Directos + titles = re.compile('data-TPlayerNv="Opt\d+">.*? <span>(.*?)</span></li>', re.DOTALL).findall(data) + urls = re.compile('id="Opt\d+"><iframe[^>]+src="([^"]+)"', re.DOTALL).findall(data) + + #Recorremos la lista de servidores Directos, excluyendo YouTube para trailers + if len(titles) == len(urls): + for i in range(0, len(titles)): + #Generamos una copia de Item para trabajar sobre ella + item_local = item.clone() + + if i > 0: + #logger.debug('titles: %s' % titles[i].strip()) + language, quality = titles[i].split(' - ') + title = "%s" % titles[i].strip() + else: + title = titles[0] + + if "goo.gl" in urls[i]: + urls[i] = httptools.downloadpage(urls[i], follow_redirects=False, only_headers=True)\ + .headers.get("location", "") + + videourl = servertools.findvideos(urls[i]) #Buscamos la url del vídeo + + #Ya tenemos un enlace, lo pintamos + if len(videourl) > 0: + server = videourl[0][0] + enlace = videourl[0][1] + mostrar_server = True + if config.get_setting("hidepremium"): #Si no se aceptan servidore premium, se ignoran + mostrar_server = servertools.is_server_enabled(server) + if mostrar_server: + item_local.alive = "??" #Se asume poe defecto que es link es dudoso + if server.lower() == 'youtube': #Pasamos de YouTube, usamos Trailers de Alfa + continue + if server.lower() != 'netutv': #Este servidor no se puede comprobar + #Llama a la subfunción de check_list_links(itemlist) para cada link de servidor + item_local.alive = servertools.check_video_link(enlace, server, timeout=timeout) + if '?' in item_local.alive: + alive = '?' 
#No se ha podido comprobar el vídeo + elif 'no' in item_local.alive.lower(): + continue #El enlace es malo + else: + alive = '' #El enlace está verificado + + #Detectamos idiomas + item_local.language = [] + if "latino" in language.lower() and "LAT" not in item_local.language: + item_local.language += ['LAT'] + elif ('subtitulado' in language.lower() or 'vose' in language.lower()) and "VOSE" not in item_local.language: + item_local.language += ['VOSE'] + elif ('version original' in language.lower() or 'vo' in language.lower()) and "VO" not in item_local.language: + item_local.language += ['VO'] + + if item_local.language == []: + item_local.language = ['CAST'] + + #Ahora pintamos el link del Servidor Directo + item_local.url = enlace + item_local.quality = quality #Añadimos la calidad + if scrapertools.find_single_match(item.quality, '(\[\d+:\d+\ h])'): #Añadimos la duración + item_local.quality += ' [/COLOR][COLOR white]%s' % scrapertools.find_single_match(item.quality, '(\[\d+:\d+\ h])') + item_local.title = '[COLOR yellow][%s][/COLOR] [COLOR yellow][%s][/COLOR] [COLOR limegreen][%s][/COLOR] [COLOR red]%s[/COLOR]' % (alive, server.capitalize(), item_local.quality, str(item_local.language)) + + #Preparamos título y calidad, quitamos etiquetas vacías + item_local.title = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.title) + item_local.title = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.title) + item_local.title = item_local.title.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\[\[?\s?\]?\]\[\/COLOR\]', '', item_local.quality) + item_local.quality = re.sub(r'\s?\[COLOR \w+\]\s?\[\/COLOR\]', '', item_local.quality) + item_local.quality = item_local.quality.replace("--", "").replace("[]", "").replace("()", "").replace("(/)", "").replace("[/]", "").strip() + + item_local.action = "play" #Visualizar vídeo + item_local.server = server #Servidor Directo + + itemlist.append(item_local.clone()) #Pintar pantalla + + #logger.debug("DIRECTO: " server + ' / ' + enlace + " / title: " + item.title + " / " + item_local.title + " / calidad: " + item_local.quality + " / content: " + item_local.contentTitle + " / " + item_local.contentSerieName) + #logger.debug(item_local) + + return itemlist + + +def episodios(item): + logger.info() + itemlist = [] + item.category = categoria + + #logger.debug(item) + + if item.from_title: + item.title = item.from_title + item.extra2 = 'xyz' + del item.extra2 + + item.quality = re.sub(r'\s?\[\d+:\d+\ h]', '', item.quality) #quitamos la duración de la serie + + #Limpiamos num. Temporada y Episodio que ha podido quedar por Novedades + season_display = 0 + if item.contentSeason: + if item.season_colapse: #Si viene del menú de Temporadas... + season_display = item.contentSeason #... salvamos el num de sesión a pintar + item.from_num_season_colapse = season_display + del item.season_colapse + item.contentType = "tvshow" + if item.from_title_season_colapse: + item.title = item.from_title_season_colapse + del item.from_title_season_colapse + if item.infoLabels['title']: + del item.infoLabels['title'] + del item.infoLabels['season'] + if item.contentEpisodeNumber: + del item.infoLabels['episode'] + if season_display == 0 and item.from_num_season_colapse: + season_display = item.from_num_season_colapse + + # Obtener la información actualizada de la Serie. 
TMDB es imprescindible para Videoteca
+    if not item.infoLabels['tmdb_id']:
+        tmdb.set_infoLabels(item, True)
+
+    modo_ultima_temp_alt = modo_ultima_temp
+    if item.ow_force == "1":        #Si hay un traspaso de canal o url, se actualiza todo
+        modo_ultima_temp_alt = False
+
+    max_temp = 1
+    if item.infoLabels['number_of_seasons']:
+        max_temp = item.infoLabels['number_of_seasons']
+    y = []
+    if modo_ultima_temp_alt and item.library_playcounts:        #Averiguar cuantas temporadas hay en Videoteca
+        patron = 'season (\d+)'
+        matches = re.compile(patron, re.DOTALL).findall(str(item.library_playcounts))
+        for x in matches:
+            y += [int(x)]
+        max_temp = max(y)
+
+    # Descarga la página
+    data = ''        #Inserto en num de página en la url
+    try:
+        data = re.sub(r"\n|\r|\t|\s{2}|(<!--.*?-->)|&nbsp;", "", httptools.downloadpage(item.url, timeout=timeout).data)
+        data = unicode(data, "utf-8", errors="replace").encode("utf-8")
+        data = re.sub(r"&quot;", '"', data)
+        data = re.sub(r"&lt;", '<', data)
+        data = re.sub(r"&gt;", '>', data)
+    except:        #Algún error de proceso, salimos
+        pass
+
+    if not data:
+        logger.error("ERROR 01: EPISODIOS: La Web no responde o la URL es erronea: " + item.url)
+        itemlist.append(item.clone(action='', title=item.channel.capitalize() + ': ERROR 01: EPISODIOS: La Web no responde o la URL es erronea. Si la Web está activa, reportar el error con el log'))
+        return itemlist
+
+    #Buscamos los episodios
+    patron = '<tr><td><span class="Num">\d+<\/span><\/td><td class="MvTbImg B"><a href="([^"]+)" class="MvTbImg">(?:<span class="[^>]+>)?<img src="([^"]+)" alt="([^"]+)">(?:<\/span>)?<\/a><\/td><td class="MvTbTtl">[^>]+>(.*?)<\/a>'
+    matches = re.compile(patron, re.DOTALL).findall(data)
+    if not matches:        #error
+        item = generictools.web_intervenida(item, data)        #Verificamos que no haya sido clausurada
+        if item.intervencion:        #Sí ha sido clausurada judicialmente
+            item, itemlist = generictools.post_tmdb_episodios(item, itemlist)        #Llamamos al método para el pintado del error
+            return itemlist        #Salimos
+
+        logger.error("ERROR 02: EPISODIOS: Ha cambiado la estructura de la Web " + " / PATRON: " + patron + " / DATA: " + data)
+        itemlist.append(item.clone(action='', title=item.channel.capitalize() + ': ERROR 02: EPISODIOS: Ha cambiado la estructura de la Web. Reportar el error con el log'))
+        return itemlist        #si no hay más datos, algo no funciona, pintamos lo que tenemos
+
+    #logger.debug("PATRON: " + patron)
+    #logger.debug(matches)
+    #logger.debug(data)
+
+    season = max_temp
+    #Comprobamos si realmente sabemos el num. 
máximo de temporadas + if item.library_playcounts or (item.infoLabels['number_of_seasons'] and item.tmdb_stat): + num_temporadas_flag = True + else: + num_temporadas_flag = False + + # Recorremos todos los episodios generando un Item local por cada uno en Itemlist + for scrapedurl, scrapedthumbnail, scrapedtitle, scrapedepi_name in matches: + item_local = item.clone() + item_local.action = "findvideos" + item_local.contentType = "episode" + item_local.extra = "episodios" + if item_local.library_playcounts: + del item_local.library_playcounts + if item_local.library_urls: + del item_local.library_urls + if item_local.path: + del item_local.path + if item_local.update_last: + del item_local.update_last + if item_local.update_next: + del item_local.update_next + if item_local.channel_host: + del item_local.channel_host + if item_local.active: + del item_local.active + if item_local.contentTitle: + del item_local.infoLabels['title'] + if item_local.season_colapse: + del item_local.season_colapse + + item_local.title = '' + item_local.context = "['buscar_trailer']" + item_local.url = scrapedurl + title = scrapedtitle + item_local.language = [] + + #Buscamos calidades del episodio + if 'hdtv' in scrapedtitle.lower() or 'hdtv' in scrapedurl: + item_local.quality = 'HDTV' + elif 'hd7' in scrapedtitle.lower() or 'hd7' in scrapedurl: + item_local.quality = 'HD720p' + elif 'hd1' in scrapedtitle.lower() or 'hd1' in scrapedurl: + item_local.quality = 'HD1080p' + + #Buscamos idiomas del episodio + lang = scrapedtitle.strip() + if ('vo' in lang.lower() or 'v.o' in lang.lower() or 'vo' in scrapedurl.lower() or 'v.o' in scrapedurl.lower()) and not 'VO' in item_local.language: + item_local.language += ['VO'] + elif ('vose' in lang.lower() or 'v.o.s.e' in lang.lower() or 'vose' in scrapedurl.lower() or 'v.o.s.e' in scrapedurl.lower()) and not 'VOSE' in item_local.language: + item_local.language += ['VOSE'] + elif ('latino' in lang.lower() or 'latino' in scrapedurl.lower()) and not 'LAT' in item_local.language: + item_local.language += ['LAT'] + + if not item_local.language: + item_local.language += ['CAST'] + + #Buscamos la Temporada y el Episodio + try: + item_local.contentEpisodeNumber = 0 + if 'miniserie' in title.lower(): + item_local.contentSeason = 1 + title = title.replace('miniserie', '').replace('MiniSerie', '') + elif 'completa' in title.lower(): + patron = '[t|T].*?(\d+) [c|C]ompleta' + if scrapertools.find_single_match(title, patron): + item_local.contentSeason = int(scrapertools.find_single_match(title, patron)) + if not item_local.contentSeason: + #Extraemos los episodios + patron = '(\d{1,2})[x|X](\d{1,2})' + item_local.contentSeason, item_local.contentEpisodeNumber = scrapertools.find_single_match(title, patron) + item_local.contentSeason = int(item_local.contentSeason) + item_local.contentEpisodeNumber = int(item_local.contentEpisodeNumber) + except: + logger.error('ERROR al extraer Temporada/Episodio: ' + title) + item_local.contentSeason = 1 + item_local.contentEpisodeNumber = 0 + + #Si son episodios múltiples, lo extraemos + patron1 = '\d+[x|X]\d{1,2}.?(?:y|Y|al|Al)?(?:\d+[x|X]\d{1,2})?.?(?:y|Y|al|Al)?.?\d+[x|X](\d{1,2})' + epi_rango = scrapertools.find_single_match(title, patron1) + if epi_rango: + item_local.infoLabels['episodio_titulo'] = 'al %s %s' % (epi_rango, scrapedepi_name) + item_local.title = '%sx%s al %s -' % (str(item_local.contentSeason), str(item_local.contentEpisodeNumber).zfill(2), str(epi_rango).zfill(2)) + else: + item_local.title = '%sx%s -' % 
(str(item_local.contentSeason), str(item_local.contentEpisodeNumber).zfill(2)) + item.infoLabels['episodio_titulo'] = '%s' %scrapedepi_name + + if modo_ultima_temp_alt and item.library_playcounts: #Si solo se actualiza la última temporada de Videoteca + if item_local.contentSeason < max_temp: + continue #salta al siguiente episodio + + #Mostramos solo la temporada requerida + if season_display > 0: + if item_local.contentSeason > season_display: + break + elif item_local.contentSeason < season_display: + continue + + itemlist.append(item_local.clone()) + + #logger.debug(item_local) + + if len(itemlist) > 1: + itemlist = sorted(itemlist, key=lambda it: (int(it.contentSeason), int(it.contentEpisodeNumber))) #clasificamos + + if item.season_colapse and not item.add_videolibrary: #Si viene de listado, mostramos solo Temporadas + item, itemlist = generictools.post_tmdb_seasons(item, itemlist) + + if not item.season_colapse: #Si no es pantalla de Temporadas, pintamos todo + # Pasada por TMDB y clasificación de lista por temporada y episodio + tmdb.set_infoLabels(itemlist, True) + + #Llamamos al método para el maquillaje de los títulos obtenidos desde TMDB + item, itemlist = generictools.post_tmdb_episodios(item, itemlist) + + #logger.debug(item) + + return itemlist + + +def actualizar_titulos(item): + logger.info() + + item = generictools.update_title(item) #Llamamos al método que actualiza el título con tmdb.find_and_set_infoLabels + + #Volvemos a la siguiente acción en el canal + return item + + +def search(item, texto): + logger.info() + #texto = texto.replace(" ", "+") + + item.url = item.url + texto + + if texto != '': + return listado(item) + + try: + item.url = item.url + texto + + if texto != '': + return listado(item) + except: + import sys + for line in sys.exc_info(): + logger.error("{0}".format(line)) + return [] + + +def newest(categoria): + logger.info() + itemlist = [] + item = Item() + + try: + if categoria == 'peliculas': + item.url = host + "estrenos-de-cine-2" + item.extra = "peliculas" + item.channel = channel + item.category_new= 'newest' + + itemlist = listado(item) + if ">> Página siguiente" in itemlist[-1].title: + itemlist.pop() + + # Se captura la excepción, para no interrumpir al canal novedades si un canal falla + except: + import sys + for line in sys.exc_info(): + logger.error("{0}".format(line)) + return [] + + return itemlist diff --git a/plugin.video.alfa/resources/media/channels/thumb/zonatorrent.png b/plugin.video.alfa/resources/media/channels/thumb/zonatorrent.png new file mode 100644 index 0000000000000000000000000000000000000000..13fe418f188545625aefa84895b7aadc0df31b88 GIT binary patch literal 36220 zcmZ^~V|ZL$)IS{CMx)6jjcsdUqm4PS(Z;rIHMVWEQDZc0)YxvEywm%6p8vOZe>msN zHRn2eueE-Ac}J=$%b=r>pg=)Eq07lisY5}*WI~=dkPsl(ulA&MP*Aawa#G@&9;>H| z1<?IsXg})D``iRV-ZKKq8~VCnp#sWWAg?OYLP5FmLmp^xp#nNSKpv8jq5AbPpr8WA z;Gm3I5TT&@!7xy4>99~x#vIU48RO7UP;8K?CiWojW+*~I?f?IO=J#}O&xo|z4`(O} zj8)rl{MD|%@^#Jn6!$RfFmHiW1B49nNN^;oR9HCN2}IQI515g_Lb`2YGZJKal6RaU zoI=`=&j4)3x&WnsYaBaOU9~)T9#wUJGyi#mR`;H&uBElsO*s+dlAe|I%=MLvsZ*U= z$f6*F3isc2ZpsDz-yI6+A8VizzNRt$yF-bM!a=S9%}x3cS|CFL!UHI%;@rJ|+=PO{ zf{+OXMR~^b`M;k53CREM)|MSG|NRb`fJTCchaXZ6%V-}}<(QzX5`4XXxo0U|3QekQ zy*P$-6-6r*IjrCILz*(}fEg9S!%Go2x>YSg%(P3AbQyE{L-h!TBemuT^1s;bo8FnW ztb*dofKj=SG{0`WC~ot`^i6O)hIO~h^}75J;X7mU{S_-`$OS&ZlSjy2k`+CG)V_Pi ziH0K}ZmnV|oNPcHK70FvI!Z)5Ub0$oN=o8vu~6LX%QMt_Prj(p=(eSY<*V0n!#2^A zStKFDdYBmh;KE;;u{A@>!(jFEV8EX}tSr{xvdLFApd=OtBJ?=E*nWvR4$~%>lXpCj zyaU}n4c{?fX*dpF8Cz8BSmEojwxVzTn$q=-JAVth`a=3rcqL=k+JxryUj+gtnV)Qp 
z73Fkp-IbMpUHH1Mbh$d8rJ257o-a4>9^H@$7xW2Vy8|09OzAY;n;Wkunf=~5B_eEa zcDFd?5y&^<M)2J9UpwBDWwDs<#HPi&t$L;jKiO}RMEcOKy;(3V7Pq-xi%^D}Q{>cC zJX-mKh5%Gj5t`0W=%ZT|5@q}k_s2qW=vpHwVXnO=J}O+)aS8nO5KwT`5kk9`dphj~ zRwf`aQvi3(;e|A|kI;;tw$4LH?f%pkv#wXr2x@E+Hkj`7zI2VwAcU5h_iQ&BexY`j zzvRIyGoTpencxixVKVC~;j_&g`QqV^jF4|1DaF$~WctxuyJ5UW`;-H|^rHN&EfkAg z{kut7O1vf9MY!Ji5|sXM?9zEh<GR+jGO_lTe5>0N!G>j;$uOSy0~;={+cwq1c}JK) zT`DQ5P-Toa5y>sLGAEWuxRKp$FO=Pe6}Kr-E(cnap-2lYZ(XGUb)R=*2(G$|L4AQ1 z^Rb8UBzTrNwY7P_w6(a?EtvMnP!Nntvqu@x2eztaa_vavT9fVS@`qzQmBU!M>#LwS zaRYw}QD#e+=TXsl?uo>c^49wNs?@?gIXTH+ZLwW?2@4A=DO1k%dY{PTBvsBA={Y}L zX`<le=C&{PcNF+~y8Ls!%bj(-*Y|m_v$HdufJvJz(XxeM)GAMlo}Kx?6Rm)gQKLye z>`(d0cMtMc!iC$+0ih0Gd<-uneg|&bj<#-}L5l6mpKf8K)T_#*_7~x?BAaN$eEf$g zH0=cXuR>O|cK4ls{@ARXe?@%SfP3GREMRR*O`H1s+P_qk|2?Pd0Fm&ueQIGriY4?r zYhG1Fre|fRS8Gma>OrZH${F85?7=fh9q(#OycZ>T>EFp)ryO&AbGcir=!7GmG{%4l z1@r#hXr9kaCJCgcS&{`Re0H$__zzf(PcA&1hV|7oHFrcC$Tzjx4FH#i%k5A?7JY5+ zyW?NN_V)H4&wqE^9Q1hKJG8X5iBggoB)$JGuwGTF+?l&OSyE|kZa%fOv(xYUa<b?d zC`M_Mp+mnnqh^&;adOgcO&8ZA!bhp;qDn#EHvP<AT$Vun9g4ZauVkej^L;5*cK6}l z+<ZI)Yez0ZQn`Vj?f$!tl1!U-&fcA?3<KjPu7r^MU~$lN*<gr?hJU%syvv)-WZ0|h zP&__<v}X<yR^4{NkCH?C+P*cLtjB3ikp%^$%qfe1V8M%uX8r9m88%?bR&-yb8-kVm zxUV~=vj2Eox{A0x*4=!<gD^gs&8s5ddA)mWyHtBt!saO%I^@>r`}{X`e}CUV(EZ%U za<&kpu{WvQGg7uep}`x`>lM!WOX4}d;Y1RNJyd9z*HHAnKo1k{sj(e}-|datipOq^ z1&_!7b><}ZJ5^cO^Y4+@T(eW#l||*!L34krdyk#D0pm+psvmGTG-kK;71JangZgPN z{<`}{MJx5Mns%6Q8#oe*027QX_);t(#W$a~mq+^hi%rKgGgM(b-qX!Ke*w`SDVN%R z)JB5;Pr3zYdpbCE7T=B8Gki&rs_`WhKdM|RY7&c@9~5(fX+0cgBvCLv@Ub<uw0x<` zsjzFK;?i}FN14pxnPswkZ8RUxJGdVS^3kA=x<A(z7_?xOuo$kzr21iZ&4t*^&XMzF z^8C;@hc$gDm)u2J^Rs^gAFq2B)*o!&udUD3rMTZMf@~R$dO5alzsBS8=vha4wP2Y7 zjJ)xGysH`=j__UA$%1n&zJJxY@{}9vl8CV8u&P94ym|CHoGG}%V{<IO3JJEr&HqfL zZjk{$DKY9M@}E*x4Z8vzE!hz<AuxKmALU^E;pPYz8f(hj<2Wo~1d})h++vidILP=O z1qA&z)$QC^bCBdyMC{ZNYWKMML!M>e``sy-j<+X0H94++B|0JVpU@gYCIHEvUoBHm zwsCyBx{@riS|Kv4Vx8$VTt2<;o1y{~BEtCYk6DX1x%%JgnEg6$M(a^(Dp2AG9W*oZ zl^es!qsJ8H`d;lnHuA9#zHX(yM{}mD`Wo7oB~yMEM50<3S?+vxzTPD|LaG4K98{L# z6bC28ntpN9^n?<nO|Rn{nPM_(<|Yr;oyk}NdbntT=cntrA?9LOAR09uGrS@!0+2YK zs;rp)=+-wT6RlQk%%&|$!JzW*vqeEj$GemaNPs`GQdog>#!f({&`6GVQU|%*ER3UK zL-qBs6W88kNr}GN^JDjuI`50;{B3C9{!Y;-8lrPlfbpz0eN2S*DA73L>%b(CgUqHM zcO_XzYoCLWu?l2ueRQf(rtGUvFNU;8C=>VL5h*%`K)YrU!zK)5G=-}pT^BgYW$(9j zHyq%-aFIlN0v`EOH1OU;hI1)!hi>P-&oNwEMj9<O^P6J)VD-A4!3E~g^?sOeY>ct| zxNwqfEt2myAFC={cO0uRjYxdHq!=qspYr5OTLB4YXXQg;cD7&y`(kWfI#O05A9@mC zzK>lf;D?WohqgGJ6CF$t$PD~hea>)r8HkhlDn=`d=-hV?j>6A)qJ21oMZ|~^^?!T8 zOofrk*GWlDJu_-^Sf5`IEM-05#y3aSf=#qued#|DMka@A3$hJ3I4Upy`YxxiR*pop zsff|3pH08ENe-;qbM?p0(v^Y6YV8I^hlouZ8nS#B;%7K^XrWLyd&nDo<->=r+eXzk zcmo*TP|ajFCdp+DnSTAzO1)u<@HCFOa}j*xxdfeFy5Nxb{|La#Ozf{fLL!Jozva*E zqoq4!jbUerlMc{#VwNqTT+s{(O=R(|@LB>D?&?Pjap!BHV0pg5G&bbd=rmctL}Kwe z1rTAe6owf7FA++m*=b#3`{BsQ@wt<ubf4j3&M-fG?GimA#BuqEj6xW3gIG|JZKh<h zmi$FP*tZU_CH%~2IX6Guz2~G@P;H@?fWXl!r}WwdF!nLZFmGcWu_Xu)r1vW=M|dBJ zr^wgVst$-<J&9=u7bkDW_9z}j`+$Z?$(~OVG2sCctGBZ2K6CQ<N#u<n1bq%KEGD?6 zsjyc`iIdUOT%+I7XyEDj*!rYR<v?(G_&XypaW+<D8jJ(@+Vf6VMlNO1Glki`fp$sP zj_4P9f)8!>L^@jNyT%I)x!?YRKwdM$RV1VFk<H%6Sw)f|{`1KB?rwUya1m<zc~wdZ zO;muhFEk4r<5gCCfXMBW!;H&Dw<r0nV}2!rG#W7n7$b82%`1_9eD5v(AMmL`7VFT_ zX)eRdu3BJrvt}Jd=oN*ktR5-M9Vxhtn^c&PoW9_<xkiW&p!|_43FD8rD;Bgw3wbfR zB3L9Ee`R`0@NX`_SjZn<Pm4TgW}?XR#$TeTSE|&|IIVLXJ}Tgz(2COgQ6x35sbRa{ z2GpO?f|qGc39U0B`Wc_}@5QFkAW^m~@q6#W$q5X5<3~<5wtt8&9<3J4i;i3oBQ9$& zrZk6M4fq5M3etiGeiPboJDMx8Pw086R}b(&1yYv#e^-{g=+Ljt)09x5V#nE=L@1+* zm|*8wV>sO?q=Om0c2B5awusqFm~jgx&^4OrB*^OYB#0yPmS%+6`220q(XO#(5u44$ zUkRJd5FKP;<OFMFpsl@p<)Q;|b}VZxVqTHa5Zi*|D6yr+f9(YZ{vY20Y8HDfV<w9z 
z?<w1r?6sGw9h!;oGHg7Be5Uh+VNEzaJJkcMP&H?E--W+2kjSB$%mu)5-Fy<kbW2=W z-U<q{oId<r8dMd44!30PUy7Hp!M}g?VLdbDwijSrts5~>`)z%;NQUqeJ}Sn}08w_W zwZ`%J9)f6jWzS^E=q=_yz){e1oP@z-pP3IYYqFL8#NWgFjfOO#isk(0M`S2uqw7ir zSaNS8Kfz3YJ+yba^@~;qBM@AE%NZs*K^Cy8wi-OHw5w%JR0y;t2Xc%A2_<m*JX~g5 zMX#Z5X!m&Ct&zyYi)b?3TfKn4i<O6AISNL}C0etDAtq$R<7NI?$;%k^4%`nA4WgpV zW{E+j>`dD~+30=*%qHKq$wU9CE`WyJ2fdgUZ+fZqSxH4r^h5KAoAVD0NPZD%Qlr## zThY(8wPBPuI~%wq<N`E_9wI2T`fGCEH;nzDA6{mP>cb59poAcV@dK(WPWDURSMG6o zQ0XHPxtOVlIyfp+kKhXmimns^XHma_8{yevwM{b#eg@m8_<=NQK}L9_#xSI5E5Fyf zrL_vhMeO1rZAS@>e~VZn?-l3av9Vwv2#IFOw%yOtYN5ALtQvhyBqjvy7a2nxhAw|D zIk>3Qs^U?{w`OYBYa%N_T(smZyym2sevg0^fBIQ0m%OS*{L0|CmMNcTP5!?uF|9xy z2e8sy%bs#?91=V%gyjmlJCx#79k(qHW~0a9Wd;r$-QKzu+3|?9HFBf!pMHysXzXyP z<<`L=&8CAdeM}hjK|^C;VaYR6B5=jv_lhb0RC3<+^^9S#aP({=4|etup=A+Rg^KX& zA{j4p-_!#t@=-RCKCZK#>6(_!8SxEN0tLOOO-E0^J@`(*kl9grt(OAeWP9MC*)#iT zWgdv;o8JVL`1q`EdnkPp*w4>4>0kunJr8c^_;FRa{}hGKxUu)wu$g)>f?2PXNvd1u zYsW*tdXTclv=3qeOs#sKiaIze`hT`SDW8-cfhTSTQoSI<WjP*YFA1tPrRwA3K|Wzs zRr0Tx(pEPXki}MiQ>JkBr`~V1yt&ihb-#trNQd9i!iir1p#CfQnCJ;WSMC}%zQc08 zi8Hs(#KAx{jY3+r*nZ>05|KD^pkmaLvQpheDrDO5|DBfTc(75nFY1H1tAgalKw0ey zB<!Hgor!_pUL`=F=YS1%&zbU9?^{>uv@6BeH~FUM0I&4uAQBa@lR`w8VcEZXc$n_- zHR%Fm`0FLC>PH#GrW;`17hc%l5VjNTnR`T`f3BK>KMvxc=a~F{359c$2+%+)WRB+g zOHW1lvOAW{Di>(|0W7h26$<;UsFcfMp65v+@#sBAQQ<j~od@lA>B%G-{AUvS<Z%(j z_wC<96L+DJv$MAQQ@LI3S@sr~SNYuLkJo#wNvN_IhsBdI3je#QRM4eLHL$U`NC1%8 z+{^_@TV+UBv|A&qF^A(RL_s@k8O49cZQ8!20OsF488}>cbXMJdoy*mHLbK7M4cD5} zg>Y0Z26duo%Q9jDSxt@gyU**IJoJrGsHM=sL}pXSYWDfOImcO}zJthrh>Ztu!tmii z2No4jtv1T`Y?*_>a5o1I>zyqh;r8Jp`DTh9ZR-qst}85C$s(3&Rnz`K>9dcJ(XO-T zA?RS%x#GTK7%&eQ-+c?s8j^B64TXPw?$e?d({}lKvGKYWL(I9TzE^=2qpGyQEqt3u zDHWyY`hPx)7@rly=6^s(pG^a;HX8W(^}+@n>W4y^&`?D*Ha1dNM|UdDkTlQL@j0Ra zk~e|A$yvHyl-zfR(q%Vhw*r36V@JPR8R@Mr#M4hM<H_SU(_;Ody8J1#v-D?(flJNc z+SZa8*TSjNG72@g4|=}A>k)pt2MI+;MAH;}?pM#A+eA_pV{v5RQW4`*xq__Vm`OO? 
zfgNMM!?LS6MTE&fbnU%km;YTZUM3Hg_wt!vFOd5S`$XhYKi=crpO6sujp%rP|E77J zm)iM`mfnG(OG#7GKCwmjfICwiw{YljfG>pw+!LEo4HjiNaKn=r6Nk<qU@g1BT@o+T z2>6jgmxQ#d#7Gv26@LnWQw-a{BOgY^Cd0`wnf7M!$572X15Jq(_T>{`tXZ2jAz89U zem`qSr1fI`ybK?DMDz0{X;2S6J))8QN@F{nz*8!8K|!>YW%iLVLxu!?s@%&*%e3|0 z)4Iv*x*hf>z4Q}7P`~p6;&D)ZJ|%}jQzJ``p!eVO)u?euL`O!%=d@*tlb+nq$A<G; zf&H1*iCgy9)1iF6;?4hv3@_6Jn1v(mJp;Qtn`ZC`P7<7gLmBY6T(~0`Z=$PusYXI_ zvd|_tawiUK+bKtZmXLSxL_tA`Vj=d{@TWZsUr&&th5O0s$p1m`MnWP@<NcBgCuOIf z+HICh+0_e)NKrIA$mI5({XRT&vOjje4?-a@Xk{|1M{HyTw;8_1$H2U{hQ!$D7U-{k z`aP6p73ObD>&nCNafdN2@+dOiJ2|`=9fl~Hw3SE1lpb{M&PbOoQ}UJJ=OSj~e8ASj zaggg4lR0}lb+jiWX4H*TsFq^Q=doY4b))7E6{%C(60xy87MBa;;kq}oX5m$PZYmk& zBK!_3J5jbeQlhR7jDTgCiG<U_ID|zA@K!X&GBB35Q<C-2!^YY|(<ay>YG9f>=V-KB zX>7%ZflDd&GB$DvbHM7f{`);{fn^cJEpd!9{gBj9@lRs}L3wQH#TjJ+qF1mTj_vx< z&7t!9RB7LR$-5{TulBTSpFCtQ69;})msPE(n!pvqL;9-*MhL##A)o#L`6nXivt0ep zSK-POA<01(AtD7+7rNZ&H(<v3w`oem-}1XNp5rC1ya{lNY%KEOVWCRTlp=nvo?e}W zNPHI9<uDp&LqdP9^!Z#bjbne%Oe{@1t~OgEn%RkDC=)-zbbh)mxNsw!!6**;M2Qd* ztg2=FvuAvA<ex6Acyk^T#8?(rz{#?g)BFO>8#LiX8_g235-%eqR184?cOZv^#*yhd zrd!QMWBJpsT}@geOz2zNP$+Dgim%u+j>|M)W`pvUXOp$&12!xW?X+Pp5$Q8DGX9-u zwhG~z5DtKM{`nkkbw0EW-AYMxZSF#F_Fcn~<=Iv8){I=Gr?vcWo0vZ<*j?=t18B$D z!~9E6##`FZ?tz`bc-k6~8EQY1(#4i6TZXBU<4p9=Zr+*ab16U<td)w}w@m{{=guk; z&VgK@07d(Xsi&!ox_Scy8?g}zUkaw_aWYD<GGs~>uMCWnVrK)OVbf-7MRDa)L}l~W zm~YA6V4a{5t(0%t=qKHyzfS!>z1<S8K-_Aa6){Umz$cF51dS<MB{L_EV1QKzV*T|A zB=+qF->|n3C&os{EgEVZfNJ%$pY8Y+q7(9@`r)-h1Ds#T-bAcZ8<$?|Vb#Dc4M~no zJ@PAm$F|rRLJtzfI-vEHZbm^DE6BQmeCRcY_mQ{ny7QJP$PUx3y5W*UIa+WZT<!aR zi%MT8ZKe|=1XZ34-Ig(xvg`<D{@6pRT-^;Kcaz+VG&K^)O$G~yUiDW{2mWzmxSV;k zRT<<;?!EEt_f0M;EZr$B(8Rtpv|(iNhxtg!=1H+Xo$sSwH0PnFHhXgG%L=noZm{Lx zMC!{c0LANMsH~5p7^Qgg$m)FnL#FlR{3=N?X)HSgxXTTsQj|@B5@+6mO@I<dL88O< z)QqbBsZn5KTk@@NbO+`0fGKi>VfwXQhpcm;SV>BHL_f%onGlN#ZkHG*i9hn^k$>Cf z-}5i$iX5DlmQ!+zCN-Nu@Vhxp2NS1{xfjoW+rCir4|;T`Oi+q*eHZ7lQpOI3OTaTu zIvQs^;SAp)j0zc>noa}l(6lh2kn=j|<7M9_^a{frsl}F7S63geyKf42J8ciLVvq}Y z6nq+FpG5h~(%a*HVR+`ZT(KJAW|pWZ&bA3PKLBJ4>b1N0I5tW}FIO<_^dX{zT?Sby znCR&q?&&=f$R`Tllep-sljyYmCV}Z))E_)}E}GcCKNNNC!C6gbl$~_V4`Q`c)AbQr zZ-VBkodTxuQzq~Cfjd85y}V(wqdmF0KHU>=iIyaQTS~o`6Zh*jjGgim60YnVJwSsQ z$GGD*HTvqe*T;?KHumNv?m=>buQRs1*+0{11M`zJL}e3y3*npt9e3xF6kWePLlF$` z?A2LF)l+7h^efL1V~YXAxs-CO%RizV0^o`p|EiLl=se-j?Fv8LHZv6tiQwAO$R-=c zkqLgCbZBcznKz9>n;_$~n2@oY&g;=++<veKiQMO$7UGQWU8o>aUXKpIg6A#)CmBy6 z_QB%P{anf(PL*-Vl!Czl<?pDHL{FkqWg$*@B&FI#VL-BMB5FYWF3}hiO#k4*<{oix zldJgHb!zlKmMkb|+0u+Swc^vT1KG&PdWWumqp+OGVNgOt>1(X@<$x)(T!dlsIk@s6 zR!B6bltJh2Oc8_}>dG^D;3gzI|HVqJV6I|!Y^~<{vYg#(03iKyI*XP;I*Vs+V1MxC zmu62rnTs#vH580eCtWCY1}G6lOH32V`npNhEN=g><MZ%KQ8FQH3(8~%HL=%Opo|WO zR)j*2phwPVqYn<&gN%p}hm1Koi#p1){ZkOzQHe}E`H8czWAF;s;z2*&aH&%E+b72D z4;J9fUSGt+fr7Fw7U)=U4thW<)C3f+I6EDlZgX+Kejb27uUIj8wmx|m7{sI+@KrXr z^7^_G583Q8wd3mQV)mecf)2}TA*`JgmlW}RBK2V3uTr)hAs74L(XEd*I8ow9hwvnG z1zuzfG`8Cr)vEVM`Obzlr}|ef%pY^UiirHTPhSu*i%195L4$njCBu&D>w9gIVdYDo zYt(HS7<?F<bs(`EmHtkbG4!Mw?orbkg<BC)O|r;$X0B+jpcWTvxl;aFUnta_*&9?` z88P<GJtUhnky1ATN1X-O-^l79;;SZtrjLczo4@1}XoSVw`cN@23<nF-B~~5qV(;16 zT(<fH=l^A3qndja^;wqOHAn8)4kwB-P3=oFX_5uVnc(UY>d~kNPG!yHaC+_JRAm*9 zWwK|txL1+gPP^1J9tr>^A9@G9#OI#fpxQ~nQH&G9QtZ?-9=*wc?H3$*l6D|EtS{Z4 z9Mlh4TGIuT+Kj2aTjG4wVoW`v9M;NRgtu!sdA(AJY=$7}jL+B|)ywU(btK2sGI90y zkKtR)UT4FBrfsucHgd(jXNN5hku;h6()Z9zF=Y+1wE0A_HvE8QrYGi-rYBEb;GZ3j zK`}c>j)sL(b{~@ewly_1-6$RX>2aA+kJ2G1H!j9zx>&8_-~V1$nd)T7*{{Bz&Fi>9 zCCxdss>rE8n@r#)^pNn>`W6K+^=<CTXn$_J!#~@2`bJ|FU3vzJ3Tb%w8MV7BnyiUA zclv_6|B1sIV}OO9l^$p60<-zph{<{j`=E;p`v8gfeu7MQ1y3NNII#6r_wUg6?-rmS z<Liy|&)5TQzOOBtsCSSBjuhT03{(LR;^$r#u)#~IBnM_~z$WMabiw>6a2^3r?+pqB 
zm=xYOna={V7RaGGD0Nb>{0lP#zn;>m<CP%vX%+>ZE<!T;ETzLkMPf!Xq3Z7L0X|XJ z_(9@2IyyV@^75aSWck{N`+eJ)QzU-FHi~sGv-ig&jgi!7XvT=b-F=j^f=!YdNTFiL zQjgc(f;v+hYy>5;ksBFzt*AC#$%qjuky6)a!_97vP*;A{PjvDVz4+xHMT5>kJtY9h zrlQoJ$9*JMb(=bRQBTAg>V|fx)!l0|Htg~G6?&qhqy`BKcYqmZ;-#&dC7-64$L#bE zMFK*vxDdt`q+b&pW?#X~1?FlpGe7gwKw>fFKaBtU)}$6s0Qb-B@$>ULs>$`M&Y(B% zFzQCyB{(*%oEI&bS~)vkDIBWZ_i^LY4#>`oJ|eDy%!y)!F@ed&D-(2?5+lK%`=yd< zo`-sCIN;|Ejxzxz%3L!3;)!gu6f~}c>rDkI`(J|KYQRyLUu9u8OCSuQAjWRpd;+lH z3MpMQ8GeHTTLY$eR%R7g!O2U%hXc~|xhRlPy`@wpYfv@k0Qr?gGOuY9@uF`Jc(n(x zQOlyByDE4t(H5x0JVQYR4pNFvOk)$FSXmB9vE9hiQ*nq<Gn5zylua{7BQ=D|zrVd| zU<+4OCQ{0BltYG+qGFChqeM$29{O|NUpi!9wz6u!H*L^%ljXFb%l2R>BZSl;ByHdg zB(X(U9(W_k)%!^z)+k{G=&IrM1Z^YoDg?^+4vy=G&bo=nu+y0>p7i8uDgu}&_SR2s zFDuc&3;~z;fwtR}vaUT;lnBorDq6rCJxjUobgIBKuF*KY6Rb%#EanJ{gIH_px}!wJ zG=_JABu|BKuxalLYHbOeA}n=3tebD756hyoA4H7Z|A{fffP-83><`TPyz<v^fKsK9 z&CYPFFBPxof-Dx0_}g$J4-3or7o6j_tWJ%t>`)kD%J?}O;eoA$goIgA9=nHT>A^c9 zXxz!5pdf2J?Cx~`$jEir31Fu7%B#_!<P7DE*2^DzqyaZVG9cC1Q02B{REYB<UFNh6 zX`?NT#%S1&&CyA858X{kIZ4a6{#8|M%iKa@SamU$_ByVQBX#6izXF&aI(|GE*XkdA zqIA^KWD`Q!FOUqc-62^F1_6(P`wb+<D^=ClAQZj)4@Ej0D_|;uV>9>oBZ$;C%F{p& z>U)!{hTZLj(m_OHFx;8k7~y6t$f%o`o%j)Z$?cc$c=Wf5cIelkhTf91-yL|ea{P6u zw>x1by)Sy&5&=q~ltvE#RQ6=BsoJm(!`>EmbB{-ztlu<AW2uVX0g^5?4O7cqY<kjT zPM3i{R}%4r#&h7gVqnKE!PxA$aZq+r7!j65v2m1@`oW~Qz^F`k?8u$`K7E-6bEuUe zNb2TKBDD?u^4+Y=aZX{O0F5i8uJ_Ry-S+TR1mhOkJwVEs&<KLbY=r;8q!@=iWjB2@ z)0~vWckv+9J}iEXNf?E6_rn>Jz>i}K%?-%g!nmP$UU593B9N-X-sf8V3RR%K(KqAR z1D(nAHF&WUj*%$0?=&~BY0RQKRdz$_7-y1LU5jB7iju*%H5enw5Jd?Qk`J~lZw(k_ z$qH7@n}fp(q9jjHm+`H;60nJpy#g^z<s>Q3Q(5mdHC)$%L}1+u7L2R<j;_aL)AJ|C z)ZRY|>b=@pls<h7mo!Mrs+>|${<Nf=z29&HyHE3pv4_k@_Tv8Ip`qoCIswH$@<^fo zBhMmLQK{!Mnz|EoY}}}W&aQM$k^9-IR<$e^Ad_vZ`eV|HBCdJsuVe{UxZt&BYfgu? zR(rwv`h9QE;1`6T!ootG>d<5>ixTi-Za9pkTuJPhMtXMuyS!((Dj_ONPu!V}DFV z2K40<Fh(gHydU+Peq+1aQe>GkW%Q3iGwEGswqOnb>`+X^Lmo#o7$&}^(l9#Od!a5= zbRpmsrZDqejLX&kWbv2>XL8Tl`uNIzO-O$KoJ`Q+&u_f}7dCXXqNSMu!abd&mD`zr zww=8L%#)c0I#=IYCa`Y-h=+j(b%};XM*7a>IM!p!^6&&UYin8Q&Nq6V>mbRdH%!oA z3lG<LI_o>!l8|nbfe9KCdoWnN(g)*L89FXp6-@*h7#y<!yA4!QE|y`$M>e}v;|O1( z&NQKXgK#6A5kkh=AXBHxg3E{!=HfT8bw;C^bNOLH_td@J)2sH6{Ax53NuI$3?>UlO zHy2owtk|=^F=q8QD^qp9+?2|U<=kJ7m+)QCUSfE}9kZia<u%-d(?bAA@IMktQMAVt zDc1SRfJ5s|tw-SO@v7VB_c{m3U1?RyCQxP8XHA5Jz+C{7%Rh$Z)yfrMLQ!w{zrS)H zOyx3a0QFrZ0~o^A5FsUBxw`LSQ9mtElzM|>(k4VpN*fo2Igdyv7P$azh=8cE8Tk)Z zzQVls0RWR29GV*$WNG^?5zwHguYSfIj)!~`uODht$yiT>=EuMf4O*$u<%NbBJ}TFw zQvA-zuTyQwa0gUbOog6Mg`QSF@us^H#6<){!pKKFc~zx6a6BTUgcC(C_5Pof8vc_K zz^8F|Fo`80lL8M}0*FhSdA8b;ebEBOc`Ke7z}0KDTd9?8cUWN?lX{<HXm)+#ZW-=S znJ4+UY&9mO`$R2I76%Q|+)1B035BI+F%=+bmWcA``fvT58Fc}lfjRSf^(4<E!u295 zEg7MMy$`d2y($9p`zEGU3~o>DcNe227NRark7FNyH9D-nrIN71o?g@jodiUFe11iV z+>hk3UqB`0@kp6qZFA5P3=M5SeE)K*`-XE{IBh0W4*?z5e-0&kHkfA!qWX6Ey<VU0 zFvY=|O@Rc`n=M0WxxKQbg!H%No$7~0GVy+Q3l$oh-wbbB6$XjGfhTHRZbuy9pX^1E z&Y|eL5hkq0@OI7l4hKj_5Cyc(9j>OAln(@>LfGX~6iFlAE*2I77Ng(Txl;|+Y>jb< z0L|(0TbPW2-6~0inatuWaLyVn2?%{bucPA)Hu?q4yw9(v3l+z5Qj9C66tvl&Gr-s7 zv^6}UvHo?P9<xPYzW{jg`;btw(p*0aHDm?%1a#9!;Z|VNQNh3P%8Yhw*JW-&m%#al z)ik7xo$n%tMYZy~-MNT|p5D5ZHK~F+bVE+DU^<8Y26L&_pvp~-D4Yoaji~>xv%fki z{qC$?5%9(>o5y@%;hFfdyuDIrrottR&V>@6F@cZ~XP91+W9Zq9lCerV(Mg%2aFWqV z1Uvg#P|VIO!Za$W!d|O|mC@D$K3*Q+qVr{c;vM(Lc)owFUm(1Q3|5VhWZ_t0E<P|T zU5CqcB11r9Z@Sg0V8gbJqBwPrb=LPBg+q)BsS(&@UHBg{5@UAi%+XRmI$M%Is<61p zB~Y%2ar78HNFK@-SOy99t9YW2>FLI8cDNoGriV0)8V|=1+g5y}a8%46&?_aP&m3uL znnt?IfDOr-<@2*Yynjp<14wnM*M~n|;3My(&}9}-Vi-{8X+SWyHb-jH`zA!R%|hbv z!}pr-?Acd2n&7og&E~h3;d$$(9sZl2t8`c=BV%+VZM0S60)O57i#rKfg^S9ojZWoD zXb4<O_Ihvrr?ft#u(@a9*bPT<znGDv3F&}ySZ$(s9bi*MD-#Yj02(=+#o(z*AqZ}4 
z|LN|8o0yo`)m*N$uWII_Vg}nmRbB5>m%?DiZT;7?)oEwp<chH7ruur?3E8oih3#b- zX6NBfYXdyfknV^t@dr6mZT~|EM6tpA-TSwBMyzh;^9m=qaSZQCUhiu%#jubTv5FsG zh7-2%^*;zMf`X3H^d8#qkB*LMBlbn4xO9$lWH>zqiz*vA;zi#FqFF2FadCg;ZEv5t zYD?UEvYCskTNyf7$XU!w{fkOQz)9Ay*oaMyUXYm7le3u#2h06~++bmnS@r(3UA?S@ zpWhqIO45-zN)NNIr`rMj?Z1)<WW4n14lci#K<bZnp^uuq4>F9%(sHyUc^4(+Q&uym zyQs|0cHB6--#;&(3~X*bps8pAV>FV$Zx2r4p?s+L$OpIA;#;UWH~3K@m{9JdWcG_& z=bXeSr$EQAfRz1x<0n@%r)vF<Yv<3Pp?hG6!{h#?NekTMq7LgsZiEH7Bv(_44JT_b zNAb^-r^h3B_+wi+Ih*`Tr%#OU-fW?bC}-l0G?_}I>xlJi<eah=8V)+ZLH7@1<KwoW zr3Xt@G_nbH+W8ZZhADn@s!Wm68Np;B^rO|5layaopZ(RDO?-M?AeGJNFUT=!NsnJm z0gPu4u@-7LMz^^7)Z$`3Qg*m0?TplB#Q>8W92(9Kz<;rHfh=DcuW{lm#;5orhr2$t zSqt|ddJDp-e9=wEl}2-&1w(prF2vc}r_d|-OuaUTS7N|x>Ye>`=UuM*LMT1`vAh)L z3Wg|?xnG|8{=4oML-jvbmhtB+MQ;cd*YEufVxrLz%k<5x$Ed4;KsAn)3z8tdRsW{{ zi@?kPV`$Kc%dtDipg}mtG>9?;a(Z3wjy?Xufh(sH>+B}*<tX_De(FFG#s>%got>S< zoC%5?DOJvW8a|pYQ>dz}boi`5RK=T;mX@~jn-z(}{iEzQV|#3p;%R5&&v_PJ1cg#E zV1^=xzv2#}I)_ngv>tW0yH}9lke8w(uwDcxTZ+ECRHK(3cmJ%0BgBLGl1TnACzeE+ z&5laIG&*2;3G?iKcjZ;4d<-M|2PJ8uC}G028LAvlGf(8~%JRM2qv1)g_$Q&bI--np zBQwnbs(0D}vUeIUF48_`ofHw3v{^Pg(-@cNe|A|~;h$7@#f#1AlNBM@fVM%>R<ig8 zY>LD4LX(BUm`STGKrHznla`WF-R<+}0%{4lT15Owf>$;_MiBf8b4xHkCoR8U-QJ$s z9IVLRZgjeBtyx3S6;GQ!u`4EDnvEFPdMAw3j!!FZhPoU<DQd2fEgZEUMBivJnKdFC zqc`s@5(zyZWLRfBS4c)1=_rn~q}uty7Y$AV=TQi?Pv!f&J~(}$Ro_!cbf6$1ek9EC zW+74--1x+#Hw%^?l-o6fFBIeRe`Ol8`|y(7!x<P?sJK)w0g(Nagsc|<2LmmIhtLB& zoj{^<jYLA?tw!M>3JM7L*2vn(i3DhjvtwrxPcuOlBvch!L`sCigOMAR=4lK<Jn0qn z>w7D6&?*x&SpOBf$3MMIx$eI-XK=UJ)}8+gmFO@&4*=m@3bOyOIy%dFoT*N+g;9v6 z#=9vxTmrK(ph+&7Hu{u?L#cRXoz6?03@(UP0;R{YMz;k3DP9qNt{xCc2aUv$rC%bD zvhP)tjpm%~;DYX~<<V(W($msJ%pIi?9}RK^Ugf!$Rbr(|Q6porI$}w38GmhYPEvKd z?6Fql-7}YzQam=`Qa?-B7n5FM{YWYcGOjcp;7>PlUpFS@F|wu<_wji_@CvY&F05rC z?`+NHoS+ot{rps{!)Yxb^+!J9hXE1k7^%ND94CG;vu^Rz+r)zAk!11D1Fbq1dojG` zx~^w&n?e#r;!xwvQ8(z+zvt_EsDKB&MdQBp-Lti}aMjyoOpn<spD;uWw8W!j?Vf<Z zH@*PKtJAfYH&yq=It28Qij|gLh=dYj*e0=}kPb^DW5-0~6o=+ibNbsSJaY%hwM9&_ z!z&QUX&b$6i%m-ixkP7_NVw2oaro$mo$ZD+Ar9{VEG#%5GAZy=9tPT}UgdzFK+`Bt zO>>_ghdNyXIAsQD?fl}n(aldsO>Kt#_5qe+g*K(P7TPCNSMIaw?C%>EL-%+x!9_)- zLz8M7G+~c6L}naZT)CJurPDC<IgiRRFyIPHeJ`C|<eDJf^v`t8e3{m!|M`~qky*Dt z`$iwU+Dzd2c>{QbJ7UTic558{tixD3c}v8Z4C#;>m;ig!?s=t_7dGnVQ*GEU=6{Qm z=`s$`e`!mzaWZiSjsv*kCzh9&+oHL!L|dT<&AWbqE5Ci6$JaQF9o1_ZA5EnGNN8QZ zS!%V`dUkN?5n-75;&DjBakOn!oYay;#i0eVU>1y?P#JjsCg^K)@^d<IP7)v2&trKF z5JeL;fxlH?lSG+S3CqgDH5qFawL*R6x@))Ee561Ux`NJvDLILsK^!3wcX^dq3W)0X zP~ozuGv&ikr)K$R(Eq{_?dqzvv@GCW%LQNYmlYd+J(BJaOJ%wT{9}Wn8U8M?uKr@$ z)JB{4QJ}nHL#8jj)a~UpK+uU>s&vQf4hG?=s0ku(2bD_m#H>*vr>*`30Q!%3=77<K zg&2civrpnwp@z6{!D-*a(4*<*jQf;kD>TaRAF8)GNJ$)XL9`TF4+tLc3ww)vVWXOO zndAZy6FQVQ-38z@70q1$Jp;or|I6E_RGL2!^UocmzI>Nj=(;b>{<&IPP34Re1r=4{ zFazTRVs^FwFP62VF<G24I7JzVTZx`(5vDLadn3CfuAOe4b_TbO{_i8?+3&Zvzk}^F z(VhYqgGQ?*mb6r827LaWuKZe{!$Tnn)_u{nh)^6EIfh5OmBCDWv}|BfUq1MWH|DnN z#&1(=XF6P~ra6zVS>TEXm1Qb)FbqKdyM||CVp1i?6cTAFZkfoBEA=z9j*5d!9z2ED z$J^+tDXw^9*z40F9AePT55So1Xj=HT90XYY<>5BPlqjdBm=^Z=8O?4Q6L>7W+2hSS zkVLot9AT;)ECyB^C5w<~q_dC+MySTPaX%?+PGuag9?6OKnoz>3kV&+Ozr6h&YoCcU zD9%{mutC<&Y}6|xYt0N6kXCE%qxQgtgL0Gv3m|@CHU1_l5T*Ee>kX(!5E;R{1*svU z52J%zq%uMi$zw%nRpsfuVBdogG2>VljPu6<Kp;3XIBH`1Tei<{^9qEZS**-O&ZQB{ z;Gg%UI8NB`v_c;*vKe<TbD2sl3gf<2Q06coLq4~wOfFI?(xj3va!SGvi}|B#WpmQ} z=yE&0hnWx(p27VwsT|tf0X9MXu*kg2$fwkLBVJsVHaNQQ_B<<|D;1&C#LCR;SCC_~ zG&iTw_`dNTpe&SN9HDqH@J}Ek8vu=43$vTCe9duy#*vHm<%@d(!k-ZWwnRv4p;C5M zLwXbTD86QCSVTIuK_n>goi&u?)RlN-Cmqrl&uVNsFfg~rxo|8+MV>boESfyZ#J<Dr z>f^)1kizwVG`#OnQBgttlk+^~hX&CDD@(fvZp_$?B<8)GoLtebMd|O;@e|1|r^^jh zOe>`!3hBj(y}fg2i)zX(DVqgFLTW@bnef{4Dr(Ac#{znZ4hMG`-HflWtaXlbHw)QP 
z7QO?0H>{(4#iL8&I&&CLvA5KTk2pwdb9+<4|CtnF(0=|{<7mq02S^FbssI6Ilz`Hp z)_W&jS9rfa5Z1Qf%7p93Ua$oE-(MdRvY0e++%SL~@6c}yIn+xy=x{r@bR6UB4;m79 zxS((K&Srl<_P@i@NLBQw`}wVTxTc*7>_Go^T<_$r@8-Pgpbk@2FwDh7VsrLmnBgwB zfjFteEiGj#`EN?&Y0T-{x(SwJJj}5X2_h1BuGoMC$YvrJ$K%9Mn^?n2Rn*_-ad^od zJ>6K}703?_SKtiwnn2Yy%Sm7QYIpB(nMTQB3nfZ35MwJP2kD2Ro&Rpa_dx0q=>3}{ zR&*-xKWZ@m^YB96K+12A$#y#6pMyxDw@PfE8k<#Nv5UFe8<u~?tWQk2uzHw=?F?rG z+WTmh5BOy%NJvN!q<%t!H5f3mH#gB7WLE(zm$@$d!phv&BU-mzzAP^}D9(`D^G!f> z5yyxu6?%FP{igl(+$mO&-XpUWVc;9>C8)U`wsYtrUB>tiZ<dOh@Q|ED<v|z;Go&W6 zDPeD~xFA1dn`Y4M!QR`grpQy}!wX0#8zbaISDO$L_{3zMC-9~6Sim#F$;E#nRx|z) zwsTJO_ByJ-^>?(C1)D*eEdz+XPQSw?uTkqAPFS3D0Bv)!p3Md0Khps53Cz_LS-tfL z3~D}K@F+m9j5m&kjNVE=aU=0YrAVMlB`g|uxmAp5UX9*H#KfRU+4#5?AUuB}?|pys z{<Opmj|N{CXz}O8+IC`D0f}8e8VQ>NvOk!mN=eH-yTpT8nea2<feDRkZRsw?lzEG_ zxhe7nb4LN7gXKQL0Km47L8ab3ijw&s?2%zgDC!S-X$r<TmlNR4F_r$cEiLi_Mh);> z3*mg@VtGxhHF!ki`LyXZ&0#(ZlK2*mdgLYy=aRs&c43+E7qn1Qt31&U*EWC4cnA1T zB!hi#8`^kDXZbO1@uc=$JswVrny0>r2k$5K`|EwJsr!lkJ2p}@F8)99=YmR6ow^9{ z7J4*;v!iLcwJPSk<k?-u3wc_bnbcSOS!7}U9n@e}7f33CMol$>WQ$L2hHEXfLb8tv zySuw{V9@_ob^m}2HP0dA8RTxHm(*c-U3{vdLv5b^{C80ah#FTKmfx_RG0yBbC`0;P zc9R4@o(pgj=7j!!j66@7qFu8+O_`ac>vH6C<@$w}k1tuh`*mPboHONTvlzin7X3h2 zSn6}cs;KI-J6CsZY51Oa^EoOlAoDtm!ifAa`Mb!?Ub>Oti0GRBoQC>a`A{+4KCup7 zQv#r0G3Q`-NUrj^)$esjEoFSGZcJdlujmV{m$xI;zy@wJ%yclNKE>3}rT&&X%O?@3 zr2dQ1q8=eFhnQ5i{{;#ahdCj?BLpaCbq3EcBJLsvmxw`At*uPz5>{fDBcrW{F#o`^ zg<#p1rN}*h2RYK?E+)lYD;E6PkoI_UK%v!Dkgftw3c#iY3wz0~FiHH~&-Rb`jY|#I zw;}<|lO*cEYX*OGET(};j(meOqMC+Er&bk7%cZNbX2?n;=hpe?a5c3=kE3~sd6I<` zmz13G&@4j&F@JOkM|()08BZbkL*AS%#FNKoj#$F@#$GNi(ODR&+3^hV@;23;?b;3X z+nsl$NnY3OGFbQFqD&odY}h^Vl4TB>DvrvaOxHE%r@3&dyw&18F#tET8=P`jnVMdg zgORu&5$I|fAO(M|+f^z`wB{6E8DhXZf4vtp>}x~Os2lt_{lEaV0z^w_o9f)k4@EIr z>q@~LVn8;pPH~7fX;=k$GAb9AxGto@+?Dfd%fdq;C_2al)K~^#&P-IEu($LqdJlya zCJCw%0X`_vS`8%)0A_k+n37Z3?9h<^hNOe?h@uW0wM&frk~as7NRWvytN)N&sZrL) ze;|XG0a{s!;ykOf|4m00{>?O&p0_+$H^zeF)2Au|rGttQh=a2~wuGevNpV7FNwi`y ziLLKzJ=$V*c#D16qX^hYNkm5QYmUbUAAsy0u7)oJrO$4hm7)mWS^P>@y<ME`XpN=$ zv(K9(PU#2>GrIA1$>TmYad3P=`&8%|ky-B_T1sK@$1jp3yi~KMGxKjI=9E=j^|<Hb zv*Tv(E9a}B%*_3^R;KqT=L->pZ&wZf@vUWaVXg77>iAducGE!sdyu)B#ulVBW%uip zT1IwFONQht5z9tWQDR=++g8UxUQ;G<m#)pr@vkaeV?xb@vH=k*xtsmTzTyjIiV{*U zPbpdzGK&hN3{(@7Ign0BOk|#^BslFe`1<S#)k+LT^%YtSD<f1MCi&s(hEN)bpFXb# z{+Hk}ybfH@erV?xf8&-wYUvo`S|pH-t$4t--8d5)yrLAdOV-O5MUOey8z+%kbs~CQ zVxbx*Wqq+!NXlg4^Rti8?}p6{$rArH$Qd6UEt5`PL0;u|GiiQgNm>TfLv1`&%SXsz z6{V;N`pIks*P&7hfm#hYu#5kUu=n5YvfDR!RDw&o{jP6aTY;w^d0vSAAF4|JaR*6l z-p=RTlr55kxAQ8xv4%N11&f+N*0U$Be2|)wALsJSd%0YVdXm(CL4C)nnmFe0J2Ij$ zL7(x{tZ8Gx5wwID*5w|rn*xM4-}g1r&t6ZK@Ex1WWa3id)Iv@_Qb~LQM<|X<(MF<N zxg(N%J7QB>)TfDOtCC@hT=*fCHs||0$3>Iv!!%Ba7Iwf4gSnEx*x1<8uSLK%AE9E& zD-plj;f$PoT`q(e+Lvb;V>U-A+#U)osZz#EvDh2L=D3w=ou+-DH0Oru^SyPhphB{m z<YOpXkx*c7lc&T{zwgu3Cb979?m_HF>U*n*iT5GhE3C|FAF%GP%2^=CbAYl3WlsUl zP!80_!A8c*Lj*KD#CgR0&w2ccmC5ogN;wTANZ-O0#iD#RaMKO-8<HV0S9*mOFP~-* zgWLY><}DuwbKFn9Sgt0?$@w*Pq22ppqvuuay#+b{kxnD{tQEfAxkF~s^m}UQYl;l` znoehbFtn0)wdeI&p^Hv}fHgh=f>`~;F$I!5Dxt%7Zi$*LnvPejq5#0e(b?)1B9zwR z-!n$$(E2Cc92;t>3lF7_&(E2Y*{oGH!HAh?iYdB^Fs^7i=V$@&D%CG79+zW9OR=TC zXGnSJzmUd37Vsk5h>1Tr(~uNtS+07yZH}%(q6tgbtsXLCM|VC4mF+RB*`WfGOQtUr zGYg+QkH*HTn{sGhk%CgCXvy+>AzdW~c7P~;0yKQu4)|c3Yr0Y@j$GI(Lbe$Sy+}pG zW+pEK3mwgsJT=9RSZt(jNX#M^p*WT(X1OnEWMrgd!vS*U_q+yU=8=q`y9uZVW)hRw z)iDo%OK(y-eF(c*qNCb%0UVy-Xk=20K#hc?LXV=!!Pqn<fk=xZ3HCdy2t^3G>ay?P zv;qW4Yy>E6HY`sTYaDaY&|Z#Nesx*E=HQB9D{{i^_nMEByQ}+SBz6!V9Ay|UxyH+r zLYQT{&oDaXS=@}#*{%Q~rYCM{Pn!+y_I$BM&!}WUfaH%Uebgg)WI2|nWVC$cZ*(PL zW+v>6b_{Tv)vW3IpP^{EQyAh|tGVqR3LUA>F+$g)-pGk6e?ivSJQaQx0I3tH>X9GM 
zyxJFn1(LoDQnZ32kfwS+R$gA-`6RkhK9_l{%&E}-9DrC*DYZ5M2qg`KF8*c9q0WLL z6@gAgm9%tpUJdrqHVh$1meI-8(>QljQziCdz3ZO47=4F>YgySlAp3QRA$O0we@H#i zGyiz5v9NHVNT%FbFD9F(%27}@7E>;4v{?##y<e@{k2uNog;Kt7$gJCSu9&8JBNg;p zUCzz|hBn9rgN%c|cWP+3(Q0kyaGk-%jUQwf*X@@UucByTdU9?4t7R+QGlzp2m+om1 zwL2`l9{qZ0@NaoB<YXu(eH1^ITVmPIjtKRKSI=#C;YplCVr=YzVLm@-!kGMDhMd^w zqJ{R@r0=6Y1vzA`(vdb_g&J8q^}<GTq$=)n!NF4KTHRNcgCz>-7>qrHELc_eg&F9n zg&B}60{MTgq8$~-UK*&Ke>mO7iIYCud^X(p+e^iwVH&73gtt6$Otb)tGme&ng2!&T zUY|Sjdci|mJ3|on7IOB}K5jUV27;{o9x;ae_0avXiDuF$bec<2A9=h?bKYlFyqgVS z<O{7`BZE#^R7#Ge=jUi8JRt*zPfE0FEW-8D;IxcEm&6q~xB#07g5mM_t24be7BQ@b zLuy>m>BAc>miwvkd#${>L=ab|Kt;M35=gBG^@7FHmQDPJtf?)F_P0)g>X38wkB@k4 zIX^u+L&C$|L3mB7Z%;dr6IrFcF;G3=9pJj0VkeT&cLP!}?&9vcSzYrW>3mMRV@zX_ zmJ<284xj5{E^(^|ex^2N94U(iN)C<L#Jay7Os`YNg6TWUF0()57`->8G8!ahYt_6c zN?l&99Pl(AdLhmY^8fe>1*2|o<AwJrOwp&gw1ia7&rBC?kiLuV5UbK%I?A~R1!Lna z9|aICVa9ds)qI%>=16D!9QQJhH>8dxU@<6yD|Njx-^j9oViB##9u+AYx2p6P@prop zFnrq2;^>+YOClnoL6b#6RKyveVb_UmOU)l@y4qrwxPs{hr6exA@N1Zk%+hBB*AxaO zNe3<|@`1uRMh>eoDHt(WWp;OPuH%#{;1}f^qQs%zqK>jD_g=1hu@4tqLwH2?&4I$b zl91x<;jCHHsdzi173u(xP@Sp9a1PSTW{o%Wp_r|v8pT5zE6t%l817R`EGcg*`~Sz( zR|druEn(ssV1U7WaCZ$Z!3K9HxDzb6LvVM3I|=R(+}$O(dvN!i_g>ZR@{^*d;c{+w zpU+y$7<4wCNu3rP3gLn`e=$f}PNmYQ*m(bB%LwCrt+}C(1m6b?bPmy*M}Q;E&OwY+ zO*Gj5I3>it@A@3jcU1d-yrV)20E>r2Ep8Ire`*`*e`{N%RLNT1@WzC2teKA4&Z9bv zhS*k<{c1^(EqP=E3@?Ga4aa`DlU5F)2*i>^bYvlvTGhjI3|&y8m%$jQ9Y=q$lM)5C zElmFn0%<3M8`YX0k+Czb%cWZbN;3%_MA3)wXvxZ>z~|=Aywk+o=?OMN{5(AOMBy2* z6)K@5yNtmI<+*q!oRT=FKJUPU yqhhd&(9KY#lLR86Btz$U;(;9bw5V8?PZ#4-X z5&PFeeWYLx{A0E?zJ;3#8ii=5E~Txzm%@e5C5mqCpF0t@c{@h*6C&8FR)<Sny!7?g z;2W1}!_UD!T3Tb^InK}@xvFAj2l1GnB}-%}rb2B$EqK&9Cxr0W7tQN6U#D!i><uTF zs+STX_a1wF6towvK?+;Me+yeGXs(8-u8d_4_EBP*TLOE$`iLuyIIyUwD9q5%F!xs^ z_^#?ouA9X4=7eIQT}UC=eg;)CDKedaCoY!W?dm_}AILc;2~1ijti9bO4o~-|lxvD} zSQAl;(aQlWm|j-7Zxb&#(In94qWWRL2tI6y(IZmz$+JhO{&iPS5>AX7hu+c{uYCVQ zx48ESpUv3sg+W8uQv6CrCOr9PRs~Bt9rt)!S*dEh5?Ap^*U;0~goDpOc<!*QfB&Re zJ8owUu`)Qj+1eBvrHalABIpk2C?cOCzbUvNIzadi=(5#zhE8{gt<T`EJ8S{xM3#Lp z!HY(l-z3y7yBni)rQH@~sCw&Gka6oUl`sazhp#ih8|jgSOMUc<tM+jtQyl>zp+Hzc zdhjt{6Wm>3u3)4!1m5ZUFT8^Qx^W*XP(=$V<kZ7WF+YX-6=+;E?(YiBM@|?S|En># zzh&)dlXX9UYP#{G8-||bR8VftSnn1FjUd7-QV8Z2DnGB&j%i&Da=cIIco`#G-wWf4 zWarDrK9B8|9n+GCM@ik)MG%W_-aMEu%56-Nb4UedpmOL)H-lx^*n8)u!C7Xbv}DM{ zpafa#!zI7x<5GK@A%cA(%QAFl4np^9Rf_X5xg`JSG7|YD)&gA}+}4*=)iYTtwh8L* zZ_m<Gf7)`o53Awlx&fCMf6aZXRLMp_C%ZDyoz(hsHqfkCsyP7t-+LDZwI&09AUI1D zo8=z>v~PgJ^+G$Q^>93e+PtARh~|f{7THp<gNAHMTB@4a#*YA@kn^_>$uDx2#@};I zcc6_A#xsmK=e*d6Z%e{@@B+I2elmwV#Ql%IpD`}!M&06E0LT>aArr9p<N+Xv(287+ zNgGgIkQc-ymb?MtrVx`O=aMawD>I=B0z#J~2g#8G$YoF|;-M&F;3SZlb6i+mJ}#d9 z4xi4~GM3l>d8Kw=3U*0T)o+eZg1>*<q<);<!qA)#X@JxsZ9JjKidwz{TSfU&&Bj2K zYTx<I(Q_zw9g!3U*2x=79A2$KQDA;<7D80EjxuUPca9<Wmo7Z3nSuf5*dGsjFJ-j? 
z8da<uB-I1s9UdwKfy2|1=kx2sq!~bJdCLT9iAH<<DPAx~gg^MveZx7>`1Oxu$;z!D znOk%yp?J^<0ey<X@e$|Rs~F77kIvyw&_Z7zR`1pg&k`D34^_X3Vg%EB2%Tmv?_;0N zIpk$e<HY+EmqjC;oN>b9`S~S3_8*0xU+98dnqd!SpK$p5!=lZ74{R=e0zO=uY_4_O zHLQ}<4n*%@zfdiup2Q;a0%>55&GpN)ffwb<Y)Ic_<!_q|ejtJ}SxN2xYM`rgXSiM% zM#q+LXCYuTbh)(lAYIbu{w(ovcop^JkF<^)PmGvDO>ZH?ivrKp4PPa<ig=)a&b(q# z;S%eW+uG$)7<!{SVv-{(sY0GFxBEJtvUH7`IcOmdtwT$+=#;}f1p_f#Onw?52r3;F zD~D)VZKA--cu8;2)e<r$-U?+Kadu9U5+j;bID(}3WoA3_PYea0O;#Zp3~f+O!EX~V z`Liu!ZQU5j#k<akD<Cx^<KEk~%XVc`w1s^V^WY1?s;Tx$-6W|4x@qjj@UEjuJR+h$ zai@)acud+gs*az<6u<o7_-pdS*gHO<{h2S)6L$$}m9PK(Y&9jp&L3#ldMzcK+EWHS zsqBa<=2PnZa6VUr89{dG<}jQ~Ae;yKSwZ{T2xGf74UXy48mb)$-o1bDMX6I3{7InK zIEmM)k_7BL70YSfjgw^4@+@}7nnBL}0A9lnHDwpoxe|rLo2&an7i=WL0O@%h#_X2` zU<Nf*DIS}XNEINX#5`daq_$xG&L<!`RB(FNg*l_XChO((oNGRs905>sP`@+kRbEH* zb1US$q+Nx7X;1TjI8ObyxGemqHBT5GupbgSKx*T*om{?c`SD_A@&tY2g97!f3PjYD zJesY3Pjv^jN1L@>q2yVGeAQQYcII17bMQekVqfX$n9q+p8g1QiNVZSVo!K=G38O5T z6St+GgqHTH<x19;xFio^1E3;%L%#w|(>J_i5b=aQUT^B@N!B2KZ5b@5rv7m+N%nGF zP8gLxp=<x@QB}s1t>+@CV@upl%y23=9|fi=bnoUNF5@IFGC|F1!?9dTSW((z&Pv=% z*o|()TmsVuLo!@k9^d&g6{A%beS8x7Ols+XoCcBP4k}xFy#Q||ZM@2O62RU}DZ{x8 zhF-b-ssJ?`yW({&gbi5lpOEd@!k+n8_}tD)30FeTA<v@7YP{o@^+XAmWUaF|e@jLN zfXWHA_>*!5lfU=rarT*v>m7`J(^iZmhNf%|sYz4$KV0>iQ19KUYavjC&-*n>0;wip z;7PZhVv21Pc&IRaQz`TIq9pNR!%vUD!!*pmsLA*Fc6k&-)3;}Ji#~QG?awjHl@+Sn zF5Yg>;fTN1MCSV5)0VigA*?44+yM?!hNBY;o$5Wd-g42eKmVw=Bg4R08KwXYk-TOa zIRU@@8=P3%Y||<JoY~sK#zG%L=`)@!?ODIqTJ4=9eYwbM_$n8^svE%^psS>G#e1v> zaWN^3mkb)S+Y{jn)CIrG!Ms>zhw|I=&Eb?<rY~2)HS+f_?3O|7uTRSaKH)zK^>kNr z73EZWwdp?cRN~4P-dL4hDVjqVUtg4ogMx#1oqVE1;f|`W$p68U;=;^94kyun4<`%O ze-5Y1;6Dvf-=dpT@x65^pqLY0gTD_&?v|?)-X)DkmBw$)iOcn7;?<VB$9I+^UkpMn zNZt%PVqlyo>bprblTG_StPJ9T*S=Ufi%q$dWLA~Zv>JzO38N<am4@XXzLN;Strgn@ z1_>ht)1<_}bd2(Lf{rJ{syovd!dxSB+XAB3O<O2>N8i8^JIN;~sZm^)iA_JW1W)O@ zuoeBn!FLvews9_iYN(x$uoM!jcb+Y0<QV>9`%SF#h&Db^RU<J5Hs!KVfSv!nQueP_ zlJmZHh@f3s)hty?9dSC53}gd^nrkU9CKhNRN5q}88J|9=9B2!9ezd_~iE%!V9yGN& z&f?CNO=AWpRbu>#4jRN^6Lf}`Hf;Qi!1=<qixoiPO<gv`Xsm-5U2A7Z<0Iqi2#9!b zI*41TLkfM0iqt13zMvfgeLnC?gG~{w^twBNw`dh*YYj?a)S{WM)O(16xWK>bx49mx z&)XP|IQO(YFqD1$V*vRE-TU<kwC*VwZ3ZvngkV2eoC`psm&#I{Tmhi_o&<>#d9#PR z=~h-e<wbb!o+Dp*WTP9qC?@`BX8Q0Z@MOv-fl-rW>G<}J0D5c3D6qgR=*cdBdD8R$ z@N2abafJs42r2{cEFU+}^Bu0_zk0Cf@E1q8wwled{EJ)E`kk7doa_pAR9NeB0_Nv% zS<gC4Pu}q8=Sp3&2t}IK88}+EPh48&sEG4VypAQ)x0P?*pREpkQ}~8dTD5v<sY`-; zDcp^>96~CfGEGUv7KTpJHXTTJpdD-S-KPMeFZBKr%~$=9$(NPn^s9u7t5o(qUBIwb zUqHO^C+&v6QF74ViM8Qjed0~U8scH$&KMn^m$JFNl3}K&|2<(y5(;xvmLt?JC{k!N z2(`+NQ%G_FD#&jkDq4S+j5&w_<5jp-&s7=xi2+5@^Tw9Tvxw5`K|tG|k%#C6CXWL; zTf=yjS#syIp?*&OFf+O!HQY3YpXHH06}W-(qC50ounCbBQo1X`tyUPG9;X+4V9^*I z#ldL&L-jo_V(7z0ookmJ+0t^J#8Z7aQ8EO@rlBd=-0ypJDRVeK-W4$$+*mt3LTp&j z3exM9<M^YRo!YJ>(cJQA@x#ZdBtH<enUXCQaVoT&4Dj5KudAEe+x;xkq5;QWOWDgN z9tx-#7=E!#x}<8>1AHiD17bGgM4DoMejY6iG`@Vo!yR*-&)N_m1Qy{(jDM?ADMk$F z^RZ`%sx5^Kq$pD^w6?#@yiaubS3YoTbcnn?vXalARPs1I2n5kz14>)m6%sr@w*EU2 z$p${f*D#3-nE5B5h7<;;N`WGm<<0x`;ML(sL2|nmq@a*3oHD`^rrt@UMsv$^oct&9 z8o#$^*K*t&NLrDOnhh0QC2mGp*oz2AZ5~PWjUXZyX1@9>QQD=nU*{dFpX%=5A|2U% zGxRvT*p?8z!Ff3ai!a0$_ZO=%Jb~?RZ0MyfBI`(m6}ol5O5U$-7zQYer?Je9LeUgZ zb$FwuEeT`hU-6)~*Ip|gzGBr%lbhewpEix4H#UnjD=SpSW!k$gTE^Z3xOCYL_tR3w z=tNrJRw=Qn%_3aTsr+8-OE%ile<~H&(^wql&OGz`s5tVu7@*s&mfQvh^xw^_)6js# z2_j@HPPY3pMY@8P{7|u-f4xy%jL2&IDuVa&@EHK9Y2VE6+iH*uwHw-!{6)9NCd~_H zImFMo2nzD=#dASk;1NFFW@lg7{1|2h+`iDi2|4DRpB_QNY$~apEnB^sItdpp;H8S` z1$@8VTJ8Vp@q95q)1{6Ul7k&fDLk0eFwO)H_~5E1tx3p5c`u@a6Les{c|l`8>=q6C z{543GBJ=d9P<2c0Ld&chzD=`9V`xi2>5jUBBh@XJy((rd$k;OHf?f}uRtLZPoy?b0 zM%N++VBA@d1*?7msl}RDG}|6F)KE=k3lvv~3z03@Cqflw3d?Dj_}d0ta;6aZ3z}oh 
z?d|O~g>O;YR-z@OM=Gw1`c*?G1_y*46=)%k&+fRB9@@umuv0RVtc)$W&l<Ah0!)dj z>(i%At!2U+Z9O@bm{gM{NSjze`Chfs82bVRx()Ueivm$B`)1)vk<JH}=5OVcMZ?BE z>|dJSOYU5vZC^>u$>ArFz;%3?8?}Yo-*19~-K4Ansn|vg0tln~hevZ^VD9}j^To*~ zr@5e&3W&E+E8XIq_@*8%wa2e>98UkT$6l$&aG0AQ=r9dg*(fo=Q6i61MS@xMEa*7O zS94;M3?-I$gQZF|En79SIfRI(8w*SFD<^(3BC{DOi8VZYl6<7dnqv<C6n68b_)>5) zyE5lO_L(6X!Qd%t9@g*VKa<)6#{VXYNE2tF?Xw?f_5Z2pXXf&ECqw-9AVR;kD?EFb zODc~~Z~-TInM7BJaW&dkNi?n=NckgzR7(~?gOG7@-E9UQX0$*3YIjI&^cw>{v@o>e z=y$pu)N#WEdO~8Bp+Ny^mU>qGn(sLnQqmj?*D8^TG?VdAg<Kr>czwm7+^hEUABnVf zPv&E(3n9ZR4r={5pYg^VJ}Cw2I)_#L7-oVMWcPkq7Jr@5#YnP5k$L=neIFaSTN|LI zO-v|N@x$ftjyH?uBLi(pNh~@mxLZAH-W7@BcQ}clw);AShRc1t?8DH?Tdb9N<BS&W zkcm<I&p#%I5dFKj2Y)*P>i7U4T}=bVg?CC)Y1e{Qu%FhUT!qs4%eQYzuD+Y<)e;xg zkdO<P(2>_vW<8$Om4~wCjkYgMIwp#Kf+7Ir+ovEBP|}I{@iVELsM!Wg!Bdp#zuN#& z3@r6k*;K_FC&xL@a5$BK%&4-bsEP^!56`omaScFP7}q{Hn3s<7Tm#Zsjs92nZ{q}> zQ}wDh(m{Ag(uhQURaxGzSN~AmPA#^|`GiUhJLWv^PIKG!yWUlN{*tt=<n?_0c4WAv z8;pRMKqI%O?!k@w6K5af#?Qd?MZwF{vm*p(L<7o<j5rm8J3WUWM0G8v+FY?6)x0nG z$#oPQw#XPVMQ%Dk99CvUG+HX-vib~~-^yND1!;ToEyza0$&3B(2RBac(A6@*h&~u( z!mkYdG9?`^A+3H&Rbn_K<0)oJ${NABypjq*8@iY7MM$TYO*o8`#UoPuva&n_uK3cG zor5p>G!~i?#)(_9a}h!B_an5LPKiyXVdpDkK9@Z<T*G@-5q0P!0!(?fMp?EEU+?X( z(iREFYN0S&OpwQ)%DA(}VUlL2PPL6yxvrQL_;&{BgZd8^l7SPAe{-w_L|migA%YN| z757=kt^|ABv!DG1cRr|MxiB1`BrQPwjeQB=gIKeErMpJ;6Vavg@iW<)**9?vd@2Sl zSgbB}4za_cwB`rRCgv0PjUDd=GLOL7+do#3<1ZNh9TgS1UOd|R>dK0G2VT%bp->fA zXVQzR9N`yw=F-X5DEr`pMkRkj_^-)bi(K+cc8?Uw?S+8m7GW<Cwzy(A*hu+jLHbiN zgR~l~-vOI9MDZV_@<&>ecfRKZr*u%(L*+TWz+2qMN!^ml3Ij&ROfxPn&L0+yJT&$n zUc$&lYt;=rv%k)=18S1-%5i0|ZjbFhEuwpeP?Q<k#j3{EivbyG#QH`IsY)K_%&ZVi zn}_q!1^Kl1P220q+~%C5g^A4Lswj>?d8>lYWq0+<^_JDNVqf^2bFQ73T%PZ7mcn8) zKo00ehqrrX<{%qel~p<^Ha9pL)VkNRRqiD@ArEEbL(?8zAX-D~hZNr-RlKFseEQ@A zat^jXu33NoQRhAIUOGm{wNfUFro80sFIGk~|B6xfMU<g@Zz`xcxvDgrAq8cc{R}?r z(T{9rw?0s9KQ|!tRYzBft8M0@AUGXV5BI-rjXVar_-HU!k^Pa|X|&g1;pT~zFv?pm zSjt2OufPi#stFA?+%Y}X!1~h4%F_<?3L6Rw1GWi&H0)o6llRX~!@cTXlW47bqt}aa z`0~g9&4m@D6`aqm7?AYsP{v9VkWz0;fuLTty!P_9V4aNeuL{u$TtnNd)`|I@_wx4G z88ZWYLWWaIJt^RGi<{9af|_cq$m*#W7$z6}l!*EAK7FNpZnabH#D)pZ^iBNZriS9| z!fn)?ZzS?N|4A_!`=~O;qH4EB{kimcll|-AC$e2_zz=VVp&S=rmuEfJY*_+#8`B9K zp7NPh8Vh>%sjqgJz>zHeWDNz|2b3qd<%#-4<25^i8$&;Xw8Y2B<N4<1w-18jV@)*= z3ShJ7JX!x!Q>45@k(?BA&<vq#2kxb9CFff4`Ss`TlAMmU)9s_>E?!;9!lf&ESXm=m zz&vk*@pFuuJA*!hOA;~iva#Hx{ce$O`o+Zk|M!+oJyKUj)t5@-7<{NcM?X5;nemGz z?>P@X^47m&3+AgijHsk!zt}kqh6X?OH(=Z!jl0716ZS|AQHeUs5$!t3o_4E0!{Q1@ zzsl&~C-4?a(vVfnY?(ub&~v`Ne~yy34dX@y#P*{hoQWEP!Q(?^VN83CBrv4;hn~*y z={eVQHQ|ZqU+dkTZ;#14xY;-bb3gmj<_WVg@X{VC>Ahqx|4zH}RoZPhQxg3B`WOH$ zXJ}D!V2f_QQUpHC>_`rQTW?I@==}B0J6^(S1Mw9DrTc#b2C`Ac@dymUes*W5cgg~G z5RbiQqqewyWsZ<QGW4#75ZWzn*;h<#$3kj*_gX5!^vL5d6l@X_ZcF<7CUD4xwi!Jj z;gY^6JR(x*f1@djE~F#%e`VKHi`arMn4d2)0%6gXv;*G?Qj-c!y)6DcmwkygJS?*i z254$6lZTD$y0CYu7HjxStOVnQx+MTI_^MRVO(Ms!0{(+E<tCcwsLy__H{I@{qfu9f zGQYLx4kpz1z9sD3s-`F?Pv?_{px@MW4Oh3fpAH?a@3(L+;Yvq*T{C?f#wUax1s2w7 zzDiukmzU2!n&2U`QIo|E<1Y-3?1Y}ol%XpWMbp<&Mvd7D3WXIq1@|E4=jUe+YyVvO zjz9lyaf#FyCzfY3I@4D`kM=S=t7i(KGi+Nh9R)LOobAsz(P-{qDXO~mGR~;thuyEV zB}2|_CwI9M!eLbR9Q=N3qIXRV4-dy=O_Ggb;AM-2U0z<shlGS=14^u2Ss)7DDdt=e z5#LcFN4oqhLRt!NuUs;Vvd|7o=4yvk!o(ZR+4}Ejzc;s;H8TaZp_;PTL!?HlYfKnU zGP6Q`a*Q2y+ljxSsK@z|osJbj+`gN_eWnCqyLm76q*0QdW|pKc+9eHEUt0#*+sUw+ zfSx5`Fp+{5^7Uh9SefI@xIaUg_^M^~5B<B4K}@MBK5>3<hEQ&12@$&t`{a<T74&~w zEIoay6g3Gw^9us%&`Dka69@O~fnOev6&61JUreKo=E*;zRzhXn@B1imML(f}L}@!o zrvfrFt(|?Hj^TZ_k)L1|H_X$fzdGTB4r>G4W0Y`rLZx7&QmrCKWlw%-MvM`Oj>F<L z7FFssm$a?B8g^-Sr`>^;+)-@tTh%&<%b=ifiyr>Nx>-7z@xhR+Bj5jXdH8`Q7oW^P zo8Y?ha^coI^##rTc8H7-yXBqj{<NCJE3S-s!UB_c9Hw5#64!9|{Pm%SmT7G$s7yG8 
[... remainder of base85-encoded PNG data omitted for readability ...]
zE-D5jk_`Ow<tk4M1uwa`YSfPqHt^g%M+<o@O{e@yxOu?#O|^@Sq-dg`d_o>4X1V$8 z;GmQ)3A6UMZaAWnajoVfoE?(0XsRE-6n}l8G#9Y(FiUF;ujg*f5S$`~>avwuXXspY zX}Vv-Zkpf>(RGB;h^-hYy1sitZ+Y#Kfs9%D<LEHL=teA_nWcqtixd5NIvQ>hV2r`2 zltL*oI3Zd!(@PkQf96!y&0LY8U#Q(lfxPy%I2-7nCfFmLtp>mSI}4Chx;VLCNh^#i zL?EGc9u8Rwj2>^}-^!#+u4SLyus(BU7)vWIC};4D!==qKyi74cps@6gCas~B80x=n zThB!QcfS(5XyQsLNM!R3WS;(~CX3uEJ8N;<{}HuwgI_h#P@4Wl#w1qCvZ0G$#a?Cg zO$Cgcm$iSL{?y1N9{C}peGOzbqNL-QHq|r;8JisivXHC9$i|Hg4PCsq`8viNvteg4 zL8^AZc8TfKhdoHJtZW>=!JKa%_1%^uQAXl84AF`=*r8;q7FBy@Pm%*y<PURURZsPU zC8^z};E8I&L6*SYd@m*(E7S`%BsIgwdY!iVi(y;Zn+JRh=$7ZqX4Hp=hoR2s9WH^N z9mgkBJTUd5L3~8{)R(V;^jw9L(e<$8t|VA-DM4haTpD>Oo?f~Rs$E*QsQw1DQM$vO z*0)6q6DrWdO*>b>_G&H)Xf+&Z6jw5Cd5&b;<b=vOi#eA12APa6G>2mmpsW=ybaQ=? za~L%t%p9KfC>%Xsj*+*>Fjw5cDlqxU=Gt^VI8m(biNTnB^fC_&<<Z}vObp>2iROK; zgIy^)96Hxspe=ZhVM6O6xe(Y1E7Ru)t|Ss?1y|!(fTc9IwD8=N3%*il-EMdYS+u+6 z2TDd;V$0Q-jha1jYpX<ZB@f|zEpU0^q=>^_r``KvU06L;@t}kb%C~yssNdprcX1@G zgyZt&mXN<K{FE`-z~+Ih!Yj$a_Z!ziR`96H;2*M3knua4r|wYK*yl6`Q@j@&>1Mp@ ziCCJW2DdpiweIs&bLo`XUBsbOin7+V!Ogh}QD%OA{y=g{N{o$*BVXTK3O+60${@?8 zD_(>kX#FvaQHOkV9^%0W%t@Tb!F_t`CReZ6{kcw^(udfrj`LlO7Q9YW6bP#2FT3fs zi*->CHMAI&Oc&JLS4p|ArzR^ZDx!o8qJ?v`<og&hv?7G8M&@6EvQRA}&dSvT0>nc& zK38{k?%1@i=m@fXL4EST(=PAfprn@?36Et=htgE7Om{aNM@doGuOhMfEkQOLH};(v z5)yLsOS>H=LR?FTObK+exv6j`Uj(r_j!b?j0J{tQzBUUhzFs!h-<*o6K0K%3qoHTr zvYYA4{)seB{0-ED77_|lS60RjTgVGuPu0nhST(EDIU(f{lmmYWLETNvz0qTYqF+So zk{zXQo>UeEF;DU*<~?;PHv8u)SPzv_<<q`BWDb$>PsM_i%#oQ8GBiq4haQ*PBgVqL zuUS;}qldnVFjksVMEW}>;$?eqNz&PtAV@=h-BSKP(1zItx!!pEhS8JR+_uY~UE8z3 z7fLf&vTd3~KK9|uQWjqAQ6b}Mr(#<Ee+H^LxA><Q?;24uv<a}Kl9efE9wkFp?-WwC zGxlVF1mAYw<R29_d#!ze$)J<McBg}0RSHs#KKb3pHEPF854s0iS*YxnydC=f;i{nP zCV$&Rcry6|+Dc?xfr*@wRDr5SJ!#&oGvqq10kNA1moI(eeSYD&+!Oiwaw(50hUWuY zy?4Czvf7KUR?RHEZA{=D<cSq@+*mfwbro21ZYQj>Nc$vTbriR_)$ku6XC9IS8qT9j zrx)jlpF3JGtH>5;!LzJoGkaG0vTh&i#Rcz0UcQ@vA)9CmquQ1z5xgvV__K-gK89gy zwHwaiRMP3T9<>duXIaYlo^6vI4smS5OOL9Pt@hd;!)S`n5E;~`Z@oZa#Rs0*PLd*M z9PhT?zE9<NJ<d0O1z88`r?Ovj4u1;i9DA6y&x9#%G0<_bt$!`6PxC{eNf5>3DUAMi z)1DF8I}Wk<V=+g(ADAYt6xOQ1r><1q2J6TsF7Z|BXwiY4A5~*^-|owYCI6tN_Lg?D zf4ST6ncF`S0<)*S{N)l#W-o|v?d&}OP#6FKP)+}jb!>5@XfFVWU8~-zuX@CqwC*hK z0VhV$0f5N@03h1|0NA0R;eY@D@)6KL$N;g8%Ak=c0x&!!03f!9A0X%n0RZD)Z?gCR aVEGU0==~|RnzggwFu>N@(W=Ul5dS}Offf(| literal 0 HcmV?d00001 From abd62471c02f06611320da87293a5fefd31b284f Mon Sep 17 00:00:00 2001 From: chivmalev <lbivan187@gmail.com> Date: Wed, 12 Sep 2018 10:52:54 -0300 Subject: [PATCH 22/34] nuevo canal --- plugin.video.alfa/channels/maxipelis24.json | 12 ++ plugin.video.alfa/channels/maxipelis24.py | 125 ++++++++++++++++++ .../media/channels/thumb/maxipelis24.png | Bin 0 -> 26399 bytes 3 files changed, 137 insertions(+) create mode 100644 plugin.video.alfa/channels/maxipelis24.json create mode 100644 plugin.video.alfa/channels/maxipelis24.py create mode 100644 plugin.video.alfa/resources/media/channels/thumb/maxipelis24.png diff --git a/plugin.video.alfa/channels/maxipelis24.json b/plugin.video.alfa/channels/maxipelis24.json new file mode 100644 index 00000000..5c93a817 --- /dev/null +++ b/plugin.video.alfa/channels/maxipelis24.json @@ -0,0 +1,12 @@ +{ +"id": "maxipelis24", + "name": "Maxipelis24", + "active": true, + "adult": false, + "language": ["lat"], + "thumbnail": "maxipelis24.png", + "banner": "", + "categories": [ + "movie" + ] +} diff --git a/plugin.video.alfa/channels/maxipelis24.py b/plugin.video.alfa/channels/maxipelis24.py new file mode 100644 index 00000000..456cd828 --- /dev/null +++ b/plugin.video.alfa/channels/maxipelis24.py @@ -0,0 +1,125 @@ +# -*- coding: utf-8 
-*- + +import re +import urlparse +import urllib + +from core import servertools +from core import httptools +from core import scrapertools +from core.item import Item +from platformcode import config, logger +from channelselector import get_thumb + +host="http://maxipelis24.com" + + +def mainlist(item): + logger.info() + + itemlist = [] + + itemlist.append(Item(channel=item.channel, title="peliculas", action="movies", url=host, thumbnail=get_thumb('movies', auto=True))) + itemlist.append(Item(channel=item.channel, action="category", title="Año de Estreno", url=host, cat='year', thumbnail=get_thumb('year', auto=True))) + itemlist.append(Item(channel=item.channel, action="category", title="Géneros", url=host, cat='genre', thumbnail=get_thumb('genres', auto=True))) + itemlist.append(Item(channel=item.channel, action="category", title="Calidad", url=host, cat='quality', thumbnail=get_thumb("quality", auto=True))) + itemlist.append(Item(channel=item.channel, title="Buscar", action="search", url=host+"?s=", thumbnail=get_thumb("search", auto=True))) + + return itemlist + +def search(item, texto): + logger.info() + texto = texto.replace(" ", "+") + item.url = host + "?s=" + texto + if texto != '': + return movies(item) + +def category(item): + logger.info() + itemlist = [] + data = httptools.downloadpage(item.url).data + data = re.sub(r"\n|\r|\t|\s{2}| ","", data) + + if item.cat == 'genre': + data = scrapertools.find_single_match(data, '<h3>Géneros.*?</div>') + patron = '<a href="([^"]+)">([^<]+)<' + elif item.cat == 'year': + data = scrapertools.find_single_match(data, '<h3>Año de estreno.*?</div>') + patron = 'li><a href="([^"]+)">([^<]+).*?<' + elif item.cat == 'quality': + data = scrapertools.find_single_match(data, '<h3>Calidad.*?</div>') + patron = 'li><a href="([^"]+)">([^<]+)<' + + matches = re.compile(patron, re.DOTALL).findall(data) + for scrapedurl , scrapedtitle in matches: + itemlist.append(Item(channel=item.channel, action='movies', title=scrapedtitle, url=scrapedurl, type='cat', first=0)) + return itemlist + +def movies(item): + logger.info() + itemlist = [] + + data = httptools.downloadpage(item.url).data + data = re.sub(r"\n|\r|\t|\s{2}| ","", data) + + patron = '<div id="mt.+?href="([^"]+)".+?' + patron += '<img src="([^"]+)" alt="([^"]+)".+?' + patron += '<span class="imdb">.*?>([^<]+)<.*?' + patron += '<span class="ttx">([^<]+).*?' 
+ patron += 'class="year">([^<]+).+?class="calidad2">([^<]+)<' + + matches = re.compile(patron, re.DOTALL).findall(data) + for scrapedurl, img, scrapedtitle, ranking, resto, year, quality in matches: + plot = scrapertools.htmlclean(resto).strip() + title = '%s [COLOR yellow](%s)[/COLOR] [COLOR red][%s][/COLOR]'% (scrapedtitle, ranking, quality) + itemlist.append(Item(channel=item.channel, + title=title, + url=scrapedurl, + action="findvideos", + plot=plot, + thumbnail=img, + contentTitle = scrapedtitle, + contentType = "movie", + quality=quality)) + + #Paginacion + next_page = '<div class="pag_.*?href="([^"]+)">Siguiente<' + matches = re.compile(next_page, re.DOTALL).findall(data) + if matches: + url = urlparse.urljoin(item.url, matches[0]) + itemlist.append(Item(channel=item.channel, action = "movies", title = "Página siguiente >>",url = url)) + + return itemlist + +def findvideos(item): + logger.info() + itemlist=[] + + data = httptools.downloadpage(item.url).data + + data = scrapertools.get_match(data, '<div id="contenedor">(.*?)</div></div></div>') + + # Busca los enlaces a los videos + listavideos = servertools.findvideos(data) + + for video in listavideos: + videotitle = scrapertools.unescape(video[0]) + url = video[1] + server = video[2] + + itemlist.append(Item(channel=item.channel, action="play", server=server, title=videotitle, url=url, + thumbnail=item.thumbnail, plot=item.plot, fulltitle=item.title, folder=False)) + + # Opción "Añadir esta película a la biblioteca de KODI" + if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos': + itemlist.append( + Item(channel=item.channel, + title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]', + url=item.url, + action="add_pelicula_to_library", + extra="findvideos", + contentTitle=item.contentTitle, + thumbnail=item.thumbnail + )) + + return itemlist diff --git a/plugin.video.alfa/resources/media/channels/thumb/maxipelis24.png b/plugin.video.alfa/resources/media/channels/thumb/maxipelis24.png new file mode 100644 index 0000000000000000000000000000000000000000..0fa881d1160fb6628d900507d81821f526c7694e GIT binary patch literal 26399 zcmb?>Wl$Ya)8<8UaS!h9?(XjHTp&PjcXtU8TrTbs+=3G%xVyU(+;w^P&;Hrk+ONK^ zYR+`ksWUx&o;uV0bWcaAC`lv1<G}*}03=x%3AIo8{+|Ja{k-F){#N}IaE>y%E&u=m z#(xF`ATtXG0DxEiE-tR3V&&lK;9}+ANFpmPPU7h7VDa7d8vx+BlC5s3p?>^T@L}Ut zOg=I&S>8bn3rM0S788h>Kt)Fihbk9In!k*t`Ug!?5*mLfFA^#)F7OAIDg$CP{L+^_ z(t@~<!pP{6mu;Uyn}v>tgYoyKMWMrAH`xu7UwVLusWR-UoPmhtVnk?LA%p#YxA&Mt zg5W6}0jR(Rb5fT#atOeqzkmQeS<ja)0EFi(0ua!n2<~Aai1<J`5z8=u2n>bjaZTWn z2L@sSM7-m~N&q5K5P{k0RO)~{Xn@J6ndvS-g%Myv9dIxU2+V%W@PPp6CzE19<Rt(| zFup}f0Ic`{zb3V!B>_4t089({eqO*5BY;I#+d>9V+XCpBKt-qr01*K!s*z!|0B9e8 z$q+fYCm=KffGKsYEpSC&je1D;iBv|tKr02WWRO0P(dmn}HWLxege*2aCKm`2lqtg0 z>y?4a7Q}(Fbu$J4<RxHz?)K)*a{{?~VuCBK8QGZeq#O2w%+z%EeSfmjQ3L?kcJ-fr zXJBr?3*?6lw0ke0JcqV2LdbJ_h_R?g5orSC9j|ELIQ|D4$^4(K%gg(FdrNZtV)~#F zZNGQRUgI9^H`^zFf!Bw-t*#x4AU4AwY3RqT-mzPSVuHy;xKPuzgLvuJ7KD#i!dbF@ zS<_Y>I?P>Fbf-9p%pW&w#bi-piL?WhGau$Vn+z{dLYtg`T5BN)p9ID-Lo=v*d1_Q0 z*PfjR0N}da;m;fm5Gv3rWP8%%<3jjNI*$?%Xd#>E1OOOGP%vqXGzbp@0RV}-Ai7#n z{M$b`3_Y*}e_+=BAiSDzg^E%1_llv4!J7vXIGfP^3KpXYAFm~(Ghv<+A>`;$wF*yi zL}u#M{EaN=i1G{q=KP`Q2!ewa9Rgy2NLL~uOrv$k<AB&1VUMJLWT4SwC`g7QF;&Ur z6F6m9)uJ^>G-U{{1)U*zBMhY468MJzY{9SL9n!4HLG`MFbBImit!3PJp;841Q|7L$ znekuqz*EkRc!EE;@(-sftx#6tg~SJ0dX8|uu=m@tF#YWltK(sSOEMd+Bm0F|`4_KN z>laje)ScO6od+~}kYo=W7qo{M35ZBkMn^$MX-TGxgn)trZ3T`S4k}c&mzpf8Oumt- z4ehBP$CQmVLPv&yd;vWkgBjH)QoNs=0zFEcjk@(Gq-<T0#tiz5)Qr6v`7uK|m_r7S 
[... base85-encoded binary data for resources/media/channels/thumb/maxipelis24.png (GIT binary patch, literal 26399) omitted for readability ...]
zg%-`+=yx8`9<8-Zp&TFQZ}0jkQX!W0akC(oGgUwgeE?iU^~BoJDJLeHqU9~nz_M`& zpMDT#16S@sjCt;ky&L;i&>7-4Ci<ZEJ7=rC;cp@)EVQJblwAI4^;A>LE64Pq$06<7 zFu!Y@6-;dSrktJ<Y2H)F&l{2LJb-7s{9bp&*0pGoJ@ozcyU4^%KX=+g6>7o?E0x*G z&x<uI7u{jhIeDA=*>XdBgcnRD@X5x_k-axn`qkL}P9A9F5&9;m-Fvs-_6BFVabywR z_??{u+%a!g_S*~q7I$pc?w=c}mDnMF{*J9aEBI1n+Mgze&rVar`A=(ENL%f*BGO>) z72ZbKH^{ZG4#h^BgJ&N@_A?5L4By<nO<wLbn`5GhRGDqsd}(BwswFTkaX{X1*P=BV z>K8-j1Ey-aBgo?N>D(d!yI?T}kYgh>-4L3SFG3TTRa(2=)xJ`5^?4amiqj8aJCe=^ z(Hqh|)7rEBspD*ZVQuGI1BfEG^ie!H{m}@|q#XG)@<(T1!@BBGbAq@$_|N2No3%nK zvdIKnTWRAt&y0P1p?&FWajBy9kQi}vq7P{-=sH(`z|dkSl8TGG&{Vx6-h<73F=11K z)aBb9peJy!V^e9Sf%a+$1&hMOJYIVM8uWCV{SvdRov(L%mX-o=h5MWP)MY+lipTJB z3~S55)zB1Nd5on6yR=9sO$lwry5L}iO}YOYIW05wT%WTbI<d3I*-q$8Rm8y$xlkep z5G#{P2>(z$iOgM`ejWQkDAtL=(b>PC+0*M7nT)rDkW%AdQ~jRcFfE>Y+ql2_XN;)x z`boPuibds}IA{jcOMf_0=lC_MdAT(aZA8Ot=lXQ$jxnFIv!~rqFEDZtCUJatdnq)N zXYHI$vDK!=?kF;oVY}_kaTI<4COp_z(^TjYHt5rW^oCQ%h6>9B@0qMz{<AD0tw$K@ zzDrqgqQ-_^oOTj{1|UcfZ^q9zoMP%69^#kI3qngnVP71n3~h2$-T1&Q0=;vQg!C-| zJVza2L0ZSM=pYF3)MC9^3Q^bY&0L3Ab|4z&Bdk&RI^{o{9f?D_?$i^y*IX)8d~=&k z5?<>sgAUl2tJI_?tJE5k_Mn&Oe_9RHBwOkfzSpS#m+n1kLcQv@Dt9RNlVnkNLxf0M zqiJ0Pk>T`%qc<P>W5|&kYI;7$HBetXMl%ZZ$JRa~>)s;tI&z-DT_-mkq5<I2RuV?! zJMGzEd%@f&QL7y>(H<LSSWxnh<X$^tbwwP#;SwH5OBC-MOGd-VU0rq3QMk8!yF`G% zTrugqeT&3M)-S@95R~U3ECdhYC<==&0imdNbVX)z*CCj_9Y>M(=MsC9Y0w-2zB&6N zB>jrtqnzi8?C(=du34)Ti4HrU7kD<%a97mzIJARo!q#&|>0ein-}$<L_g`9~vv8tA z#~%{d%Kl{=!VP1>P=dhYuKB}F<gzUB^=D?~gg8NC_njehd9hxQ3q2hVLuKely=Tm< z<9sa}Dm{&l3h<s@VOl}O<}P%$3UqnK#-Rz^Pn(?mP^8o29B&UZ4v$i0I)4vDNHsIS zZ&2;cRTykI%tLd=nJ<Tgfm7r+2hyETt$uv(M3M7A=@MRAYmFW@;2y<rau$w23cl%= zpV?|vqc3PfIv7jVV|5~{IwrpxUu@i6#k%8$mI^bM?u<(EK;8vN(6brh`l)Db_{y5_ z6obw-@~tap>|Ll=-DUZ+>&<092)gE+YYc#d1U6=eTb4Wl6=U43zS3;OtmrNk^q|9Y zfuPG){7LuOv<47UWZp!-$F=#tKQ(u}$tVhcJ>u<duTts$M%Fm4!7uuDH73XfP#lEQ zOA(h*$X-ENXQF-|(2CznJT{Id1D){%z#J#h{K#O@Y*qJ691UW{S%O!ZzPrZnmU0AN z(fm*)IV+C1<yGp;nR^SH$-4+osaQ^Fb`&w;ZuFd#(8L=B$;3!S#4eXJPvKK#lbi`F z<HQ(#3sG01BiO<|n2Yup63;;3^+2mrM+!ue%s;*ZiSM~jn~?&(9p7CF^BO^gAGygH z%&GZ=2(q)7qPx*_*N-iR)SC7CQrUBxu8trIn`41G`ZA`C_tj2l+Nl?pid!gKG$T&& zYgoA}eRusI2}>_(o>>+!Wo35#N+<|wH*Z8F54sK);Oh+t$HKn3`gOk=)fB*hGo%)A zvF)=xZvPqC;MMjsDVW%zGYP(pzgXq7DEehuM}Z2?AqFQ&s3$WlN!Mul(nQatVy(xK z3p=Ak2+I7d_<bK5b2rF^9A_vN45V*H`v5X=mqRW7@h2m#Su}qY@O8?Z_t}5Nkh?sA ztx-l?&*OeLUVPxq20DqfY2NMZzf1FI3b8S`inMTEpL&BFhYc>ZQo4Miz>&@FuB^>* z(k#B7PY}rXsz}HHPFuZReuREwIa!<`fX_|LhViI41|}&(qlWDDgn|xLhNrXRzP;zf z(Kq-3t5EeL=jDh$3cKs~dXSqIX?$EPRAVK@^+<FReH>1`A^(4YIsaeS@qfia|4&CV w{~zr2e>c$@{QUn5u>D{6<g7OR@$re6r_%=EG`$#_K>kESMOV2-(dNVd0yDG8YybcN literal 0 HcmV?d00001 From 0593322cf0829c9e52cfa55c46b413951d9e366d Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:42:36 -0300 Subject: [PATCH 23/34] correccion de enlaces --- plugin.video.alfa/channels/animejl.py | 1 + 1 file changed, 1 insertion(+) diff --git a/plugin.video.alfa/channels/animejl.py b/plugin.video.alfa/channels/animejl.py index b02569b7..ff65b206 100644 --- a/plugin.video.alfa/channels/animejl.py +++ b/plugin.video.alfa/channels/animejl.py @@ -161,6 +161,7 @@ def findvideos(item): itemlist.extend(servertools.find_video_items(data=data)) for videoitem in itemlist: + videoitem.channel = item.channel videoitem.title = '[%s]' % videoitem.server.capitalize() return itemlist From 814a24c1ceba30bcd667c7f3f454148ef868ba1e Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:42:58 -0300 Subject: [PATCH 24/34] correccion para torrent --- plugin.video.alfa/channels/cinecalidad.py | 2 +- 1 file 
changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/channels/cinecalidad.py b/plugin.video.alfa/channels/cinecalidad.py index 476b66fb..912048a7 100644 --- a/plugin.video.alfa/channels/cinecalidad.py +++ b/plugin.video.alfa/channels/cinecalidad.py @@ -324,7 +324,7 @@ def findvideos(item): url = server_url[server_id] + video_id + '.html' elif server_id == 'BitTorrent': import urllib - base_url = '%sprotect/v.php' % host + base_url = '%s/protect/v.php' % host post = {'i':video_id, 'title':item.title} post = urllib.urlencode(post) headers = {'Referer':item.url} From 78693bb5ece9d23e024bddc2a7532e298f59c605 Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:43:17 -0300 Subject: [PATCH 25/34] correccion para findvideos --- plugin.video.alfa/channels/locopelis.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) mode change 100755 => 100644 plugin.video.alfa/channels/locopelis.py diff --git a/plugin.video.alfa/channels/locopelis.py b/plugin.video.alfa/channels/locopelis.py old mode 100755 new mode 100644 index 7173701f..007c3680 --- a/plugin.video.alfa/channels/locopelis.py +++ b/plugin.video.alfa/channels/locopelis.py @@ -355,7 +355,7 @@ def findvideos(item): new_url = get_link(get_source(item.url)) new_url = get_link(get_source(new_url)) video_id = scrapertools.find_single_match(new_url, 'http.*?h=(\w+)') - new_url = '%s%s' % (host, 'playeropstream/api.php') + new_url = '%s%s' % (host.replace('.com','.tv'), 'playeropstream/api.php') post = {'h': video_id} post = urllib.urlencode(post) data = httptools.downloadpage(new_url, post=post).data From 1a632b7d86b3d24654f36985cea1ceb12610deb5 Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:43:52 -0300 Subject: [PATCH 26/34] agregado user agent para reproduccion --- plugin.video.alfa/channels/pelisipad.py | 2 ++ 1 file changed, 2 insertions(+) mode change 100755 => 100644 plugin.video.alfa/channels/pelisipad.py diff --git a/plugin.video.alfa/channels/pelisipad.py b/plugin.video.alfa/channels/pelisipad.py old mode 100755 new mode 100644 index 3d317d76..63034e92 --- a/plugin.video.alfa/channels/pelisipad.py +++ b/plugin.video.alfa/channels/pelisipad.py @@ -519,6 +519,7 @@ def findvideos(item): if item.video_urls: import random import base64 + item.video_urls.sort(key=lambda it: (it[1], random.random()), reverse=True) i = 0 actual_quality = "" @@ -534,6 +535,7 @@ def findvideos(item): title += " [COLOR green]Mirror %s[/COLOR] - %s" % (str(i + 1), item.fulltitle) url = vid % "%s" % base64.b64decode("dHQ9MTQ4MDE5MDQ1MSZtbT1NRzZkclhFand6QmVzbmxSMHNZYXhBJmJiPUUwb1dVVVgx" "WTBCQTdhWENpeU9paUE=") + url += '|User-Agent=%s' % httptools.get_user_agent itemlist.append(item.clone(title=title, action="play", url=url, video_urls="")) i += 1 From c78f02b3b26a4aca6f2b03b2bc8f16580f017367 Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:44:07 -0300 Subject: [PATCH 27/34] correccion para autoplay --- plugin.video.alfa/channels/pelisplusco.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/plugin.video.alfa/channels/pelisplusco.py b/plugin.video.alfa/channels/pelisplusco.py index 7b07366a..f02deda1 100644 --- a/plugin.video.alfa/channels/pelisplusco.py +++ b/plugin.video.alfa/channels/pelisplusco.py @@ -356,7 +356,7 @@ def get_links_by_language(item, data): patron = 'data-source=(.*?)data.*?srt=(.*?)data-iframe.*?Opci.*?<.*?hidden>[^\(]\((.*?)\)' matches = re.compile(patron, 
re.DOTALL).findall(data) if language in IDIOMAS: - language == IDIOMAS[language] + language = IDIOMAS[language] for url, sub, quality in matches: if 'http' not in url: @@ -403,7 +403,7 @@ def findvideos(item): i.quality) ) # Requerido para FilterTools - itemlist = filtertools.get_links(video_list, item, list_language) + video_list = filtertools.get_links(video_list, item, list_language) # Requerido para AutoPlay From 24f7a47fea177096c34f3a48b04b3d4f0017d709 Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:46:23 -0300 Subject: [PATCH 28/34] Modificado para utilizar user agent global --- plugin.video.alfa/channels/tvvip.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/channels/tvvip.py b/plugin.video.alfa/channels/tvvip.py index 36b7ee4c..5045272a 100644 --- a/plugin.video.alfa/channels/tvvip.py +++ b/plugin.video.alfa/channels/tvvip.py @@ -620,7 +620,7 @@ def play(item): data['a']['tt']) + \ "&mm=" + data['a']['mm'] + "&bb=" + data['a']['bb'] - url += "|User-Agent=Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Mobile Safari/537.36" + url += "|User-Agent=%s" % httptools.get_user_agent itemlist.append(item.clone(action="play", server="directo", url=url, folder=False)) From a6eeaf333d19d5590aed8dd4175add9e65144c31 Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:47:09 -0300 Subject: [PATCH 29/34] Correccion de enlaces + autoplay --- .../channels/ultrapeliculashd.json | 14 ++ .../channels/ultrapeliculashd.py | 139 ++++++++++++------ 2 files changed, 107 insertions(+), 46 deletions(-) mode change 100755 => 100644 plugin.video.alfa/channels/ultrapeliculashd.json diff --git a/plugin.video.alfa/channels/ultrapeliculashd.json b/plugin.video.alfa/channels/ultrapeliculashd.json old mode 100755 new mode 100644 index 03c48a23..84ea556a --- a/plugin.video.alfa/channels/ultrapeliculashd.json +++ b/plugin.video.alfa/channels/ultrapeliculashd.json @@ -19,6 +19,20 @@ "enabled": true, "visible": true }, + { + "id": "filter_languages", + "type": "list", + "label": "Mostrar enlaces en idioma...", + "default": 0, + "enabled": true, + "visible": true, + "lvalues": [ + "No filtrar", + "LAT", + "CAST", + "VOSE" + ] + }, { "id": "include_in_newest_latino", "type": "bool", diff --git a/plugin.video.alfa/channels/ultrapeliculashd.py b/plugin.video.alfa/channels/ultrapeliculashd.py index 8c98653b..b847f081 100644 --- a/plugin.video.alfa/channels/ultrapeliculashd.py +++ b/plugin.video.alfa/channels/ultrapeliculashd.py @@ -8,6 +8,7 @@ from core import servertools from core import jsontools from core import tmdb from core.item import Item +from channels import filtertools, autoplay from platformcode import config, logger host = 'http://www.ultrapeliculashd.com' @@ -63,39 +64,51 @@ tcalidad = {'1080P': 'https://s21.postimg.cc/4h1s0t1wn/hd1080.png', '720P': 'https://s12.postimg.cc/lthu7v4q5/hd720.png', "HD": "https://s27.postimg.cc/m2dhhkrur/image.png"} +IDIOMAS = {'Latino': 'LAT', 'Español': 'CAST', 'SUB':'VOSE'} +list_language = IDIOMAS.values() +list_quality = ['default', '1080p'] +list_servers = ['openload','directo'] + +__comprueba_enlaces__ = config.get_setting('comprueba_enlaces', 'ultrapeliculashd') +__comprueba_enlaces_num__ = config.get_setting('comprueba_enlaces_num', 'ultrapeliculashd') + def mainlist(item): logger.info() + autoplay.init(item.channel, list_servers, list_quality) + itemlist = [] - 
itemlist.append(item.clone(title="Todas", - action="lista", - thumbnail='https://s18.postimg.cc/fwvaeo6qh/todas.png', - fanart='https://s18.postimg.cc/fwvaeo6qh/todas.png', - url=host + '/movies/' - )) + itemlist.append(Item(channel=item.channel, title="Todas", + action="lista", + thumbnail='https://s18.postimg.cc/fwvaeo6qh/todas.png', + fanart='https://s18.postimg.cc/fwvaeo6qh/todas.png', + url=host + '/movies/' + )) - itemlist.append(item.clone(title="Generos", - action="generos", - url=host, - thumbnail='https://s3.postimg.cc/5s9jg2wtf/generos.png', - fanart='https://s3.postimg.cc/5s9jg2wtf/generos.png' - )) + itemlist.append(Item(channel=item.channel, title="Generos", + action="generos", + url=host, + thumbnail='https://s3.postimg.cc/5s9jg2wtf/generos.png', + fanart='https://s3.postimg.cc/5s9jg2wtf/generos.png' + )) - itemlist.append(item.clone(title="Alfabetico", - action="seccion", - url=host, - thumbnail='https://s17.postimg.cc/fwi1y99en/a-z.png', - fanart='https://s17.postimg.cc/fwi1y99en/a-z.png', - extra='alfabetico' - )) + itemlist.append(Item(channel=item.channel, title="Alfabetico", + action="seccion", + url=host, + thumbnail='https://s17.postimg.cc/fwi1y99en/a-z.png', + fanart='https://s17.postimg.cc/fwi1y99en/a-z.png', + extra='alfabetico' + )) - itemlist.append(item.clone(title="Buscar", - action="search", - url=host + '/?s=', - thumbnail='https://s30.postimg.cc/pei7txpa9/buscar.png', - fanart='https://s30.postimg.cc/pei7txpa9/buscar.png' - )) + itemlist.append(Item(channel=item.channel, title="Buscar", + action="search", + url=host + '/?s=', + thumbnail='https://s30.postimg.cc/pei7txpa9/buscar.png', + fanart='https://s30.postimg.cc/pei7txpa9/buscar.png' + )) + + autoplay.show_option(item.channel, itemlist) return itemlist @@ -160,13 +173,13 @@ def generos(item): title = scrapedtitle url = scrapedurl if scrapedtitle not in ['PRÓXIMAMENTE', 'EN CINE']: - itemlist.append(item.clone(action="lista", - title=title, - fulltitle=item.title, - url=url, - thumbnail=thumbnail, - fanart=fanart - )) + itemlist.append(Item(channel=item.channel, action="lista", + title=title, + fulltitle=item.title, + url=url, + thumbnail=thumbnail, + fanart=fanart + )) return itemlist @@ -209,15 +222,33 @@ def alpha(item): def findvideos(item): + from lib import jsunpack logger.info() itemlist = [] data = httptools.downloadpage(item.url).data data = re.sub(r'"|\n|\r|\t| |<br>|\s{2,}', "", data) - patron = '<iframe.*?rptss src=(.*?) (?:width.*?|frameborder.*?) allowfullscreen><\/iframe>' + patron = '<div id=(option.*?) class=play.*?<iframe.*?' + patron += 'rptss src=(.*?) (?:width.*?|frameborder.*?) 
allowfullscreen><\/iframe>' matches = re.compile(patron, re.DOTALL).findall(data) - for video_url in matches: - if 'stream' in video_url and 'streamango' not in video_url: + for option, video_url in matches: + language = scrapertools.find_single_match(data, '#%s>.*?-->(.*?)(?:\s|<)' % option) + if 'sub' in language.lower(): + language = 'SUB' + language = IDIOMAS[language] + if 'ultrapeliculashd' in video_url: + new_data = httptools.downloadpage(video_url).data + new_data = re.sub(r'"|\n|\r|\t| |<br>|\s{2,}', "", new_data) + if 'drive' not in video_url: + quality= '1080p' + packed = scrapertools.find_single_match(new_data, '<script>(eval\(.*?)eval') + unpacked = jsunpack.unpack(packed) + url = scrapertools.find_single_match(unpacked, 'file:(http.?:.*?)\}') + else: + quality= '1080p' + url = scrapertools.find_single_match(new_data, '</div><iframe src=([^\s]+) webkitallowfullscreen') + + elif 'stream' in video_url and 'streamango' not in video_url: data = httptools.downloadpage('https:'+video_url).data if not 'iframe' in video_url: new_url=scrapertools.find_single_match(data, 'iframe src="(.*?)"') @@ -233,26 +264,42 @@ def findvideos(item): url = url.replace('download', 'preview')+headers_string sub = scrapertools.find_single_match(new_data, 'file:.*?"(.*?srt)"') - new_item = (Item(title=item.title, url=url, quality=quality, subtitle=sub, server='directo')) + new_item = (Item(title=item.title, url=url, quality=quality, subtitle=sub, server='directo', + language = language)) itemlist.append(new_item) + else: - itemlist.extend(servertools.find_video_items(data=video_url)) + url = video_url + quality = 'default' - for videoitem in itemlist: - videoitem.channel = item.channel - videoitem.action = 'play' - videoitem.thumbnail = item.thumbnail - videoitem.infoLabels = item.infoLabels - videoitem.title = item.contentTitle + ' (' + videoitem.server + ')' - if 'youtube' in videoitem.url: - videoitem.title = '[COLOR orange]Trailer en Youtube[/COLOR]' + if not config.get_setting("unify"): + title = ' [%s] [%s]' % (quality, language) + else: + title = '' - itemlist = servertools.get_servers_itemlist(itemlist) + new_item = (Item(channel=item.channel, title='%s'+title, url=url, action='play', quality=quality, + language=language, infoLabels=item.infoLabels)) + itemlist.append(new_item) + + + itemlist = servertools.get_servers_itemlist(itemlist, lambda i: i.title % i.server.capitalize()) + + if __comprueba_enlaces__: + itemlist = servertools.check_list_links(itemlist, __comprueba_enlaces_num__) + + # Requerido para FilterTools + + itemlist = filtertools.get_links(itemlist, item, list_language) + + # Requerido para AutoPlay + + autoplay.start(itemlist, item) if config.get_videolibrary_support() and len(itemlist) > 0 and item.extra != 'findvideos': itemlist.append( Item(channel=item.channel, title='[COLOR yellow]Añadir esta pelicula a la videoteca[/COLOR]', url=item.url, action="add_pelicula_to_library", extra="findvideos", contentTitle=item.contentTitle)) + return itemlist From 247e29a573c3d343fe68efa7213f325420ad126f Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:47:35 -0300 Subject: [PATCH 30/34] Correccion en la deteccion de idiomas --- plugin.video.alfa/channels/wikiseries.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugin.video.alfa/channels/wikiseries.py b/plugin.video.alfa/channels/wikiseries.py index 5c97a326..15240f9d 100644 --- a/plugin.video.alfa/channels/wikiseries.py +++ b/plugin.video.alfa/channels/wikiseries.py @@ -221,7 
+221,7 @@ def findvideos(item): language = '' if 'latino' in link.lower(): language='Latino' - elif 'español' in link.lower(): + elif 'espaÑol' in link.lower(): language = 'Español' elif 'subtitulado' in link.lower(): language = 'VOSE' From 046241797a0e70d2ac577b087e3df11596b5ab83 Mon Sep 17 00:00:00 2001 From: Unknown <Delta_minion@protonmail.com> Date: Wed, 12 Sep 2018 16:47:59 -0300 Subject: [PATCH 31/34] nuevo metodo para obtener el user agent global --- plugin.video.alfa/core/httptools.py | 3 +++ 1 file changed, 3 insertions(+) mode change 100755 => 100644 plugin.video.alfa/core/httptools.py diff --git a/plugin.video.alfa/core/httptools.py b/plugin.video.alfa/core/httptools.py old mode 100755 new mode 100644 index 5f2f2355..bc20a2a1 --- a/plugin.video.alfa/core/httptools.py +++ b/plugin.video.alfa/core/httptools.py @@ -56,6 +56,9 @@ default_headers["Accept-Encoding"] = "gzip" HTTPTOOLS_DEFAULT_DOWNLOAD_TIMEOUT = config.get_setting('httptools_timeout', default=15) if HTTPTOOLS_DEFAULT_DOWNLOAD_TIMEOUT == 0: HTTPTOOLS_DEFAULT_DOWNLOAD_TIMEOUT = None +def get_user_agent(): + # Devuelve el user agent global para ser utilizado cuando es necesario para la url. + return default_headers["User-Agent"] def get_url_headers(url): domain_cookies = cj._cookies.get("." + urlparse.urlparse(url)[1], {}).get("/", {}) From 2a4fdcd095c8099a0ce2088caecf1fcb71575670 Mon Sep 17 00:00:00 2001 From: Kingbox <37674310+lopezvg@users.noreply.github.com> Date: Wed, 12 Sep 2018 21:51:20 +0200 Subject: [PATCH 32/34] =?UTF-8?q?Kodi=2018:=20correcci=C3=B3n=20de=20compa?= =?UTF-8?q?tibilidad=20con=20clientes=20Torrent?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Evita cuelgues y cancelaciones cuando se reproducen vídeos desde una pantalla convencional (no emergente) --- .../platformcode/platformtools.py | 36 +++++++++++-------- 1 file changed, 21 insertions(+), 15 deletions(-) diff --git a/plugin.video.alfa/platformcode/platformtools.py b/plugin.video.alfa/platformcode/platformtools.py index f0713eaa..0f7503d8 100644 --- a/plugin.video.alfa/platformcode/platformtools.py +++ b/plugin.video.alfa/platformcode/platformtools.py @@ -1044,6 +1044,8 @@ def torrent_client_installed(show_tuple=False): def play_torrent(item, xlistitem, mediaurl): logger.info() + import time + # Opciones disponibles para Reproducir torrents torrent_options = list() torrent_options.append(["Cliente interno (necesario libtorrent)"]) @@ -1066,28 +1068,32 @@ def play_torrent(item, xlistitem, mediaurl): # Plugins externos if seleccion > 1: + + #### Compatibilidad con Kodi 18: evita cuelgues/cancelaciones cuando el .torrent se lanza desde pantalla convencional + if xbmc.getCondVisibility('Window.IsMedia'): + xbmcplugin.setResolvedUrl(int(sys.argv[1]), False, xlistitem) #Preparamos el entorno para evutar error Kod1 18 + time.sleep(1) #Dejamos que se ejecute + mediaurl = urllib.quote_plus(item.url) if ("quasar" in torrent_options[seleccion][1] or "elementum" in torrent_options[seleccion][1]) and item.infoLabels['tmdb_id']: #Llamada con más parámetros para completar el título if item.contentType == 'episode' and "elementum" not in torrent_options[seleccion][1]: mediaurl += "&episode=%s&library=&season=%s&show=%s&tmdb=%s&type=episode" % (item.infoLabels['episode'], item.infoLabels['season'], item.infoLabels['tmdb_id'], item.infoLabels['tmdb_id']) elif item.contentType == 'movie': mediaurl += "&library=&tmdb=%s&type=movie" % (item.infoLabels['tmdb_id']) - xbmc.executebuiltin("PlayMedia(" + 
torrent_options[seleccion][1] % mediaurl + ")") - if "quasar" in torrent_options[seleccion][1] or "elementum" in torrent_options[seleccion][1]: #Seleccionamos que clientes torrent soportamos - if item.strm_path: #Sólo si es de Videoteca - import time - time_limit = time.time() + 150 #Marcamos el timepo máx. de buffering - while not is_playing() and time.time() < time_limit: #Esperamos mientra buffera - time.sleep(5) #Repetimos cada intervalo - #logger.debug(str(time_limit)) - - if is_playing(): #Ha terminado de bufferar o ha cancelado - from platformcode import xbmc_videolibrary - xbmc_videolibrary.mark_auto_as_watched(item) #Marcamos como visto al terminar - #logger.debug("Llamado el marcado") - #else: - #logger.debug("Video cancelado o timeout") + xbmc.executebuiltin("PlayMedia(" + torrent_options[seleccion][1] % mediaurl + ")") + + #Seleccionamos que clientes torrent soportamos para el marcado de vídeos vistos + if "quasar" in torrent_options[seleccion][1] or "elementum" in torrent_options[seleccion][1]: + time_limit = time.time() + 150 #Marcamos el timepo máx. de buffering + while not is_playing() and time.time() < time_limit: #Esperamos mientra buffera + time.sleep(5) #Repetimos cada intervalo + #logger.debug(str(time_limit)) + + if item.strm_path and is_playing(): #Sólo si es de Videoteca + from platformcode import xbmc_videolibrary + xbmc_videolibrary.mark_auto_as_watched(item) #Marcamos como visto al terminar + #logger.debug("Llamado el marcado") if seleccion == 1: from platformcode import mct From b881e8c9e7f891098394e358d83d4677f175dd85 Mon Sep 17 00:00:00 2001 From: Intel1 <luisriverap@hotmail.com> Date: Wed, 12 Sep 2018 16:06:57 -0500 Subject: [PATCH 33/34] v2.7.4 --- plugin.video.alfa/addon.xml | 16 ++++++++-------- .../language/Spanish (Argentina)/strings.po | 2 +- .../language/Spanish (Mexico)/strings.po | 2 +- .../resources/language/Spanish/strings.po | 3 ++- 4 files changed, 12 insertions(+), 11 deletions(-) diff --git a/plugin.video.alfa/addon.xml b/plugin.video.alfa/addon.xml index dde43615..8bb08a24 100755 --- a/plugin.video.alfa/addon.xml +++ b/plugin.video.alfa/addon.xml @@ -1,5 +1,5 @@ <?xml version="1.0" encoding="UTF-8" standalone="yes"?> -<addon id="plugin.video.alfa" name="Alfa" version="2.7.3" provider-name="Alfa Addon"> +<addon id="plugin.video.alfa" name="Alfa" version="2.7.4" provider-name="Alfa Addon"> <requires> <import addon="xbmc.python" version="2.1.0"/> <import addon="script.module.libtorrent" optional="true"/> @@ -19,17 +19,17 @@ </assets> <news>[B]Estos son los cambios para esta versión:[/B] [COLOR green][B]Canales agregados y arreglos[/B][/COLOR] - ¤ allcalidad ¤ cinecalidad - ¤ repelis ¤ cumlouder - ¤ porntrex ¤ crunchyroll - ¤ pedropolis ¤ pepecine + ¤ repelis ¤ thevid + ¤ vivio ¤ danimados + ¤ sipeliculas ¤ cinecalidad + ¤ locopelis ¤ pelisipad ¤ divxtotal ¤ elitetorrent ¤ estrenosgo ¤ grantorrent ¤ mejortorrent1 ¤ newpct1 - ¤ danimados ¤ fanpelis - ¤ repelis + ¤ tvvip ¤ zonatorrent + ¤ maxipelis24 ¤ arreglos internos - ¤ Agradecimientos a @angedam, @chivmalev, @alaquepasa por colaborar en ésta versión + ¤ Agradecimientos a @angedam y @chivmalev por colaborar en ésta versión </news> <description lang="es">Navega con Kodi por páginas web para ver sus videos de manera fácil.</description> diff --git a/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po b/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po index 677d30a4..7b7cb310 100644 --- a/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po +++ 
b/plugin.video.alfa/resources/language/Spanish (Argentina)/strings.po @@ -4793,7 +4793,7 @@ msgstr "Verificación de los contadores de vídeos vistos/no vistos (desmarcar p msgctxt "#70527" msgid "My links" -msgstr 'Mis enlaces' +msgstr "Mis enlaces" msgctxt "#70528" msgid "Default folder" diff --git a/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po b/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po index 677d30a4..7b7cb310 100644 --- a/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po +++ b/plugin.video.alfa/resources/language/Spanish (Mexico)/strings.po @@ -4793,7 +4793,7 @@ msgstr "Verificación de los contadores de vídeos vistos/no vistos (desmarcar p msgctxt "#70527" msgid "My links" -msgstr 'Mis enlaces' +msgstr "Mis enlaces" msgctxt "#70528" msgid "Default folder" diff --git a/plugin.video.alfa/resources/language/Spanish/strings.po b/plugin.video.alfa/resources/language/Spanish/strings.po index ab09763a..7b7cb310 100644 --- a/plugin.video.alfa/resources/language/Spanish/strings.po +++ b/plugin.video.alfa/resources/language/Spanish/strings.po @@ -4793,7 +4793,7 @@ msgstr "Verificación de los contadores de vídeos vistos/no vistos (desmarcar p msgctxt "#70527" msgid "My links" -msgstr 'Mis enlaces' +msgstr "Mis enlaces" msgctxt "#70528" msgid "Default folder" @@ -4938,3 +4938,4 @@ msgstr "Buscar Similares" + From a5a6f55a1b494c858231b21f90d6989cead3416d Mon Sep 17 00:00:00 2001 From: Intel1 <luisriverap@hotmail.com> Date: Wed, 12 Sep 2018 16:18:20 -0500 Subject: [PATCH 34/34] v2.7.4 --- plugin.video.alfa/addon.xml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/plugin.video.alfa/addon.xml b/plugin.video.alfa/addon.xml index 8bb08a24..82140bc8 100755 --- a/plugin.video.alfa/addon.xml +++ b/plugin.video.alfa/addon.xml @@ -20,14 +20,14 @@ <news>[B]Estos son los cambios para esta versión:[/B] [COLOR green][B]Canales agregados y arreglos[/B][/COLOR] ¤ repelis ¤ thevid - ¤ vivio ¤ danimados + ¤ vevio ¤ danimados ¤ sipeliculas ¤ cinecalidad ¤ locopelis ¤ pelisipad ¤ divxtotal ¤ elitetorrent ¤ estrenosgo ¤ grantorrent ¤ mejortorrent1 ¤ newpct1 ¤ tvvip ¤ zonatorrent - ¤ maxipelis24 + ¤ maxipelis24 ¤ wikiseries ¤ arreglos internos ¤ Agradecimientos a @angedam y @chivmalev por colaborar en ésta versión