What's up everybody!

I was away for a long time, deep in projects and shit, but I am back and will continue doing some courses and more freebies.

Anyway, here is a brief domain crawler + email extractor I built with Node.js using the roboto library, which is cool and easy. Here we go, step by step:

1.- Create a dir.
2.- Go inside it.
3.- Install roboto and htmlstrip-native with npm.
4.- Create a crawler.js file inside that folder.
5.- Paste the source code into it:
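For step 3 the install is npm install roboto htmlstrip-native. Here is a minimal sketch of what the crawler script could look like; it assumes roboto's documented Crawler / parseField / item API and htmlstrip-native's html_strip, and the email regex and file writing are my own additions:

var fs = require('fs');
var roboto = require('roboto');
var html_strip = require('htmlstrip-native').html_strip;

// The domain comes from the command line: node crawler.js domain.com
var domain = process.argv[2];
var output = domain + '.txt';
var emailRegex = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
var seen = {};

var crawler = new roboto.Crawler({
    startUrls: ['http://' + domain + '/'],
    constrainToRootDomains: true // stay inside the domain we were given
});

// Reduce every crawled page to plain text
crawler.parseField('text', function (response, $) {
    return html_strip($('body').html() || '', {
        include_script: false,
        include_style: false,
        compact_whitespace: true
    });
});

// Every parsed page arrives here as an item
crawler.on('item', function (item) {
    var emails = (item.text || '').match(emailRegex) || [];
    emails.forEach(function (email) {
        if (!seen[email]) {
            seen[email] = true;
            console.log('found: ' + email);
            fs.appendFileSync(output, email + '\n');
        }
    });
});

crawler.crawl();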

Then just run it with node, like this:
node crawler.js domain.com

That's it; it will create a domain.com.txt file with all the emails.

Your console will show the crawl progress and every email it grabs along the way, and the grabbed emails end up in the output file.


Obviously, replace domain.com with whatever domain you want to crawl for emails.

IRC was at its best 10 years ago; other software has eclipsed it since, but many savvy people keep using it for proper communication with particular individuals. So, if you have never heard of or configured an eggdrop, this is something similar.

First you will need to install Node.js; I will assume you already have it, and npm along with it. Then just install the irc library:

npm install irc

Once you have that, you need to require the module and configure the connection: which channels the bot will join automatically, and the port, which does not have to be the regular IRC port; pick something more stable for a bot.
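A minimal sketch of that setup with the irc module; the server, nick and channel names here are just placeholders:

var irc = require('irc');

// Placeholder network, nick and channels; change them to your own.
var client = new irc.Client('irc.freenode.net', 'mycoolbot', {
    channels: ['#mychannel', '#anotherchannel'], // joined automatically on connect
    port: 6667,        // set whatever port your network or bouncer expects
    autoRejoin: true,  // rejoin the channels if the bot gets kicked
    debug: true        // print what is going on to your console
});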

As you can see, first we set the server, then the nick, and then inside the options we add the channels, the port and a debug mode so we can see what is going on in the console.

NOTE: You might get some errors when connecting, but try again and again. Once you manage to connect, it will keep the connection going even if you put your local machine to sleep. This is also meant for leaving it running on a VPS or shell so you have an always-online bot.

But now that you have your bot online, what's next? You need to know what the hell is going on in the channels you join or the PMs you receive, and for that we have listeners.

Before getting into that, I will go over the functions we have available, so we can use them when events hit the listeners:
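For reference, these are the kind of client methods from the irc module we will call from the listeners (the targets and text are just examples):

client.say('#mychannel', 'Hello everyone!');  // message a channel or a nick
client.action('#mychannel', 'waves');         // send a /me action
client.notice('somenick', 'psst, a notice');  // send a notice
client.join('#newchannel');                   // join another channel
client.part('#mychannel');                    // leave a channel
client.disconnect('bye!');                    // quit the server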

So now that you know the commands to use on the events, here are the listeners for this stuff:
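Here is a sketch of those listeners, wiring a few of the module's events to the functions above (the '!hello' trigger is only an example):

// React to messages in any channel the bot is in
client.addListener('message', function (from, to, text) {
    console.log(from + ' => ' + to + ': ' + text);
    if (text.indexOf('!hello') === 0) {
        client.say(to, 'Hello ' + from + '!');
    }
});

// React to private messages
client.addListener('pm', function (from, text) {
    client.say(from, 'I got your PM: ' + text);
});

// Log whoever joins a channel
client.addListener('join', function (channel, nick) {
    console.log(nick + ' joined ' + channel);
});

// Without an error listener, unhandled server errors can crash the bot
client.addListener('error', function (message) {
    console.log('irc error: ', message);
});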

 

Hey, what's up fellas. After opening the courses and scripts sections, I'm updating this blog with something new and different. How many times have you wanted to make a private gallery out of many different Tumblr profiles to browse offline? Maybe zero, but with this knowledge you will be able to do it with just one line of code!

What it is going to do is use curl to mass scrape and paginate a Tumblr profile: you get a list of image URLs, each one is processed by a while loop that runs curl again, and the files are saved in the folder where you run the command. But first…

You might need to install cURL on your server; don't worry, it's easy:
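On a Debian-style box that is presumably just:

sudo apt-get install curl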

And if that doesn't work, try doing an apt-get update and then install again:
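Again assuming an apt-based distro:

sudo apt-get update
sudo apt-get install curl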

Then, just like that, type curl -h to see the help; if it displays, you have it installed.

This is the main command you have to type inside the folder where you want to put all the downloaded images:

curl http://concept-art.tumblr.com/page/[1-7] | grep -oE 'src="[^"]*\.(png|jpg)"' | cut -d\" -f2 | while read l; do curl "$l" -o "${l##*/}"; done

Let's explain the code a little bit. The curl command requests http://concept-art.tumblr.com/page/1 and, because of the [1-7] range (a smaller and a bigger number separated by a dash), it paginates through pages 1 to 7. Each | pipes the output into the next command: grep searches for src attributes that end in .png or .jpg (you can allow more extensions by adding, say, |gif inside the parentheses), cut keeps just the URLs, and the while loop is what actually downloads everything, running curl on every image URL it found.

So make sure to keep the while loop at the end if you want to download; if you only want to see the image URLs that were grabbed, use the following command:

curl http://concept-art.tumblr.com/page/[1-7] | grep -oE 'src="[^"]*\.(png|jpg)"' | cut -d\" -f2

Here is a video that I recorded in order to show you how to do it:

Hey fellas, sorry for being absent for a long time; it was mainly a lot of work on other projects.

In this post I am going to teach you how to screen scrape using Node.js and jQuery-style selectors via cheerio. It's relatively easy; here is the code:
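A minimal sketch of that, using the request and cheerio modules (npm install request cheerio); the target URL and the anchor selector are only examples:

var request = require('request');
var cheerio = require('cheerio');

// Example target; swap in whatever page you want to scrape
var url = 'https://news.ycombinator.com/';

request(url, function (error, response, body) {
    if (error || response.statusCode !== 200) {
        return console.log('Request failed');
    }

    // cheerio gives us a jQuery-style $ over the downloaded HTML
    var $ = cheerio.load(body);

    // Grab every link's text and href, like you would with jQuery in the browser
    $('a').each(function () {
        var text = $(this).text().trim();
        var href = $(this).attr('href');
        if (text && href) {
            console.log(text + ' -> ' + href);
        }
    });
});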

 

It's a final decision… I don't give a fuck what you say, but I choose to have fun, and by fun I mean posting unrelated topics about different stuff around this coding niche. Funny stuff and shit.

Why? Because…

So get over it… You'll have fun, and I'm kicking this off with a great video of a server OP's normal day job in a datacenter in the late 90's.

This is a whole series created by Josh Weinberg

Hey fellas, I've got this little exercise I did: a mass URL shortener using the TinyURL API as an example. I also added a big list of APIs for shortening URLs, like Bit.ly, Tiny.cc and more.

You might not have any use for this, but it's good for education. We didn't need any cURL because the API is open, and with a simple file_get_contents we can see the result when we pass the URL to the API's main endpoint. Check it out:
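A minimal sketch of the idea; the helper name and the example URLs are mine, and the endpoint is TinyURL's public api-create.php:

<?php
// Shorten a URL through TinyURL's open API with a plain file_get_contents().
function tiny_url($url) {
    return file_get_contents('http://tinyurl.com/api-create.php?url=' . urlencode($url));
}

// Mass shorten a list of URLs (example URLs only)
$urls = array(
    'http://example.com/some/very/long/path?with=params',
    'http://example.com/another/long/url',
);

foreach ($urls as $url) {
    echo tiny_url($url) . PHP_EOL;
}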

So with this example you can create your own functions, either using file_get_contents for open APIs or cURL making POST requests with proper credentials to get the output.

So here is the promised list of URL shortener services. I don't know if some of them might help with cloaking or anything; you will have to try:

  • bit.ly
  • goo.gl
  • tinyurl.com
  • is.gd
  • cli.gs
  • pic.gd (TweetPhoto)
  • DwarfURL.com
  • ow.ly
  • yfrog.com
  • migre.me
  • ff.im
  • tiny.cc
  • url4.eu
  • tr.im
  • twit.ac
  • su.pr
  • twurl.nl
  • snipurl.com
  • BudURL.com
  • short.to
  • ping.fm
  • Digg.com
  • post.ly
  • Just.as
  • .tk
  • bkite.com
  • snipr.com
  • flic.kr
  • loopt.us
  • doiop.com
  • twitthis.com
  • htxt.it
  • AltURL.com
  • RedirX.com
  • DigBig.com
  • short.ie
  • u.mavrev.com
  • kl.am
  • wp.me
  • u.nu
  • rubyurl.com
  • om.ly
  • linkbee.com
  • Yep.it
  • posted.at
  • xrl.us
  • metamark.net
  • sn.im
  • hurl.ws
  • eepurl.com
  • idek.net
  • urlpire.com
  • chilp.it
  • moourl.com
  • snurl.com
  • xr.com
  • lin.cr
  • EasyURI.com
  • zz.gd
  • ur1.ca
  • URL.ie
  • adjix.com
  • twurl.cc
  • s7y.us (Shrinkify)
  • EasyURL.net
  • atu.ca
  • sp2.ro
  • Profile.to
  • ub0.cc
  • minurl.fr
  • cort.as
  • fire.to
  • 2tu.us
  • twiturl.de
  • to.ly
  • BurnURL.com
  • nn.nf
  • clck.ru
  • notlong.com
  • thrdl.es
  • spedr.com
  • vl.am
  • miniurl.com
  • virl.com
  • PiURL.com
  • 1url.com
  • gri.ms
  • tr.my
  • Sharein.com
  • urlzen.com
  • fon.gs
  • Shrinkify.com
  • ri.ms
  • b23.ru
  • Fly2.ws
  • xrl.in
  • Fhurl.com
  • wipi.es
  • korta.nu
  • shortna.me
  • fa.b
  • WapURL.co.uk
  • urlcut.com
  • 6url.com
  • abbrr.com
  • SimURL.com
  • klck.me
  • x.se
  • 2big.at
  • url.co.uk
  • ewerl.com
  • inreply.to
  • TightURL.com
  • a.gg
  • tinytw.it
  • zi.pe
  • riz.gd
  • hex.io
  • fwd4.me
  • bacn.me
  • shrt.st
  • ln-s.ru
  • tiny.pl
  • o-x.fr
  • StartURL.com
  • jijr.com
  • shorl.com
  • icanhaz.com
  • updating.me
  • kissa.be
  • hellotxt.com
  • pnt.me
  • nsfw.in
  • xurl.jp
  • yweb.com
  • urlkiss.com
  • QLNK.net
  • w3t.org
  • lt.tl
  • twirl.at
  • zipmyurl.com
  • urlot.com
  • a.nf
  • hurl.me
  • URLHawk.com
  • Tnij.org
  • 4url.cc
  • firsturl.de
  • Hurl.it
  • sturly.com
  • shrinkster.com
  • ln-s.net
  • go2cut.com
  • liip.to
  • shw.me
  • XeeURL.com
  • liltext.com
  • lnk.gd
  • xzb.cc
  • linkbun.ch
  • href.in
  • urlbrief.com
  • 2ya.com
  • safe.mn
  • shrunkin.com
  • bloat.me
  • krunchd.com
  • minilien.com
  • ShortLinks.co.uk
  • qicute.com
  • rb6.me
  • urlx.ie
  • pd.am
  • go2.me
  • tinyarro.ws
  • tinyvid.io
  • lurl.no
  • ru.ly
  • lru.jp
  • rickroll.it
  • togoto.us
  • ClickMeter.com
  • hugeurl.com
  • tinyuri.ca
  • shrten.com
  • shorturl.com
  • Quip-Art.com
  • urlao.com
  • a2a.me
  • tcrn.ch
  • goshrink.com
  • DecentURL.com
  • zi.ma
  • 1link.in
  • sharetabs.com
  • shoturl.us
  • fff.to
  • hover.com
  • lnk.in
  • jmp2.net
  • dy.fi
  • urlcover.com
  • 2pl.us
  • tweetburner.com
  • u6e.de
  • xaddr.com
  • gl.am
  • dfl8.me
  • go.9nl.com
  • gurl.es
  • C-O.IN
  • TraceURL.com
  • liurl.cn
  • MyURL.in
  • urlenco.de
  • ne1.net
  • buk.me
  • rsmonkey.com
  • cuturl.com
  • turo.us
  • sqrl.it
  • iterasi.net
  • tiny123.com
  • EsyURL.com
  • urlx.org
  • IsCool.net
  • twitterpan.com
  • GoWat.ch
  • poprl.com
  • njx.me

Try it yourself and let me know how it worked 😉

I'm done for today; I might come back with other stuff later. I'm on my PHP spree and might switch soon, so don't question why I haven't used other languages. I just feel like it.

 

cURL might be confusing for many, but it is a matter of learning the basics: how to configure the options and which method (GET, POST, PUT) you need. But…

What in the hell would you use cURL for?

  • Requests to APIs, sending parameters and stuff. Consume them easily.
  • Requests to websites, downloads and stuff.
  • Crawling websites, parsing the DOM and following links.
  • Many more…

With this you will be able to do a cURL request with just one line of code, by wrapping a pre-configured cURL setup in a function and using a switch inside it, so we know which configuration is needed for GET and which one for POST.
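A rough sketch of that kind of wrapper; the function name and defaults are my own, not from any particular library:

<?php
// One function, one switch: decide the cURL configuration based on the method.
function http_request($url, $method = 'GET', $params = array()) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

    switch (strtoupper($method)) {
        case 'POST':
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
            break;
        case 'GET':
        default:
            $query = $params ? '?' . http_build_query($params) : '';
            curl_setopt($ch, CURLOPT_URL, $url . $query);
            break;
    }

    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}

// And now a request really is one line of code:
echo http_request('http://example.com/api', 'GET', array('q' => 'test'));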

There are many guides on how to configure cURL, but there is one better than all the others:

http://php.net/manual/en/curl.examples-basic.php

There are also plenty of wrappers built by other coders, and now you understand how they make such easy wrappers, many using OOP and many others with functions and a switch (which I like much more).

List of wrappers I found:

With that, you will be able to code like a lazy fat cat:

Why would you want to do that?
In the first place, if you are a frontend designer, you might like to see the HTML and CSS from the comfort of your own setup. For frontend developers, web designers, GUI and UX folks, and everyone who worries about the visuals, this will be what the following images are for me:

In short, those 3 divine creations of humanity are pure ecstasy… OK, to the point.

Then you will be able to reuse that code and mix it with other downloads.
What do you have to do in order to do this?
Install wget. If you are on OSX, use

brew install wget

or if you are on Linux (Debian or others):

sudo apt-get install wget

After you have installed it, you just need to type one command:
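The exact URL depends on which landing page you want; as a sketch, with a placeholder URL it looks like this:

wget -p -k http://example.com/landing-page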

The -p flag will get you all the required elements to view the site correctly (CSS, images, etc). The -k flag will convert all links (including those for CSS and images) so you can view the page offline as it appeared online.

Then you will get the full landing page, ready to be opened with Dreamweaver so you can start editing; you can copy that, mate. Just don't brag about it, and show some love at [email protected]

 

Hey wazzaaaaaAAA!

Well fuckers, we have something pretty cool today for all the music lovers; even we the bots love music, and without it everything would be meaningless and life would lack color.

To the point, what we are going to do:

  1. Automatically search Google results using a dork (explained below) for mp3 files.
  2. Grab the first 5 results from Google.
  3. Download all the mp3 files located at those URLs.

A Google dork is a search query that will surface pages with folders full of specific filetypes and more; if you don't know how to search for them, do it this way:

“index of” + “mp3” + “radiohead” -html -htm -php

This will look for folders ("index of") that contain mp3 files with the keyword radiohead, and exclude html, htm and php pages.

The code is commented and well explained; if you have questions, leave a comment.

The code follows the same pattern we've seen in previous posts: we define the search URL using the dork, then loop over the first 5 results:
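A rough sketch of that flow; the helper logic and patterns here are mine, and scraping Google directly like this may get blocked, so treat it as illustration:

<?php
// Build the dork query, fetch the Google results page, pull out the first
// result links, then download every .mp3 linked from those pages.
$dork   = '"index of" + "mp3" + "radiohead" -html -htm -php';
$search = 'https://www.google.com/search?q=' . urlencode($dork);

$html = file_get_contents($search); // a cURL request with a User-Agent header is more reliable
if ($html === false) {
    die("Could not fetch the search results\n");
}

// Grab result URLs (very naive pattern, just for illustration)
preg_match_all('/href="(https?:\/\/[^"]+)"/i', $html, $matches);
$results = array_slice(array_unique($matches[1]), 0, 5);

foreach ($results as $page) {
    $body = @file_get_contents($page);
    if (!$body) continue;

    // Find every mp3 link inside the "index of" page
    preg_match_all('/href="([^"]+\.mp3)"/i', $body, $mp3s);

    // One folder per domain, as described below
    $dir = parse_url($page, PHP_URL_HOST);
    if (!is_dir($dir)) mkdir($dir);

    foreach ($mp3s[1] as $file) {
        $url = (strpos($file, 'http') === 0) ? $file : rtrim($page, '/') . '/' . $file;
        file_put_contents($dir . '/' . basename($file), file_get_contents($url));
    }
}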

The point of this script is to download all the mp3 files you want from your favorite artist. Don't worry, it separates the grabbed content into a specific folder for each domain it downloaded from, so you can browse each one, keep the mp3s you want and erase the rest.

What do you think? Ready to party?


 

Tonight I was watching Mr. Robot, season 2, episode 4, and it reminded me of the good old days when IRC was above any other social network. People met there in tons of channels for chats and discussions. There were also plenty of groups talking about all kinds of stuff, and the best crews were the ones with coders.

So, what do I like about IRC? There were plenty of cool things back then in 1995: the amazing eggdrops that you programmed to respond to different messages, and their TCLs, which were addons/plugins you could bolt onto your bot, many of them fun group games. There were also psyBNCs so you could stay always online with your shell.

To run this just do the following:

mkdir bitchx
nano install_bitchx.sh

And then just paste this code:
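The original script isn't reproduced here, but a rough sketch of what such an install script can look like on a Debian-style box (package names are assumptions; the source tarball URL is a placeholder you would fill in):

#!/bin/bash
# Rough sketch of a BitchX install script for a Debian-style system.
# Try the distro package first; if it isn't in the repos, build from source.
set -e

sudo apt-get update

if sudo apt-get install -y bitchx; then
    echo "BitchX installed from the package repositories."
else
    # Build dependencies (package names may vary between releases)
    sudo apt-get install -y build-essential libssl-dev libncurses5-dev wget
    # Placeholder: point this at a BitchX source tarball you trust
    wget -O bitchx.tar.gz "PUT_BITCHX_TARBALL_URL_HERE"
    tar xzf bitchx.tar.gz
    cd bitchx-*
    ./configure && make && sudo make install
fi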

Then you just have to:

chmod +x install_bitchx.sh
./install_bitchx.sh

And this will install everything you need to run BitchX; then just type this to run it:

BitchX

By the way, did you know you can connect to Elliot’s IRC session using this page: http://irc.colo-solutions.net/