So I am going to start coding my own custom script, both to learn how to do this properly and to keep it as my personal growth tool. Along the way I will include AI in the process.

This script is free and serves to promote my next private script. If you are interested, just subscribe to the newsletter so I can notify you when I have it in production for a lifetime price, including setup.

Here we go, so here is the script:

import requests

# Replace with your own access token
access_token = 'YOUR_ACCESS_TOKEN'

# Get the list of users you follow
url = 'https://api.instagram.com/v1/users/self/follows?access_token=' + access_token
response = requests.get(url)
follows = response.json()['data']

# Get the list of users who follow you
url = 'https://api.instagram.com/v1/users/self/followed-by?access_token=' + access_token
response = requests.get(url)
followers = response.json()['data']

# Create sets for quick comparison
follows_set = {f['username'] for f in follows}
followers_set = {f['username'] for f in followers}

# Find the users who don't follow you back
not_following_back = follows_set.difference(followers_set)

# Print the list of users who don't follow you back
print(not_following_back)
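Note that each call above only returns the first page of results. Here is a minimal sketch of how pagination could be handled, assuming the legacy API's pagination.next_url field (that legacy Instagram API has since been retired, so treat this as a pattern rather than a drop-in):

def fetch_all(url):
    """Follow pagination links until there are no more pages."""
    results = []
    while url:
        data = requests.get(url).json()
        results.extend(data.get('data', []))
        # the legacy API exposes the next page (if any) under pagination.next_url
        url = data.get('pagination', {}).get('next_url')
    return results

follows = fetch_all('https://api.instagram.com/v1/users/self/follows?access_token=' + access_token)
followers = fetch_all('https://api.instagram.com/v1/users/self/followed-by?access_token=' + access_token)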

BTW, the feature image is from the crazy MidJourney.

Botting is a hobby, and automation is my passion. This is therapy that helps me keep improving, so I will continue with this great journey.

With many upgrades in the industry, AI is doing amazing things. I will be jumping deeply into these topics and creating content for this side project.
In the meantime, I will continue with my educational project Cerebro Digital, which is hard to monetize because its content doesn’t go viral the way entertainment topics do.

So far I’ve created two SaaS products: one is “completed” and the other is a WIP.

  • LeadGen.tools
  • SocialWizard.app

This project of growth-hacking SaaS tools is linked to the Grow & Thrive project, where I gather insights from customers to create other projects. My main intention is to organize everything into a structured ecosystem that can also give work to other students.

English is not my main language, but I have it under control. You will learn in real time with me; you will see how I search for answers so you learn to find your own answers faster.

What will I start doing first?

Since I started with uBot courses inside the uBot community, I will expand into teaching how to create bots and use APIs at the same time. I will also continue selling source code that gets things done for different use cases.

Under these circumstances, custom projects will need to stop. I will no longer take on private and custom projects that I can’t publish here. I’d rather focus on keeping things private under courses + source code, instead of working as a freelancer for only one guy or company. I’m done with that shit.

So about my plans for the future…

I will start digging into these topics:

  • AI and different libraries for Python like TensorFlow.
  • Elastic Cloud + Automation systems.
  • Image detection and classification.
  • Deep learning on data to improve answers.
  • Browser sessions with different preset cookies, so I can send commands and get responses back like an API.
  • Continuing my IoT research, but with microcomputers instead of microcontrollers such as the ESP8266.

I’m particularly excited to work on my next SaaS project, which will involve a device and sensors that can be connected to any Raspberry Pi and send data to the network to get AI insights online.

But first things first: I will finish SocialWizard.app and hire a team to help me sell more LeadGen.tools memberships and lifetime licenses. So if you are interested, let me know.

What’s up guys, here I present a way of scraping all of a user’s tweets into a CSV. You can feed it into an AI or whatever you want; it’s your choice.

First, you will need to install the tweepy library for Python 3.
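A quick way to do that is with pip:

pip3 install tweepy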

Then you need to get your API credentials by making an app on Twitter:

#Twitter API credentials 
consumer_key = "" 
consumer_secret = "" 
access_key = "" 
access_secret = ""

Here is the full source:
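What follows is a minimal sketch of that script, assuming the classic tweepy user_timeline + max_id pagination approach (the helper name get_all_tweets, the 200-tweet page size and the CSV layout are my own choices, so adapt them as you like):

import csv
import tweepy

# Twitter API credentials (fill in the values from your app)
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""

def get_all_tweets(screen_name):
    # authorize with the credentials above
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    all_tweets = []
    # user_timeline returns at most 200 tweets per call (Twitter caps the history at ~3200)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)

    while new_tweets:
        all_tweets.extend(new_tweets)
        # ask for the next page: everything older than the last tweet we already have
        oldest = all_tweets[-1].id - 1
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

    # dump id, date and text into a CSV named after the user
    with open(f'{screen_name}_tweets.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['id', 'created_at', 'text'])
        for tweet in all_tweets:
            writer.writerow([tweet.id, tweet.created_at, tweet.text])

if __name__ == '__main__':
    get_all_tweets('twitter')  # replace with the username you want to scrape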

What’s up everybody!

I was away for a long time, deep in projects and shit, but I am back and will continue doing some courses and more freebies.

Anyway, here is a brief domain crawler + email extractor I did with Node.js using the roboto library, which is cool and easy. So here I go, step by step:

1.- Create a dir.
2.- Go inside of it.
3.- Install roboto and htmlstrip-native with npm (the command is shown right after this list).
4.- Create a crawler.js file inside that folder you created.
5.- Paste the source code into it.
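For step 3, the install command would be something like:

npm install roboto htmlstrip-native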

Then just run the command using node like:
node crawler.js domain.com

That’s it; it will create a domain.com.txt with all the emails.

Your console will look like this:

And emails grabbed like this:


Obviously, change domain.com to any domain you want to crawl for emails.

IRC was the best 10 years ago; other software has eclipsed it now, but many savvy people keep using it for proper communication with special individuals. So, if you have never heard of or configured eggdrops, this is something similar.

So first you will need to install Node.js (I will assume you already have it, along with npm), then just install the irc library:

npm install irc

Once you have that, you need to require the module and then set up the connection config: which channels the bot will join automatically, and the port, which is not the regular IRC port but something more stable for a bot.

var irc = require('irc');
var bot = new irc.Client('chat.freenode.net','w0bot', {
    channels: ['#botplace', '#w0b'],
    port: 8001,
    debug: true,
    userName: 'wizard', // on the host is like wizard@host/ip
    realName: 'Im a bot from Wizard of Bots ;)',  // real name on whois
    showErrors: true, 
    autoRejoin: true, // auto rejoin channel when kicked
    autoConnect: true, // persistence to connect
});

So as you can see, first we set the server, then the nick, and then we open the options object to add the channels, the port, and debug mode so we can see what is going on in the console.

NOTE: You might get some errors when connecting, but try again and again. Once you are able to connect, it will keep reconnecting even if your localhost goes to sleep. This is meant to be left running on a VPS or shell so you have an always-online bot.

But now that you have your bot online, what’s next? You need to know wtf is going on in the channels you join or the PMs you receive, and for that we have listeners.

Before getting into that, I will explain the functions we have available so we can use them when events hit the listeners:

bot.join('#yourchannel yourpass'); // this will join a channel with a pass (or no pass)
bot.part('#yourchannel'); // part a channel
bot.say('#yourchannel', "I'm a bot for w0b!"); // send a message to a channel
bot.whois(nickname, function (whois) {
    console.log(whois); // you need this callback to log the results once the whois finishes
});
bot.notice(target, "your message for the notice"); // target is either a nickname or a channel
bot.send('MODE', '#yourchannel', '+o', 'yournick'); // gives OP to yournick in #yourchannel

So now that you know the commands to use on the events, here are the listeners for this stuff:

bot.addListener('pm', function (from, message) {
    console.log(from + ' => ME: ' + message); // when you get a PM you log into console
    bot.say(from, 'Hello I am a bot from Wizard of Bots '); // Also you can automatically respond to that message with the command say
});
bot.addListener('message#yourchannel', function (from, message) {
    console.log(from + ' => #yourchannel: ' + message); // if someone sends a message to a specific channel
});
bot.addListener('join', function(channel, who) {
    console.log('%s has joined %s', who, channel);
    bot.say(who, 'Hello and welcome to ' + channel); // When someone joins a channel automatically welcome him
});
bot.addListener('kick', function(channel, who, by, reason) {
    console.log('%s was kicked from %s by %s: %s', who, channel, by, reason); // when someone is kicked, log to the console what happened
});
bot.addListener('part', function(channel, who, reason) {
    console.log('%s has left %s: %s', who, channel, reason); // when someone parts
    // you could also send a PM to this person to convince them to stay
});
bot.addListener('message', function(from, to, message) {
    if(  message.indexOf('Know any good jokes?') > -1
      || message.indexOf('good joke') > -1
    ) {
        bot.say(to, 'Knock knock!');
    }
});  // and like in other eggdrops, if you say those words, it will answer Knock knock!

 

Hey, what’s up fellas. After opening the courses and scripts sections, I’m updating this blog with something new and different. How many times have you wanted to make a private gallery from different Tumblr profiles to watch offline? Maybe zero, but with this knowledge you will be able to do it with just one line of code!

So what it is going to do is use curl to mass scrape and paginate a Tumblr profile: you get a list of image URLs that are processed with a while loop, and each one is saved into the folder where you run the command. But first…

You might need to install cURL on your server; don’t worry, it’s easy:

sudo apt-get install curl

And if this doesn’t work, try doing an apt-get update and then:

sudo apt-get install libcurl3 php5-curl

Then, just like that, type curl -h to see the help; if it displays, you have it installed.

This is the main command you have to type inside the folder where you want to put all the downloaded images:

curl http://concept-art.tumblr.com/page/[1-7] | grep -o 'src="[^"]*.[png-jpg]"' | cut -d\" -f2 | while read l; do curl "$l" -o "${l##*/}"; done

Let’s explain the code a little bit: the curl command requests http://concept-art.tumblr.com/page/1, starting with page 1, and the [1-7] range (a bigger number on the right, separated by a dash) makes curl request multiple pages, doing the pagination. Each | pipes the output into the next command: grep searches for src attributes whose value ends with something like png or jpg (the [png-jpg] part is a loose character class, so you can tweak it to catch gif as well). Then cut pulls out just the URLs, and what actually downloads everything is the while loop, which runs a curl on each image URL it finds.

So make sure to add the while loop at the end if you want to download. If you only want to see the image URLs that were grabbed, use the following command:

curl http://concept-art.tumblr.com/page/[1-7] | grep -o 'src="[^"]*.[png-jpg]"' | cut -d\" -f2

Here is a video that I recorded in order to show you how to do it:

Hey fellas, sorry for being absent for a long time; it was mainly a lot of work on other projects.

In this post I am going to teach you how to screen scrape using Node.js and jQuery-style selectors (cheerio). It’s relatively easy; here is the code:

var request = require('request'); // we need request library
var cheerio = require('cheerio'); // and cheerio library/ JQuery
// set some defaults
req = request.defaults({
  jar: true,                 // save cookies to jar
  rejectUnauthorized: false, 
  followAllRedirects: true   // allow redirections
});
// scrape the page
req.get({
    url: "http://www.whatsmyip.org/",
    headers: {
        'User-Agent': 'Google' // You can put the user-agent that you want
     }
  }, function(err, resp, body) {
  
  // load the html into cheerio
  var $ = cheerio.load(body);
  
  // get the data and output to console
  console.log( 'IP: ' + $('#ip').text() );  //scrape using CSS selector
  console.log( 'Host: ' + $('#hostname').text() );
  console.log( 'User-Agent: ' + $('#useragent').text() );
});
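To try it, install the two libraries and run the file with node (the file name scrape.js is just my choice):

npm install request cheerio
node scrape.js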

 

It’s a final decision… I don’t give a fuck what you say, but I choose to have fun, and by fun I mean posting off-topic stuff that is still related to this coding niche. Funny stuff and shit.

Why? Because…

So get over it… You’ll have fun, and I’ll start with this great video of a server op’s normal day job in a datacenter in the late 90s.

This is a whole series created by Josh Weinberg.

Hey fellas, I’ve got this little exercise I did: a mass URL shortener using the TinyURL API as an example. I also added a big list of URL-shortening services like Bit.ly, Tiny.cc and more.

You might not have any use for this, but it’s good for education. We didn’t use cURL because the API is open: with a simple file_get_contents we get the result when we pass the URL to the API endpoint. Check it out:

<?php
// we first create the function to not repeat ourselves
function tinyurl($longUrl) {
	// We use the TinyURL API (urlencode the long URL so special characters survive)
	$short_url = file_get_contents('http://tinyurl.com/api-create.php?url=' . urlencode($longUrl));
	return $short_url; // and obviously return the data so you can assign it to a variable
}
//we specify the file which is in same folder of this script
// make sure to paste line by line all the URLs in links.txt
$filename = 'links.txt';
// we open the file into a variable, dropping newlines and empty lines
$links = file($filename, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$links_bucket = array(); // create our array beforehand
// we iterate over the $links array we got from file()
foreach($links as $link) {
	// time to use the function that returns the shortened URL  
	$tinyURL = tinyurl($link);
	// and we push into the array that we will var_dump
	array_push($links_bucket, $tinyURL);
}

var_dump($links_bucket);
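Save it and run it from the command line (the file name shortener.php is just an example):

php shortener.php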

So with this example you can create your own functions, either using file_get_contents for open APIs or cURL making POST requests with the proper credentials to get the output.

So here is the promised list of URL-shortener services. I don’t know if some of them might help with cloaking or anything; you will have to try it:

  • bit.ly
  • goo.gl
  • tinyurl.com
  • is.gd
  • cli.gs
  • pic.gd (TweetPhoto)
  • DwarfURL.com
  • ow.ly
  • yfrog.com
  • migre.me
  • ff.im
  • tiny.cc
  • url4.eu
  • tr.im
  • twit.ac
  • su.pr
  • twurl.nl
  • snipurl.com
  • BudURL.com
  • short.to
  • ping.fm
  • Digg.com
  • post.ly
  • Just.as
  • .tk
  • bkite.com
  • snipr.com
  • flic.kr
  • loopt.us
  • doiop.com
  • twitthis.com
  • htxt.it
  • AltURL.com
  • RedirX.com
  • DigBig.com
  • short.ie
  • u.mavrev.com
  • kl.am
  • wp.me
  • u.nu
  • rubyurl.com
  • om.ly
  • linkbee.com
  • Yep.it
  • posted.at
  • xrl.us
  • metamark.net
  • sn.im
  • hurl.ws
  • eepurl.com
  • idek.net
  • urlpire.com
  • chilp.it
  • moourl.com
  • snurl.com
  • xr.com
  • lin.cr
  • EasyURI.com
  • zz.gd
  • ur1.ca
  • URL.ie
  • adjix.com
  • twurl.cc
  • s7y.us (Shrinkify)
  • EasyURL.net
  • atu.ca
  • sp2.ro
  • Profile.to
  • ub0.cc
  • minurl.fr
  • cort.as
  • fire.to
  • 2tu.us
  • twiturl.de
  • to.ly
  • BurnURL.com
  • nn.nf
  • clck.ru
  • notlong.com
  • thrdl.es
  • spedr.com
  • vl.am
  • miniurl.com
  • virl.com
  • PiURL.com
  • 1url.com
  • gri.ms
  • tr.my
  • Sharein.com
  • urlzen.com
  • fon.gs
  • Shrinkify.com
  • ri.ms
  • b23.ru
  • Fly2.ws
  • xrl.in
  • Fhurl.com
  • wipi.es
  • korta.nu
  • shortna.me
  • fa.b
  • WapURL.co.uk
  • urlcut.com
  • 6url.com
  • abbrr.com
  • SimURL.com
  • klck.me
  • x.se
  • 2big.at
  • url.co.uk
  • ewerl.com
  • inreply.to
  • TightURL.com
  • a.gg
  • tinytw.it
  • zi.pe
  • riz.gd
  • hex.io
  • fwd4.me
  • bacn.me
  • shrt.st
  • ln-s.ru
  • tiny.pl
  • o-x.fr
  • StartURL.com
  • jijr.com
  • shorl.com
  • icanhaz.com
  • updating.me
  • kissa.be
  • hellotxt.com
  • pnt.me
  • nsfw.in
  • xurl.jp
  • yweb.com
  • urlkiss.com
  • QLNK.net
  • w3t.org
  • lt.tl
  • twirl.at
  • zipmyurl.com
  • urlot.com
  • a.nf
  • hurl.me
  • URLHawk.com
  • Tnij.org
  • 4url.cc
  • firsturl.de
  • Hurl.it
  • sturly.com
  • shrinkster.com
  • ln-s.net
  • go2cut.com
  • liip.to
  • shw.me
  • XeeURL.com
  • liltext.com
  • lnk.gd
  • xzb.cc
  • linkbun.ch
  • href.in
  • urlbrief.com
  • 2ya.com
  • safe.mn
  • shrunkin.com
  • bloat.me
  • krunchd.com
  • minilien.com
  • ShortLinks.co.uk
  • qicute.com
  • rb6.me
  • urlx.ie
  • pd.am
  • go2.me
  • tinyarro.ws
  • tinyvid.io
  • lurl.no
  • ru.ly
  • lru.jp
  • rickroll.it
  • togoto.us
  • ClickMeter.com
  • hugeurl.com
  • tinyuri.ca
  • shrten.com
  • shorturl.com
  • Quip-Art.com
  • urlao.com
  • a2a.me
  • tcrn.ch
  • goshrink.com
  • DecentURL.com
  • decenturl.com
  • zi.ma
  • 1link.in
  • sharetabs.com
  • shoturl.us
  • fff.to
  • hover.com
  • lnk.in
  • jmp2.net
  • dy.fi
  • urlcover.com
  • 2pl.us
  • tweetburner.com
  • u6e.de
  • xaddr.com
  • gl.am
  • dfl8.me
  • go.9nl.com
  • gurl.es
  • C-O.IN
  • TraceURL.com
  • liurl.cn
  • MyURL.in
  • urlenco.de
  • ne1.net
  • buk.me
  • rsmonkey.com
  • cuturl.com
  • turo.us
  • sqrl.it
  • iterasi.net
  • tiny123.com
  • EsyURL.com
  • urlx.org
  • IsCool.net
  • twitterpan.com
  • GoWat.ch
  • poprl.com
  • njx.me

Try it yourself and let me know how it worked 😉

I’m done for today; I might come back with other stuff later. I’m on a PHP spree, though that might change soon, so don’t question why I haven’t used other languages. I just feel like it.

 

cURL might be confusing for many, but it is a matter of learning the basics of how to configure the options and knowing which method to use (GET, POST, PUT). But…

What in the hell would you use cURL for?

  • Requests to APIs, sending parameters and stuff. Consume them easily.
  • Requests to websites, downloads and stuff.
  • Crawl websites, parsing the DOM and following links.
  • Many more…

// the best function so you stop wasting time.
function curlwiz($uri, $method='GET', $data=null, $curl_headers=array(), $curl_options=array()) {
  // default curl options which will almost be static, you can modify if you want
  $default_curl_options = array(
    CURLOPT_SSL_VERIFYPEER => false,
    CURLOPT_HEADER => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT => 3,
  );
  // you can set the default headers into this array, usually you dont need them.
  $default_headers = array();

  // trim the method passed and convert it to uppercase
  $method = strtoupper(trim($method));
  $allowed_methods = array('GET', 'POST', 'PUT', 'DELETE'); // array with allowed methods

  if(!in_array($method, $allowed_methods)) // if the input method is not in the allowed_methods array, throw an error
    throw new \Exception("'$method' is not a valid HTTP method for cURL.");

  if(!empty($data) && !is_string($data))
    throw new \Exception("Invalid data for cURL request '$method $uri'");

  // init
  $curl = curl_init($uri);

  // apply default options
  curl_setopt_array($curl, $default_curl_options);

  // apply method specific options
  switch($method) {
    case 'GET':
      break;
    case 'POST':
      if(!is_string($data))
        throw new \Exception("Invalid data for cURL request '$method $uri'");
      curl_setopt($curl, CURLOPT_POST, true);
      curl_setopt($curl, CURLOPT_POSTFIELDS, $data);
      break;
    case 'PUT':
      if(!is_string($data))
        throw new \Exception("Invalid data for cURL request '$method $uri'");
      curl_setopt($curl, CURLOPT_CUSTOMREQUEST, $method);
      curl_setopt($curl, CURLOPT_POSTFIELDS, $data);
      break;
    case 'DELETE':
      curl_setopt($curl, CURLOPT_CUSTOMREQUEST, $method);
      break;
  }

  // apply user options
  curl_setopt_array($curl, $curl_options);

  // add headers
  curl_setopt($curl, CURLOPT_HTTPHEADER, array_merge($default_headers, $curl_headers));

  // parse result from curl
  $raw = rtrim(curl_exec($curl));
  //var_dump($raw);
  $lines = explode("\r\n", $raw); // we explode the curl response line by line
  //var_dump($lines); // uncomment to debug the raw response lines
  $headers = array(); 
  $content = '';
  $write_content = false;
  if(count($lines) > 3) {
    foreach($lines as $h) {
      if($h == '')
        $write_content = true;
      else {
        if($write_content)
          $content .= $h."\n";
        else
          $headers[] = $h;
      }
    }
  }
  $error = curl_error($curl);

  curl_close($curl);

  // return
  return array(
    'raw' => $raw,
    'headers' => $headers,
    'content' => $content,
    'error' => $error
  );
}

$response = curlwiz('http://facebook.com', 'GET'); // $response['headers'], $response['content'], $response['error']

With this you will be able to easily do cURL requests with just one line of code, thanks to a pre-configured cURL setup inside a function that uses a switch. That way we can split out which configuration is needed for GET and which for POST.

There are many guides on how to configure cURL, but there is one better than all the others:

http://php.net/manual/en/curl.examples-basic.php

There are also plenty of wrappers built by other coders, but now you understand how they make wrappers so easy to use: many with OOP and many others with functions and a switch (which I like much more).

List of wrappers I found:

With that, you will be able to code like this lazy fat cat: