Network Scanning: you have the right to find the hole. Do you?

Answer this question: 

  • Do you have the right to scan networks that you do not have control over?


You might think "No" at first, but then you consider all the rules and you might say,

"Well, if I scan your internet facing equipment for known issues on known ports 0 to 65535 I don't see a problem."

You might not see a problem today, but since about 2005, scanning random IP address blocks has become a crazy popular way of doing research.

Let me offer a short history lesson. 

Back in the day, you might have seen footer notes like, 

"This site is monitored 24/7 and all activity recorded. Unauthorized activity will be reported to the authorities". 

And then we had our DOD and GOV sites with warnings so threatening you just didn't even want to go online.

Basically, you scanned nothing unless you had express permission to scan.

Today, for some reason, the number of scanning sites has jumped. Sure, you might think they are good, but the way they scan looking for the bad guys is about as efficient as numbering each grain of sand on the Gulf Shores. Sure, you can do it, but with the next wave that comes up on the beach, that scanned and numbered grain of sand has just changed.

In my opinion these scanning sites do no real good when you balance the resources with the actual results.

To make this a good thing, and not be an "all talk" type like so many self-proclaimed techies online, I have a little challenge.

You need to know the background, what I had to do to protect my networks and what I did to find work, to understand why I am 100% against unauthorized block scanning.

Block scanning means I have more logs to read, and when I see non-standard port scans I research who is doing them. This takes time, my time, and my time is worth more than yours in my opinion. When a scanner hits my site day after day after day, I find it takes less time to block the host's CIDR block. It doesn't bother me or my network; I host and offer services to clients, so blocking other hosts only benefits me.

So when you set up to scan, be sure you use your client network; you are less likely to be blocked and you won't stand out. Actually, the bots do this and it's very effective, so scanning my hosted services and claiming to look for botnets isn't really truthful if you look at a typical botnet.

I'm getting a bit ahead of myself; I was going to tell you how I scanned and how I found new clients back in the day.

Let's walk through the simple process I used for every connection. I wrote the program and hosted it on my local servers as well as community shared servers. It's a simple but very long VB ASP script.

When you connect, I check your "Connection IP" against my internal "IP List"; if you're not banned, you're allowed access to the full site.

I developed this code to restrict access to some old-school eCommerce sites. I used ARIN tables, updated daily, to restrict access by country. I had one merchant that didn't sell overseas, so they didn't allow any overseas connections to the site. A simple little lookup-and-compare script that worked very well.

I then started monitoring query strings (for SQL injection) and port connections. It was like having a honeypot server without the setup and administration.

One good example was spam botnets: when you came to the site, you had to answer a simple question before you could send a message. Scripts had difficulty answering the question and would just send the message anyway. I had a couple of other scripts doing checks and balances, but in the end very few humans missed the answer.
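The gate is a one-field check, sketched here in Python (the question, answer, and field name are made-up examples, not the ones actually used):

```python
# Hypothetical challenge question and expected answer. A human fills in
# the "challenge" field; a script blindly posts the form without it.
CHALLENGE_QUESTION = "What color is the sky on a clear day?"
CHALLENGE_ANSWER = "blue"

def message_allowed(form: dict[str, str]) -> bool:
    """Accept the message only if the challenge field was answered."""
    answer = form.get("challenge", "").strip().lower()
    return answer == CHALLENGE_ANSWER
```

Anything that posts without (or with a wrong) answer gets dropped, and since the bot submitted the message anyway, it flags itself in the process.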

If a bot was detected, I had a program code-named XCtM, which stands for Xtreme Computer Tracking and Monitoring.
If you triggered the "Bot" function of the application, it would do a quick proxy port scan based on what was hot for scanners back then.
If it found you were on a proxy, it would then record your actions and log your pattern.

The trigger could have been anything from a simple automated credit card validation check at checkout to SEO spam. In the end, scripts had patterns, and it was not that difficult to program a script to detect those patterns.

The script also detected simple things like the Nimda virus that infected IIS servers, as well as XSS (Cross-Site Scripting) and other typical bad-scanner behavior.

When the Nimda virus hit, it was easy to detect because the connecting IP was always an active IIS server. A simple script was created to detect a specific code line in the HTML page and, bang, you found Nimda-infected servers.
I had servers from the US Forestry Department to Microsoft field offices all the way to local businesses I could walk to.
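A Python sketch of that check might look like the following. Nimda's injected JavaScript referenced a "readme.eml" payload, and that string is used as the signature here; the original script's exact signature line isn't given in the text, so treat this as an assumption.

```python
import urllib.request

# The string Nimda's injected JavaScript referenced. Finding it in a
# host's default page was enough to flag an infected IIS server.
NIMDA_SIGNATURE = "readme.eml"

def page_contains_signature(html: str) -> bool:
    """True if the fetched page carries the injected Nimda line."""
    return NIMDA_SIGNATURE in html.lower()

def looks_nimda_infected(ip: str, timeout: float = 5.0) -> bool:
    """Fetch the default page from the connecting IP and check it."""
    try:
        with urllib.request.urlopen(f"http://{ip}/", timeout=timeout) as resp:
            html = resp.read(65536).decode("latin-1", errors="replace")
    except OSError:
        return False  # unreachable or not serving HTTP
    return page_contains_signature(html)
```

The key point is that the suspect IPs come from inbound connections, not from sweeping blocks.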

That's what I used the "Collected Data" for: I called each business, offered my Nimda virus removal service over the phone, and made a few bucks that day. Most of them didn't even know their server was infected. Some had Norton AV that deleted the IIS pages instead of just removing the added line and the virus.

Why is all this important?

It's because I didn't have to scan "random" IP blocks to gather my information. I let the bots and viruses come to me.

Now, this website doesn't run the code, but I have a few that do. It's my live scan detection, but not a true honeypot, because I don't let you in; I make you fight to get monitored, and then I check to see if I can get some data on your network.

It's like this: you come by and visit me doing things against my online policy, and you think that because you have an online policy stating you can do this, I have to agree?

Scanning blocks of IPs only gets your CIDR listed in block lists and is not, in my opinion, productive in any way.

Then there's the time to read all your scans, which I will list in a separate article. All I have to do is copy one 24-hour period of more than 100 scans from one scanning company to prove it's a waste of resources on my end and really bad for Hurricane Electric, Inc. hosting and others.

By the way, weren't you located in Chicago 15 years or so ago? 

Now that I've presented the issue and how I feel it should be handled, let me detail the process of testing.

Like I said, I use my WWW sites for monitoring bad bots and for collecting information.

That means the bots come to me and I don't have to search them out. That saves me bandwidth and the risk of scanning someone's block, like Murray's here, and getting posted as a bad programmer. Ouch. Let's make sure you think long and hard before you scan, scan, and scan random blocks.

  • Monitor Incoming connections only

If you have managed WWW servers, you know the patterns scanners, spammers, and scammers take. Here are my funny, yet-they-do-it-all-the-time patterns.

  1. Lands on a product page with a product costing no more than $10.00.
  2. No referring website.
  3. Adds the product to the shopping cart in under 2 seconds from the time they hit the site to the time they were in the shopping cart.
  4. Processes the check-out.

This all happens in under 4 seconds total. I measured the average time a human takes to add a product and check out; it's not 4 seconds, so that was a trigger.
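The timing trigger above can be sketched in a few lines of Python. The event-log shape and the 4-second floor below are illustrative stand-ins for whatever the real session tracking recorded:

```python
# Hypothetical per-session event log of (event_name, seconds) pairs.
HUMAN_MIN_SECONDS = 4.0  # measured floor; anything faster gets flagged

def is_bot_checkout(events: list[tuple[str, float]]) -> bool:
    """Flag a session that lands, carts, and checks out too fast."""
    times = dict(events)
    required = ("landed", "added_to_cart", "checked_out")
    if not all(step in times for step in required):
        return False  # incomplete session, nothing to judge
    return times["checked_out"] - times["landed"] < HUMAN_MIN_SECONDS
```

Combined with the no-referrer and low-price signals, a sub-4-second checkout is a near-certain script.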

The spammers' pattern was the same: hidden fields named "email" or "name" always worked. I never used standard form names in most of my code. Sometimes I do, but when I know a form is a spam-bot magnet, I add a few hidden fields.

If a hidden field is filled in, or a field is entered as if by a simple CALL function, I add the IP to my scan-back list.
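The hidden-field trap can be sketched like this in Python. The field names and the in-memory `scan_back_list` are hypothetical; the real script was ASP and fed a proper list:

```python
# Hidden honeypot fields a human never sees; bots auto-fill them because
# "email" and "name" look like standard form names.
HONEYPOT_FIELDS = {"email", "name"}

scan_back_list: set[str] = set()  # IPs queued for a follow-up check

def handle_submission(ip: str, form: dict[str, str]) -> bool:
    """Return True if accepted; honeypot hits get queued and rejected."""
    if any(form.get(field) for field in HONEYPOT_FIELDS):
        scan_back_list.add(ip)  # filled a field no human could see
        return False
    return True
```

The visible form would use non-standard names (say, `contact_email`), so only scripts guessing at standard names trip the trap.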

Here's where Live Scanning works better than Random Block Scanning. 

When a spammer hits, at detection time I launch a return bot-mail attempt on them. That often tells me what I need to know, like whether it was a proxy or not. The system on the IP might only be active for that one hit (IP jumping), or it might be part of a botnet chain that hit me over and over in the same minute, hour, or day.

  • Scan for proxies and open ports during the time the IP is connected to your site.
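A minimal Python sketch of that live scan-back, run only while the suspect IP is connected. The port list is a guess at what was "hot" for open proxies; substitute whatever your own logs show:

```python
import socket

# Commonly abused proxy ports (an assumption, not the original list).
DEFAULT_PROXY_PORTS = (1080, 3128, 8080, 8000)

def open_proxy_ports(ip: str, ports=DEFAULT_PROXY_PORTS,
                     timeout: float = 1.0) -> list[int]:
    """Return the subset of `ports` accepting TCP connections on `ip`."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                found.append(port)
        except OSError:
            continue  # closed, filtered, or timed out
    return found
```

The difference from block scanning is the target selection: one IP that already misbehaved on your site, checked once, right then.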

Now, the data crunching begins. 
While many think it's important to list scanned IPs online (University of Michigan), I feel it's kind of stupid to show what you scanned, when you scanned, and who you scanned. What purpose does this public record serve? Who's paying for the storage and the handling?

Again, in my opinion, wasted resources. Why publish every IP address that was scanned when you claim to only be interested in knowing who's on first with the botnet and what's on second with the open ports?

If you published only negative findings, your resource use would be lower and the online community would be better served. I would use it as a call list and offer to correct any issue found for the company or person. But that's not going to happen. I wonder if tuition would be lower if resources were not spent like this?

This brings me to my next tip.

  • Publish only discovered IPs and list what you discovered.

Let me tell you about a Pennsylvania botnet operator.

This person ran 7 computers and offered typical spam mail.

The detection began normally: spam mail received, then I would log the connection IP and a few other habits and list them publicly.

After about 4 weeks I started seeing a pattern with the network. Similar to the Brazilian scan: every 2 weeks, same IPs, same pattern, same user agent.

Long story short, I found a connection from a UK IP hit the site, then the spam connection came within a second, and this pattern repeated every day. I blocked the UK IP and no more scans. I listed the IP, and the next thing you know I have a connection from Comcast reading my report on the UK IP. Then the next day, same time, I get a different IP, then the same IPs sending spam.

OK, so the person found their control IP was discovered, so they changed it, but the pattern didn't change, so I listed the new IP. The next thing I know, someone (between 2:00 and 3:00 every Sunday) was reading the IPs listed in my blocked reports. Again, the next day, a different IP for the controller.

After the third time, I wrote a note detailing the botnet. That next Sunday, the person read the IP-blocked report with my full detailed note on the botnet structure and their Comcast IP address, dates, and times.

I never saw that botnet return to my site again. But I have seen others with the same pattern.

They aren't that smart, and the limitations of their design make them far from impossible to identify.

  • Don't go crazy with your scanning.

Before whitehats were told they would go to jail if they did something the government didn't like, they would often read reports of Nigerian scams and the like.

My eCommerce days provided more information about Nigeria than I would have ever thought. I set up a special reporting section, and it was like candy to those we once called whitehats. I guess they came, read the reports, and did their thing. I never saw networks change that fast before proper mapping and documenting.
Nigerian scams often didn't use proxy connections, but they would very often use Sky UK satellite, and that was almost always connected to a legitimate local ISP.

How did the whitehats handle this type of scammer? (Don't laugh)

  • Emailed the Network ISP Provider and sent a detailed report of the scam.

Believe it or not, many ISPs would shut down a site or a network if you could prove they did something wrong, like fraud. You know, that old-fashioned credit card fraud thing a few idiots do online.

The script I wrote to send block-order reports was code-named "Xtreme PayBACK". For every fraud attempt, the system would schedule 5 emails to the registered admin. If they bounced, it would walk up the chain using data from ARIN or public sources and email the report until something was done.
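The escalation logic can be sketched like this in Python. The contact chain, the `send_email` callback, and its bounce signaling are hypothetical stand-ins; the original was a VB ASP script working from ARIN data:

```python
EMAILS_PER_INCIDENT = 5  # reports scheduled per fraud attempt

def escalate(incident_id: str, contact_chain: list[str], send_email):
    """Email the registered admin; on a bounce, walk up the chain.

    `contact_chain` is ordered most-specific first (site admin, then
    the netblock owner from ARIN, then the upstream provider).
    `send_email(to, subject)` returns False to signal a bounce.
    Returns the contact that accepted the report, or None.
    """
    for contact in contact_chain:
        delivered = all(
            send_email(contact, f"fraud report {incident_id} #{n + 1}")
            for n in range(EMAILS_PER_INCIDENT)
        )
        if delivered:
            return contact
    return None
```

The point of the design is persistence: a bounce doesn't end the report, it just moves it one level up the ownership chain.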

Out of all the reports, PayPal was the number one fastest to remove a scammer's online site.
I had a total of 7 banks that never responded, and that was after 7 months of daily email notifications. Once a phishing site was detected, another script would check for a line, logo, or link in the HTML page every morning; if the site was still up, the email was sent to the registered owner listed in ARIN. Shows you how much they cared about their customers. Regions was my top loser. For 6 months back in 2005 I banked with them, complaining about their SSL certificate not being connected to the home page login, and I pointed out the phishing site in South Korea that was using the unsecured home page. Funny, the guy/gal hosting the phishing site claimed to be a webmaster who would build you a business online.

I'll stop here. I hope you had fun; this is only my introduction. I've been planning a comeback with the XCtM project, just to show that those who scan blocks have no real technical skills, and that those who run honeypots or scripts are the true discovery professionals.

Really, scanning the IP of my aunt's home network while she is using her tablet to talk to her sister? Do you really think she's a botnet? She's 75+, uses an Android tablet with no special spammy apps, and connects to a secured email server. OK, she could be a threat, and I'm glad you scanned her, listed her IP, uploaded it to the University of Michigan, and listed her as not having open ports. But what is this Key Port 500? Ha ha ha...

Maybe she's part of that "Granny Bot Gang" working as an operator from a tablet!!!


I hope you follow me on Google+ and follow my scripting processes to help make IT admin jobs not so crazy at times. Imagine pulling a threat report covering 10 hours (midnight to) and seeing that out of 600+ alerts only 22 were NOT from one of the following scanning researcher companies.

What really gets me is the way they scan. It has a pattern, but why do they do it this way? Because they all came from a hacker's programming background, not a defensive background like your typical IT person.

If they needed to know whether port 17/udp was open, why scan that port 11 times during a 10-hour period? Do you think it might just magically wake up? But wait, it's UDP.

Then there's the University of Michigan and its project of collecting and publicly logging scan results. Not that it matters much, but if you're a collection site, why do you even scan out? 35 attempts to connect to an SSL port? Really? You scanned my block; actually, you do this so often I would think you question your own results. I do believe the trigger is when you get a report from one of your feeders listed below.

I do appreciate your concern with my network. Please take your time so I have more admin threat reports to review. I can see why IT admins ignore their reports: 600+ alerts, only 22 I needed to review, and 2 that are possible work leads because of a hacked server. What benefit does your system give you?

Quick list from the threat-detection logs on my little network.
3-20-2015, 10 hours:
Monitoring those that are scanning now, totaling 166 scans per day for 14 days.
You must have a big backbone to be able to scan every ARIN block in North America 166 times per day.
It's also time-consuming when I'm looking for ZeuS and the $3,000,000 reward. Yes, that's 3 million to find the creator of ZeuS. Never going to happen, but it's like the lotto: if you don't play, you can't win. Could the people below be looking for the same?

Ports Scanned: (sample)

  • 17/udp
  • 1900/udp
  • 1433/udp (ms-sql-m)
  • 5351/udp
  • 6379/tcp
  • 33434/udp
  • etc.

Scanners observed:

  • shodan.io
    • 66.240.192.128/26 (66.240.192.128 - 66.240.192.191)
    • 66.240.236.0/25 (66.240.236.0 - 66.240.236.127)
    • 198.20.64.0/18 (198.20.64.0 - 198.20.127.255)
    • 71.6.135.0/24 (71.6.135.0 - 71.6.135.255)
    • 71.6.165.192/26 (71.6.165.192 - 71.6.165.255)
    • 71.6.167.128/26 (71.6.167.128 - 71.6.167.191)
  • shadowserver.org
    • 74.82.0.0/18 (74.82.0.0 - 74.82.63.255)
    • 216.218.128.0/17 (216.218.128.0 - 216.218.255.255)
    • 184.104.0.0/15 (184.104.0.0 - 184.105.255.255)
  • Rapid7
    • 71.6.216.32/27 (71.6.128.0 - 71.6.255.255)
  • Internap.com
    • 66.150.0.0/15 (66.150.0.0 - 66.151.255.255)
  • University of Michigan
    • umich.edu 141.212.122.0/24 (141.212.122.0 - 141.212.122.255)
    • add 141.212.121.0/24
    • researchscan167.eecs.umich.edu
    • When you follow the link above, I want you to focus on how often they claim to scan.
    • Averages 144 connection scans per day to this server.

U of M statement:

"Why am I receiving connection attempts from this machine?

This machine is part of an Internet-wide network survey being conducted by computer scientists at the University of Michigan. The survey involves making TCP connection attempts to large subsets of the public IP address space and analyzing the responses. We select addresses to contact at random, and each address receives only a very small number of connection attempts.

We do not attempt to guess passwords or access data that is not publicly visible at the address. The goal of this research is to better understand the global use of Internet protocols, including HTTPS, SSH, SMTP, POP3, IMAP, CWMP, MODBUS, and UPnP."

Adding the Stoner School Berkeley to the list. 

What seemed to be just a simple and very odd pattern is now truly a work of buzz kill.
I really don't like blocking or adding specific monitoring to track schools, but UMich and Berkeley seem to think scanning is proper. If you supplied your findings to me, I might allow it. But when I scan your internal networks and publish my findings, you all get your panties in a bind. I don't do that anymore, but keep it up and others not so nice might find your network servers very interesting. I'm just saying not everyone follows the rules, and because you are a data collection center for network scans, your databases hold information that could hold keys to access. Think you're doing a good job? Yes, just browse UMich's database to find the next hacked network.

IP: 169.229.0.0/16 (169.229.3.90)

  • researchscan0.eecs.berkeley.edu

Registered Domain: berkeley.edu

I hold my network data live for 90 days, then I cycle it to my backups, which are held for 1 year. After that, it's just old news.

I'm really liking the new Policy by FQDN that WatchGuard added in the updated Fireware 11.10. It makes it so nice just creating a simple "Unwanted Guests" policy to block.
The sad thing is most of the "Unwanted Guests" trigger half a dozen filters before even reaching the rule-based access. Email me, Berkeley, if you want to practice, and I'll let you do all the practice hacks you would like, as long as I get to publish the results from your attempts.

The important note I want you to get into your head is the actual number of times you all scan. This is a daily scan, not the scan-and-leave behavior you describe. The above groups come every day, all day, hundreds of times, scanning the same ports over and over from different IPs. It just shows they are reaching in the dark. Never have I seen such a bad network scan setup. Wait, I have: from spammers!


Network scanning companies are popping up more often than I can count. It seems every researcher starts a scanner and states it's for discovery of bad things. It is very interesting that the researchers, for the most part, use the same methods as the attackers. So is there a difference?