
Ancient “AI” in the Age of Advanced Adversaries


A lot has been said about the use of AI in cyber security, and for good reason: folks in information security (as we called the field for decades before “cyber security” took over) have experienced the skills gap first-hand. It’s only natural that already stretched InfoSec teams look to AI as the “saviour” that will close the skills / personnel gap. Then again, there is also a lot being said about companies selling products as merely “AI enabled”.

But realistically speaking, are there things traditional (“non-AI”) organizations can do to achieve what many of these “AI enabled” products actually do? I wouldn’t have written this blog post, now would I, if the answer was anything but yes! 🙂

Let’s look at them:

  1. Anomaly Detection – this is age-old! Almost all security tools that “alert” us on something are essentially doing this. How well? That’s debatable. The kind of anomaly detection I am talking about is simple (but different). For example, abnormal login attempts on your Internet-facing systems are anomaly detection. So are abnormal DNS queries. Your CloudTrail logs (in AWS) showing an inordinate spend on EC2 instances are an anomaly. An abnormally small amount of time between a git commit and the production deployment of that commit is also odd! Your SaaS or Okta bill being unexpectedly high, or your APIs getting throttled (without any known changes), are all anomalies. The response time for these depends on whether or not you have been able to automate detecting them. The day you automate these “known” anomalies, you are already doing what many of these “AI enabled” products do today (after, of course, charging you an arm and a leg!)
  2. UBA / User Behavior Analytics – a lot of products do this, but the most simplistic version is restricting logins: preventing logons from areas where you do not expect your users to originate. This is “reduction” of attack surface. Is that foolproof? Hell no! Why? Generally speaking, adversaries do not attack systems from their home computers; they operate through trampoline servers (sometimes layers of them), sending the attack from “attacker controlled bots”. But it reduces your area of concentration. You can then use UBA more effectively, since you do know, at a macro level, where your users are expected to come from. To improve your “AI-ness” you can then add capabilities that work not at a macro level but at a per-user level: where is this specific user expected to originate from? If a login looks abnormal (or anomalous), ask the user to step up with additional authentication. There are numerous vendors in this space, and there are open source libraries that can help you do this on the cheap. Again, something very expensive “AI enabled” products do too.
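To make the anomaly-detection point concrete: automating a “known” anomaly can be as small as a shell pipeline on a schedule. A minimal sketch, assuming a syslog-style SSH auth log (the log path and threshold below are assumptions, not universal defaults):

```shell
# Flag source IPs with an abnormal number of failed SSH logins.
# LOG path and THRESHOLD are assumptions; tune both for your environment.
LOG=${LOG:-/var/log/auth.log}
THRESHOLD=${THRESHOLD:-20}

failed_login_anomalies() {
  grep 'Failed password' "$LOG" 2>/dev/null \
    | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1) }' \
    | sort | uniq -c \
    | awk -v t="$THRESHOLD" '$1 > t { print "ANOMALY: " $1 " failed logins from " $2 }'
}

failed_login_anomalies
```

Run it from cron and pipe the output into whatever alerting channel you already have; the same pattern applies to CloudTrail spend or commit-to-deploy timing, with the grep/awk stages swapped out.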

I am sure there are many other things an organization can start doing. Obviously, at the end of the day, every initiative takes resources, and by no means are any of these simple, but YMMV depending on the size of your datasets, user base, and organization.


Using awk with bash variables


I wanted to use a bash variable inside awk’s print statement, within a bash script’s for loop.
Here’s an easy way to do it – wrap the bash variable as "'"$var"'" inside the awk program: close the single quote, expand the variable inside double quotes, then reopen the single quote.

Here’s a sample script that takes a list of IPs, does a DNS lookup on each, and generates a CSV:


for ip in $(cat ips.txt)
do
  host "$ip" | grep -v NXDOMAIN | sed 's/\.$//' | awk '{print "'"$ip"'"","$NF}'
done
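A less error-prone alternative to the quoting trick is awk’s -v option, which passes the shell variable in explicitly. A sketch, where the printf line stands in for real `host` output:

```shell
# Same CSV output, but passing the shell variable in with awk -v
# instead of closing and reopening the single quotes.
ip="192.0.2.10"
printf '%s\n' "10.2.0.192.in-addr.arpa domain name pointer host.example.com" \
  | awk -v ip="$ip" '{ print ip "," $NF }'
# → 192.0.2.10,host.example.com
```

The -v form also avoids surprises when the variable contains spaces or quote characters.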

Hakin9 Subscription


I have been a subscriber to this magazine’s electronic edition for the past year. However, they’ve only sent me one copy of the magazine to date. The yearly subscription cost $79 or so, which makes it an extremely expensive magazine… one issue for $79… that’s ridiculous!
All my efforts to contact monika.drygulska@hakin9.org or marta.ogonek@hakin9.org have been futile! I would discourage anyone from paying for this.
Has anyone else experienced this kind of sloppy service with Hakin9?
Update 06/23/2009:
Hakin9 finally contacted me, after I emailed them (again) based on Chris John Riley’s suggestion. They provided me with the missing issues. Better late than never Hakin9! Thanks!


Cell SDK on PS3 with Yellow Dog Linux 5.0


People tend to think that gone are the days when “RPM Hell” used to exist: we have yum, aptitude and what not! If you install Linux on a PS3, I’d like to bring you back to reality, especially if, like me, you have Yellow Dog 5.0 installed on a first-gen PS3.
What is interesting is that all these package managers rely on repositories defined in /etc/yum.repos.d/*.
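For reference, a repository entry is just a small .repo file. A sketch with a hypothetical name and baseurl (not a real Yellow Dog mirror):

```ini
# /etc/yum.repos.d/example.repo — hypothetical entry
[example-repo]
name=Example repository for Yellow Dog Linux
baseurl=http://mirror.example.com/yellowdog/5.0/ppc/
enabled=1
gpgcheck=0
```

With no working entries in that directory, yum has nowhere to resolve dependencies from, which is exactly the trap described below.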
If you do not have the right repositories, you can kiss goodbye to installing the Cell Broadband Engine SDK provided by IBM. This SDK has spu-gcc and spu-g++, the right set of compilers if you want to use the one master processor (Power Processor Element – PPE) and the other six SPEs (Synergistic Processor Elements)… think of these as slaves. You might wonder where the seventh SPE of the Cell processor went: you cannot access it, because the PS3 uses it internally for virtualization.
So I got a Yellow Dog 5.0 ISO image from here, and followed the instructions for installing it from here. I did this almost a year ago! Yes… I actually let it sit dead for a while! Then I installed gcc and compiled John the Ripper. To my utter disappointment, there was no performance benefit!
Then Marc Bevand told me at Toorcon X that I needed spu-gcc to compile JtR on the PS3 to get the benefit. So I got the Cell SDK ISO from here. I then mounted the ISO.
mount -o loop cellsdk11.iso /mnt/disk
cd /mnt/disk
cd software
./cellsdk install
I got a bunch of errors. It wanted me to install freeglut-devel and tk-8.4.*.
Thus began my journey of 10,000 steps to get the dependencies resolved. I burnt my fingers, keyboard, brains, etc.… and although everyone in the US and the world had found hope, things were not looking bright for me! Until I bumped into this fantastic repository here. Trust me, it took me about 8 hours of incessant installing and compiling (almost 120-odd different files) and scores of Google searches to land on it. I installed glut, libx11, tk, tcl, libtcl, glut-devel, libstdc++, libstdc-so7, and many other packages that I cannot even recall now to get the Cell SDK to work! And even then, I still couldn’t get ./cellsdk install to work! Being so close to success after about 8 hours of effort just seemed evil. Then I realized that all the packages it still wanted were related to the PPC64 simulator (libx11.ppc64, libtcl.ppc64, etc.)… a quick look into the README told me that I could skip the simulator using the --nosim directive.
Finally,
./cellsdk install --nosim
worked!!!!!
A small step for mankind but a giant step for me!


Reverse tunnels


SSH is an excellent piece of software that can help you do a lot of things, such as giving you encrypted shells. But what makes SSH incredibly flexible is tunnels.

A typical SSH tunnel runs from the client to the SSH server: it seamlessly forwards a local port on the client to the server.

client ----> ssh_conn ----> ssh_server
client --> tunneled_port --> ssh_server
ssh -L 10000:localhost:10000 username@ssh_server

This connection creates a tunneled port on client:10000, i.e., anything sent to this port is transparently forwarded to ssh_server on port 10000. The “localhost” here is confusing; think of it as “what is localhost from ssh_server’s point of view?” That would be ssh_server itself, right?
If you do a netstat on the client, you will see a listener on port 10000/tcp.

Now comes the more interesting reverse tunnel. A reverse tunnel is different in that the client initiates a connection that says to the SSH server: “Hey, I’m initiating this connection, and it will allow you to automatically access a port on *me* after *I* connect.” (Confused!!?!)

client ---> ssh_connection ---> server  ---+
                                           |
client <-- tunneled_port  <----- server ---+
ssh -NR 10000:localhost:10000 user@ssh_server

Here the meaning of localhost is slightly different, though. The “localhost” means what is localhost for the client (and not for the server, as in the previous case)! So what you’re saying is, “Hey SSH server, I’m initiating this connection to you, but if you connect to your own port 10000, you get a tunnel to *my* port 10000.” If you do a netstat on the server, you see a listener on port 10000. Isn’t it great that you can make the server listen on a port that acts as a tunnel back to you… so anyone on the server can seamlessly connect to you, even though technically you were the client!


Cisco VPN Client on BackTrack3


I wanted to install Cisco VPN client on BackTrack3. You can get the Cisco VPN client source using the following command:
wget ftp://ftp.cs.cornell.edu/pub/rvr/upload/vpnclient-linux-4.8.00.0490-k9.tar.gz
tar zxvf vpnclient-linux-4.8.00.0490-k9.tar.gz
cd vpnclient/
wget http://tuxx-home.at/projects/cisco-vpnclient/vpnclient-linux-2.6.22.diff
patch < vpnclient-linux-2.6.22.diff
./vpn_install

I got this information from the following blog.
I ran into an error whereby the kernel sources were not found for the VPN client to install. I then got the BackTrack3 kernel sources.
cd /lib/
wget http://www.offensive-security.com/kernel.lzm
mkdir test
lzm2dir kernel.lzm test

Now go into the vpnclient directory and execute the following:
./vpn_install

Accept the defaults (except in my case I selected “No” for automatically starting the VPN client). When it asks for the kernel sources, point it to:
/lib/test/usr/src/linux-2.6.21.5

Then the VPN client should compile without any issues. You then just need to place your Cisco VPN client profile (.pcf) in the /etc/opt/cisco-vpnclient/Profiles directory. You will need to start the VPN client service first using:

/etc/init.d/vpnclient_init start

Once the service is started just connect using:

vpnclient connect mypcffile user test password <whatever>

Please note that the full name of the profile file in the above case is mypcffile.pcf, but I’ve deliberately excluded the .pcf extension.
This should work.


Java & Oracle


I was looking at some Oracle databases recently, and I found that the Oracle Auditing Tool (OAT) is an awesome toolset; you just need to download classes12.zip (the Oracle JDBC drivers for Java) into the same directory. I downloaded classes12.zip from the Oracle site and placed it in the same folder as OAT. On Linux, the .sh files then need some editing: just replace classes111.zip with classes12.zip and off you go.
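That edit to the .sh files can be scripted. A sketch, assuming the OAT launch scripts live in the current directory:

```shell
# Point the OAT launch scripts at the newer JDBC driver,
# keeping a .bak copy of each edited script.
for f in ./*.sh; do
  if [ -f "$f" ]; then
    sed -i.bak 's/classes111\.zip/classes12.zip/g' "$f"
  fi
done
```

The -i.bak form works with both GNU and BSD sed, so the same line runs on most Linux boxes and on macOS.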
Patrik Karlsson has done an awesome job providing these tools. You can do the whole gamut of operations with this toolset, from guessing the Oracle SID to checking for default passwords using opwg.sh.
sudo ./opwg.sh -s 192.168.1.101
The above command will give you the Oracle SID for the remote database.
Once you have the sid and the credentials you can run queries using oquery.sh
sudo ./oquery.sh -s 192.168.1.101 -u DBSNMP -p DBSNMP -d db_sid_found -q "select 1 from dual"
The source of OAT is also provided here: http://www.cqure.net/tools/oat-source-1.3.1.zip. I also found an interesting Java decompiler (before I noticed that the sources existed on the cqure.net website) called jd-gui. It works wonderfully on Linux.