
Installing mplayerplug-in for Firefox-1.0.4


I love the site www.big-boys.com, but its videos would not play in Linux, so I wanted to install a browser plugin that could play wmv files.
Here's how I went about it. First I installed mplayer using yum (I use FC4 with kernel 2.6.13.2).
yum install mplayer
Make sure you have a working internet connection when you run this command.

Then I went to the Linux Plugins site to get mplayerplug-in. It redirected me to the MPlayer SourceForge site.

Then I downloaded the mplayerplug-in source from the SourceForge download page for mplayerplug-in.
After that came the main struggle: compiling and getting it to run.

I first untarred the file and kicked off the build with these commands:

tar zxvf mplayerplug-in-3.11.tar.gz
cd mplayerplug-in
./configure --with-gecko-sdk=/usr/include/mozilla-1.7.8/
make

But this resulted in a bunch of errors.
I realized that an extra slash had ended up in the generated Makefile, so I opened it with vim and removed the trailing slash from the string /usr/include/mozilla-1.7.8/ in the Makefile.
I tried make again, but there were more errors, this time because an include file called prtypes.h was missing.

I also noticed that in the CFLAGS section of the Makefile there was a space between -I and /usr/include, so I deleted those spaces.

Then I opened the Makefile in vim again and added the string -I/usr/include/mozilla-1.7.8/nspr/ to the CFLAGS section so prtypes.h could be found.

I also added -L/usr/lib/firefox-1.0.4/ to the LFLAGS section because I was getting linker errors after that. But the struggle was not over.

I got a linker error:

/usr/bin/ld: cannot find -lxpcomglue
collect2: ld returned 1 exit status

I changed -lxpcomglue in the Makefile to -lxpcom.
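
For anyone who wants to script those edits instead of doing them by hand in vim, the same tweaks as sed one-liners would look roughly like this; this is a sketch only, since the exact variable names and paths depend on the Makefile that configure generates on your system:

# drop the trailing extra slash that configure put in
sed -i 's|mozilla-1.7.8//|mozilla-1.7.8/|g' Makefile
# remove the space between -I and the include path
sed -i 's|-I /usr/include|-I/usr/include|g' Makefile
# add the NSPR include dir so prtypes.h is found
sed -i 's|^CFLAGS[ ]*=|& -I/usr/include/mozilla-1.7.8/nspr/|' Makefile
# point the linker at the firefox libs
sed -i 's|^LFLAGS[ ]*=|& -L/usr/lib/firefox-1.0.4/|' Makefile
# link against xpcom instead of xpcomglue
sed -i 's|-lxpcomglue|-lxpcom|g' Makefile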
Finally, the compilation and the build were successful. Then came the final command:
cp mplayerplug-in*.so /usr/lib/firefox-1.0.4/plugins/
And now I have mplayerplug-in live and kicking!

-Rajat.


Cisco IPSec VPN Client Reason 442: Failed to Enable Virtual Adapter


If you use Windows 8 x64 and you see the following error when you launch the Cisco VPN Client:

Reason 442: Failed To Enable Virtual Adapter

Here's how to fix it.

  1. Open a command prompt in Administrator mode by right-clicking in the lower-left corner of the screen and choosing "Command Prompt (Administrator)". You will have to be logged in as an administrator.
  2. Launch the registry editor by typing "regedit.exe".
  3. Browse to "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CVirtA".
  4. In the DisplayName value, you will see something like @oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter. Edit it to say just Cisco Systems VPN Adapter.
  5. Try to connect again by launching the VPN Client. It should work!
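
If you'd rather make the change from that same elevated command prompt instead of clicking through regedit, a one-liner along these lines should do the same thing (check the existing value first; the oemN.inf prefix varies from machine to machine, and the path below assumes the standard CVirtA service name):

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CVirtA" /v DisplayName /t REG_SZ /d "Cisco Systems VPN Adapter" /f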


Application Security and the Incident Response Process


Application or software security is a field of infinite complexity. All of us know that where there is complexity, security issues lurk in the dark corners. This post is my rambling on how I believe Application Security and Incident Response need to come together to handle incidents. As usual, I make references to the cloud because that's what I do for my day job, but the argument applies just as well to on-premises companies that are not in the cloud.

First came the test….

When software fails before it is deployed, it is typically deemed to be a "safe" failure, i.e., someone who is looking can do something about it. In the cloud, what makes software development easy is that the cloud makes it easy for people to look. In AWS, for example, you have CloudWatch, and you also have things like liveness checks in load balancers that can "trigger" events, and you can create "operational handlers" for those trigger events via technologies like AWS Lambda and AWS EventBridge. This is not new to security people: for as long as I can remember, security operations and IT ops teams (from the pre-"cyber" days) have had runbooks for what to do when something did not work as expected.

Today, this happens a lot, as the complexity of software has increased. Software engineers need to ensure that they have "eyes" in their test environments even more than in production because, quite frankly, they can have as much instrumentation as they want in test; who cares if there is a minor performance impact there? It is still important to re-architect so that observability has no impact, but in test you can go to town as long as the ends justify the means. Keeping a close eye on test failures helps avoid failures in production; this is all captain-obvious advice.

So what does that translate to in the cloud? You can tag resources and have separate dashboards for those tags. Pay close attention to "thresholds": when to alert, and what to do when certain measures cross those thresholds. You can also get creative with anomaly detection tooling or "Machine Learning" (there … I said it!). The core point is that the more you observe your test environment, the fewer failures you see in production. And when you see fewer failures in production, you can do what I am going to talk about next.
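
To make the "thresholds plus handlers" idea concrete, here is one hedged sketch of the kind of alarm I mean for a test environment; the metric, load balancer name, and SNS topic ARN are made up for illustration, and the alarm action could just as easily be an EventBridge rule that invokes a Lambda handler:

# alert when the test environment's 5xx error count crosses a threshold
# (the load balancer dimension and SNS topic ARN below are hypothetical)
aws cloudwatch put-metric-alarm \
  --alarm-name "test-env-5xx-errors" \
  --namespace "AWS/ApplicationELB" \
  --metric-name "HTTPCode_Target_5XX_Count" \
  --dimensions Name=LoadBalancer,Value=app/test-env-alb/0123456789abcdef \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 10 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:test-env-alerts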

Then came the failure…

When software fails in production, assuming you have been diligent in your testing and have "exception handlers" in place, either in code or in operations, it is categorized as an incident. While operational incidents are just as important and may have as much impact as security incidents, I will restrict this discussion to security incidents (because that's what I do).

What sets organizations such as AWS and other mature software shops apart from the rest is their rigor and their approach of blameless post-mortems on security incidents. Everyone loves to say "there will be no finger pointing", but in reality that is really, really difficult to implement, and that is where a culture of fact-finding has a big impact on getting to the bottom of it. A security incident *never* happens without an error on someone's part; this is critical to recognize. For a security incident to happen, someone had to make a mistake, but it's important to realize that security is not an exact science, so mistakes are inevitable. What "security maturity" means is not making a knee-jerk reaction (the scorched-earth mentality) and instead recovering from the incident stronger.

How deeply a company digs into whether a security incident involved a requirements error, a design error, an implementation error, or a deployment error tells application development teams where they can improve their development process. It also tells their application security engineers where they can improve their pre-deployment checkers or integration tests, and where tooling should be built to detect deviations from security expectations (or "security invariants"). It also tells the development teams where they can improve their developer tooling so the misses don't recur. The operations teams, in turn, should determine where their checkers can do better and how response times can be cut to, say, half of what they were. Asking "why?" repeatedly while reasoning about the events of an incident can give unique insight into how to improve.

Then came the win!

Teams that collaborate across AppSec, AppDev, SecOps, and IT Ops to do such blameless post-mortems end up with better telemetry, better tooling and process, better detection when something goes wrong ("alerts"), and a healthier environment of positive feedback. No one is blameless: recognize it, improve from it, and help each other toward better organizational security.


Shmoocon in DC


I’ll be attending the Shmoocon in Washington, DC from Feb 6th-8th. Hope to see all you h4X0rs out there!


SMBProxy Compilation issues


So the other day I was on a pen test and got hold of the hashes. Since my laptop got fried, I needed a fresh build of SMBProxy. There were a few issues with the compilation, though: I got a few errors in the file crypto.c.
Moreover, SMBProxy uses the crypto library libdes, written by Eric Young, available here.
Here is the guide to compiling SMBProxy that worked for me.

First, compile and install libdes

  1. Download libdes 4.01
  2. tar zxvf libdes-4.01.tar.gz
  3. cd libdes
  4. make gcc
  5. sudo make install

You'll now find the file libdes.a in /usr/local/lib.

Second, compile and install SMBProxy. Here there were a couple of compilation errors that I had to deal with.
Here's the diff output for my changes to crypto.c:

trance@z0n3:~/Desktop$ diff smbproxy/crypto.c smbproxy-orig-src/crypto.c
40,41c40
< #include
< #define MD4_SIGNATURE_SIZE 16
---
>
46c45
<> static u_char Get7Bits(UCHAR *input, int startBit) {
58c57
<> static void MakeKey(UCHAR *key, UCHAR *des_key) {
74c73
<> void DesEncrypt(UCHAR *clear, UCHAR *key, UCHAR *cipher) {
85c84
<> void mkResponse(UCHAR **ntlmhash, UCHAR hash[MD4_SIGNATURE_SIZE], UCHAR* challenge) {
88c87
<> UCHAR ntlm_response[24];

Having done this, there were still a few issues with the make command.
The Makefile can be generated by running the following command:

trance@z0n3:~/Desktop/smbproxy-orig-src$ ./configure

Here’s the diff output of the Makefile:

trance@z0n3:~/Desktop$ diff smbproxy/Makefile smbproxy-orig-src/Makefile
10,11c10,11
< smbbf_include =" -Iinclude">
< libs ="">
---
> SMBBF_INCLUDE = -Iinclude
> LIBS = des
31c31
< $(LIBDES) $(LIBS)
---
> $(LIBDES)

To compile SMBProxy successfully, the following libraries are required: openssl, openssl-dev, and libdes.

apt-get install openssl openssl-dev


Machine Learning Security in the age of Supply Chain Attacks


As can be seen from the recent "xz attack" discovery, nation states have realized that this is likely the "best" vector for impacting large-scale systems in big organizations. With cloud computing providers being the "source of computing" for most large corporations today, we should anticipate that a larger portion of attacks will fall into this category. Also, just like "sleeper cells" in traditional espionage, such "sleepers" may exist in numerous OSS projects. Does that mean we should stop using open source? Hell no. It just means we need to be careful. Can we detect these attacks? It's tough, but yes, we can detect them with good old-school telemetry and observability.

But that's not what this blog post is about. The most interesting bit of the xz attack, for me, was that libraries that are harder to debug and decode make much juicier targets. Why does that matter? The super-popular ML libraries like PyTorch and TensorFlow are quite hard to compile from scratch yourself, and such libraries can carry interesting attack vectors that allow "nice" pickle compromises. I say "nice" because the family of insecure deserialization weaknesses has existed in CWE since 2006! It's older than many other issues and will continue to exist.
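
To illustrate why pickle-based model formats are such a juicy target, here is a tiny self-contained demo; the file name and payload are made up, and this is only a sketch of the general insecure-deserialization problem, not of any specific library's loader:

# build a malicious "model" file whose pickle payload runs a command on load
python -c '
import os, pickle

class Evil:
    def __reduce__(self):
        # whatever this returns gets called at unpickling time
        return (os.system, ("echo pwned: code ran just by loading the model",))

with open("model.pkl", "wb") as f:
    pickle.dump(Evil(), f)
'

# the "victim" merely loads the model, and the payload executes
python -c 'import pickle; pickle.load(open("model.pkl", "rb"))'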

My only hope is that the maintainers of core ML projects such as PyTorch, TensorFlow, Keras and others start showing a slightly higher level of paranoia and invest in build reproducibility, so that supply chain attacks on such harder-to-debug libraries can be avoided.


Security Considerations in Blue-Green Deployments


tl;dr: Blue-green deployment is a strong strategy for applications with critical uptime requirements, but if a deployment fixes critical security issues, be sure that the definition of "deployment complete" is the decommissioning of the "blue" environment and not just the successful deployment of "green".

Organizations have gotten used to following Continuous Integration/Continuous Deployment (CI/CD) for software releases. The use of cloud solutions such as the AWS Code* utilities, Azure DevOps, or Google Cloud Source Repositories enables enterprises to quickly and securely accomplish CI/CD for their software release cycles. Software upgrades go through the same CI/CD tooling, and the dev teams need to choose how to perform the upgrades.

There are a few different ways development teams upgrade their applications:

  1. Full cut-over: same infrastructure, new codebase deployed directly and migrated in one go.
  2. Rolling deployment: same or new infrastructure, gradually upgrading all instances.
  3. Immutable deployment: brand new infrastructure and code for each deployment, migrated in one go.
  4. Blue-green deployment: same infrastructure, new stack deployed simultaneously in production, gradually phasing out old instances upon successful tests against a slice of traffic.

The blue-green deployment strategy has, therefore, become quite popular for modern software deployment.

What is a Blue-Green deployment?
When you release an application via the blue-green deployment strategy, you gradually shift traffic as tests succeed and your observability (CloudWatch alarms, etc.) does not indicate any problems. You can do that via containers (ECS/EKS/AKS/GKE) or AWS Lambda/Azure Functions/Google Cloud Functions, and traffic shifting can be done with the help of DNS solutions (Route 53/Azure DNS/Google Cloud DNS) or load balancing solutions (AWS Elastic Load Balancer/Azure Load Balancer/Cloud Load Balancing). Simplistically: take your current (blue) deployment, create a full parallel stack (green), and use either DNS or load balancers to carve out a slice of traffic and test the "green" stack. This is all happening in production, by the way. Once everything looks good, direct all the traffic to green and decommission the "blue". This helps maintain operational resilience, which is why it is a popular deployment strategy. AWS has a solid whitepaper that I recommend reviewing if you want to dive in from a solution architecture standpoint.
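
For illustration, here is roughly what the DNS-based traffic slicing can look like with Route 53 weighted records; the hosted zone ID, record name, endpoints, and weights below are all made up:

# send ~10% of traffic to the green stack, keep ~90% on blue (hypothetical names/IDs)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
        "SetIdentifier": "blue", "Weight": 90,
        "ResourceRecords": [{"Value": "blue-alb-123.us-east-1.elb.amazonaws.com"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
        "SetIdentifier": "green", "Weight": 10,
        "ResourceRecords": [{"Value": "green-alb-456.us-east-1.elb.amazonaws.com"}]}}
    ]
  }'

As tests pass, you keep adjusting the weights until green takes 100% of the traffic, and then you decommission blue.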

Security Considerations

Some critical security issues (e.g., remote code execution via Log4j, remote code execution via Struts, etc.) demand immediate fixes because of their severity. If your blue-green deployments take days and your tests run over a very long period (say, days), then any security fix you make is only fully in effect after a successful "green" deployment and a "blue" decommission. And if, during that window or before it, an attacker managed to get a foothold in the impacted "blue" environment, then decommissioning "blue" becomes critical before you can claim the issue is fully remediated.

Typically, incident responders and security operations professionals breathe a sigh of relief when the fix is deployed. Software engineering teams, however, typically consider the fix deployed when "green" is handling all of the traffic, which is unrelated to the decommissioning of the "blue" environment. In this case, incident responders need to remember that the risk is not truly mitigated at deployment time; it is mitigated after the cut-over to green is complete and blue has been decommissioned. There is a subtle yet important difference here, and it really comes down to shared vocabulary. As long as security operations and software development teams share the same definition of what "deployment complete" means, there are no misunderstandings.
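
To make that shared definition concrete, here is a hedged sketch of the kind of check an incident responder could run before closing out the issue; the stack name is made up, and the same idea applies to however you track your blue environment (CloudFormation stack, auto scaling group, target group, etc.):

# only call the security fix fully remediated once the blue stack is actually gone
if aws cloudformation describe-stacks --stack-name myapp-blue >/dev/null 2>&1; then
  echo "blue environment still exists: the fix is NOT fully remediated yet"
else
  echo "blue environment decommissioned: cut-over is complete and the fix is fully in effect"
fi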