Application Security and the Incident Response Process


Application or software security is a field of infinite complexity. All of us know that where there is complexity, security issues lurk in the dark corners. This post contains my ramblings on how I believe application security and incident response need to come together to handle incidents. As usual, I make references to the cloud because that’s what I do for my day job, but the argument is remarkably similar for on-premises companies.

First came the test…

When software fails before it is deployed, it is typically deemed a “safe” failure, i.e., someone who is looking can do something about it. What makes software development easy in the cloud is that the cloud makes it easy for people to look. In AWS, for example, you have CloudWatch, and you have things like health checks in load balancers that can “trigger” events, and you can create “operational handlers” for those trigger events via technologies like AWS Lambda and Amazon EventBridge. This is not new to security people. For as long as I can remember, security operations and IT ops teams (from the pre-“cyber” days) have had runbooks on what to do when something did not work as expected. Today, this happens a lot as the complexity of software has increased. Software engineers need “eyes” in their test environments even more than in production because, quite frankly, they can have as much instrumentation as they want in test – who cares if there is a minor performance impact? It’s important to re-architect so that observability itself has no impact, but you can still go to town as long as the ends justify the means. Keeping a close eye on test failures helps avoid failures in production – this is all Captain Obvious advice. So what does that translate to in the cloud? You can tag resources and have separate dashboards for those tags. Pay close attention to “thresholds” – when to alert, and what to do when certain measures hit those thresholds. You can also get creative with anomaly detection tooling or “machine learning” (there … I said it!). The core point being: the more you observe your test environment, the fewer failures you see in production. And when you see fewer failures in production, you can do what I am going to talk about next.
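To make the “thresholds” point concrete, here is a minimal sketch in plain Python of the kind of check an operational handler might run. The metric name and the flap-damping rule are hypothetical illustrations, not any specific CloudWatch API:

```python
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str           # hypothetical metric name, e.g. "test.p99_latency_ms"
    limit: float          # value above which we consider alerting
    breaches_needed: int  # consecutive breaches required, to avoid flapping

def should_alert(threshold, recent_values):
    """Alert only if the last `breaches_needed` observations all exceed the limit."""
    if len(recent_values) < threshold.breaches_needed:
        return False
    window = recent_values[-threshold.breaches_needed:]
    return all(v > threshold.limit for v in window)
```

In CloudWatch terms this is roughly what an alarm’s “datapoints to alarm” setting does for you; the point is that the threshold and the damping rule are explicit, reviewable decisions rather than defaults.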

Then came the failure…

When software fails in production – assuming you have been diligent in your testing and have exercised your “exception handlers”, either in code or in operations – it is categorized as an incident. While operational incidents are just as important and may have as much impact as security incidents, I will restrict this discussion to security incidents (because that’s what I do). What sets apart organizations such as AWS and other mature software shops from the rest is their rigor and their approach of blameless post-mortems on security incidents. Everyone loves to say “there will be no finger pointing”, but in reality that is really, really difficult to implement, and that is where a culture of fact-finding has a big impact on getting to the bottom of it. A security incident *never* happens without an error on someone’s part – this is critical to recognize. For a security incident to happen, someone had to make a mistake; but it’s important to realize that security is not an exact science, so mistakes are inevitable. What “security maturity” means is that you avoid the knee-jerk reaction (aka the scorched-earth mentality) and recover stronger. How deeply a company dives into whether the incident involved a requirements error, a design error, an implementation error, or a deployment error tells application development teams where they can improve their development process. It also tells their application security engineers where to improve their pre-deployment checkers and integration tests, and where tooling should be developed to detect deviations from security expectations (or “security invariants”). It tells development teams where to improve their developer tooling so the misses don’t recur. And the operations teams should determine where their checkers can do better and how response times can be cut to, say, half of what they took.
Asking “why?” repeatedly while reasoning about the events of an incident can give unique insight into how to improve.

Then came the win!

Organizations where the AppSec, AppDev, SecOps, and IT Ops teams collaborate on such blameless post-mortems end up with better telemetry, better tooling and processes, better detection when something goes wrong (“alerts”), and a healthier environment of positive feedback. No one is blameless – recognize it, improve from it, and help each other toward better organizational security.


A brief history of SSRF


Server-Side Request Forgery is a security issue in applications where an attacker is able to make a server send some type of request (these days mostly HTTP/S requests) that the server should not send. This issue is a classic abuse-of-trust vulnerability – the server tends to sit in a “trusted” environment (e.g., a DMZ, your cloud VPC, etc.) while the users of the application sit outside the trust boundary (e.g., mobile devices, cafes, homes, corporate environments, API clients inside and outside the cloud).

A brief history
In this blog post, though, I won’t be talking about all the fancy new things that have had SSRF issues – you can likely find a few hundred of those anyway! I am going to give a brief history of this issue and what happened before we gave it a “name” – SSRF. The earliest references to the name “SSRF” appear to come from a talk at Black Hat USA 2012, and the Wayback Machine tells me that the CWE-918 page was authored sometime around 2013. If you look closely at the CWE-918 page, though, you will find old CVEs dating back to 2002 and 2004. There was a Shmoocon talk about the issue in 2008 too, but the term SSRF was not established until 2012.

The Issue

In 2010, I was working on a penetration test for a financial services firm of a popular load balancer that offered a GTM (Global Traffic Manager) solution. The product let users log in and obtain restricted execution environments from which only certain applications were exposed – useful when you want remote workers or untrusted entities to reach only those applications. The issue was in a POST request post-login and, IIRC, even pre-login (though my details on this are fuzzy). This was circa 2010, and to my knowledge the issue was never issued a CVE or associated with a knowledge base article – though I may not know 100%. The guidance from the vendor was simple: update the software and move on. The issue was this – an authenticated or unauthenticated user could send an HTTP POST request to a page with a base64-encoded parameter that included a hostname, which would trigger a DNS request on the back end of the GTM site. The time it took for the response to come back indicated whether the hostname was legitimate or not. So, sitting outside on the Internet, I used a popular dictionary to enumerate which hostnames were legitimate and which were not, mapping the hosts on the internal network.
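The timing side-channel can be sketched as follows. This is an illustration, not the original tooling – the probe callable, the threshold, and the hostnames are all hypothetical stand-ins:

```python
import time

def classify_host(probe, hostname, slow_threshold=2.0):
    """Time a single probe of the GTM endpoint.

    probe(hostname) stands in for sending the POST with the base64-encoded
    hostname. A quick response suggests the backend DNS lookup succeeded;
    a response delayed past slow_threshold (seconds) suggests a DNS timeout
    for a nonexistent internal host.
    """
    start = time.monotonic()
    probe(hostname)
    elapsed = time.monotonic() - start
    return "resolves" if elapsed < slow_threshold else "does-not-resolve"

def enumerate_hosts(probe, wordlist, slow_threshold=2.0):
    """Run a dictionary of candidate names and keep the ones that resolve."""
    return [h for h in wordlist
            if classify_host(probe, h, slow_threshold) == "resolves"]
```

The fix the vendor eventually shipped – constant-time responses – is aimed at exactly this measurement: if success and failure take the same time, the classifier above learns nothing.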

The backstory
What the vendor of the GTM software did not know was how critical this application was to the customer’s business. They seemed to be dragging their feet on updates, and meanwhile the customer – a financial institution with lots at stake – could not go live. The pressure mounted on the IT staff to fix the issue, and the vendor, while responsive, was unable to give a firm date quickly – remember, this was 13-14 years ago, before bug bounties, and responsible disclosure was still quite clunky! The customer was also advising me to push the software vendor so we could discuss. Thankfully, on the vendor side there was a solid security person who immediately understood the issue and its impact and advised the software teams to do what was right. They moved the functionality behind authentication, and they also added tokens, rate limits, and constant-time responses to fix the issue.

Fast forward
Today, obviously, things are a lot better. I wrote this blog post so the older me can look back and point to this in a meaningful way, without losing the old experiences among the new.


Filing Tax Assessment Appeal in Jersey City


In this post, I will cover a how-to for filing a resident’s tax assessment appeal. It’s quite simple. This is not meant to cover all special situations, but it should cover simple ones – for example, if you live in a condo in Jersey City. For other situations, review the handbook listed below.
Most importantly – this appeal needs to be in the county’s hands by Dec 1, 2022, otherwise it will be rejected. The tax officer said it is therefore really important to visit the office and hand it over in person. You could also send it via certified mail.

Important Links

  1. Where to get the appeal form for mid-year added/omitted assessment https://www.state.nj.us/treasury/taxation/pdf/other_forms/lpt/adomap.pdf
  2. N/A for mid-year: But if you are filing during the usual time (January to April) for annual tax changes, use https://www.hcnj.us/wp-content/uploads/2021/12/a-1-petition-of-appeal.pdf
  3. Comparables are obtained from: https://www.zillow.com/b/20-2nd-st-jersey-city-nj-5XkRmF/
  4. Appeals handbook: https://secure.njappealonline.com/prodappeals/help/Hudson_InstructionsHandbook.pdf
  5. If you are filing an online appeal you can do so at http://www.njappealonline.com/prodappealonline/Home.aspx however, this site only works at certain times of the year. For example, in Nov 2022 the site is not accepting Hudson County appeals for some reason.

How to fill the form:

  • The Common Level Ratio is the ratio of assessed value to true (market) value for the municipality (maximum allowed value is 100%). For Jersey City in 2022, the average ratio is 0.8737, the lower limit is 0.7426, and the maximum is 1.0. If your unit’s assessed value, compared against its market value, falls within that range, you do not qualify for an appeal.
    This is where you get the Common Level Ratio values from: https://www.state.nj.us/treasury/taxation/pdf/lpt/chap123/2022ch123.pdf . E.g., let’s say your unit was assessed at $1mn and comparable sale prices show that its market value is $950,000. That market value is not within the range $1mn/(1.0) to $1mn/0.7426, which means you qualify for an appeal. Your taxable value would then be $950,000 × 0.8737 (the average Common Level Ratio) = $830,015. At a 2.118% tax rate, this comes to about $17,580.
  • This is what the fields look like:
    Block / Lot / Qualifier – you get these from your tax bill; they can also be obtained from https://tax1.co.monmouth.nj.us/cgi-bin/prc6.cgi?district=0906&ms_user=monm by searching the site by address
  • Next, go to Zillow (https://www.zillow.com/b/20-2nd-st-jersey-city-nj-5XkRmF/) and find the comparable sales for your unit for the pre-tax year (if you are appealing the 2022 assessment, use 2021 sales). Go to https://tax1.co.monmouth.nj.us/cgi-bin/prc6.cgi?district=0906&ms_user=monm, find the sale dates, and add that information to the form.
  • The prorated fields in the form can be left blank because the county knows those values (I did not fill those out; the county clerk did that for me)
  • Sign and date the form
  • You need to send one copy each to the following addresses via post or in-person (if the online system does not work).
    Hudson County Board of Taxation, Hudson County Plaza, 257 Cornelison Ave Room 303, Jersey City NJ 07302. You also need to send one copy to the city: Office of the City Assessor, 364 Martin Luther King Drive, Jersey City NJ 07305. Phone: 201-547-5131.
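The Chapter 123 arithmetic above can be sketched as follows. The 0.8737 average ratio, the 0.7426/1.0 limits, and the 2.118% rate are the 2022 Jersey City figures used in this post; check the current year’s table before reusing them:

```python
# Chapter 123 "common level range" test, 2022 Jersey City figures.
AVG_RATIO, LOWER, UPPER = 0.8737, 0.7426, 1.0
TAX_RATE = 0.02118

def appeal_outcome(assessed_value, market_value):
    """Return the new taxable value if an appeal qualifies, else None."""
    ratio = assessed_value / market_value   # assessment vs. true (market) value
    if LOWER <= ratio <= UPPER:
        return None                         # within the common level range: no appeal
    return market_value * AVG_RATIO         # taxable value after a successful appeal

new_value = appeal_outcome(1_000_000, 950_000)  # the example above
# 950,000 * 0.8737 = 830,015 taxable; 830,015 * 2.118% ≈ $17,580 in tax
```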

Update 12/29/2022:

I did go to the Hudson County court and pursued my appeal in person. The city representatives were quite polite and the process was quite smooth – you just show up in court and either accept or reject the city’s proposal. Once the judgment is reached, they mail it to you, and you can appeal it within 45 days. After that, the decision is binding for 2 years.


Pcaprub installation on Win 10 x64


If you encounter the following error, the issue is that pcaprub uses a hardcoded path for WinPcap.  I downloaded WinPcap v4.1.3, downloaded the WinPcap developer’s pack, and put it in C:\WpdPack.  Additionally, since I use an x64 machine, I had to copy the files C:\WpdPack\Lib\x64\*.lib into C:\WpdPack\Lib, and then the compilation worked.

You need pcaprub for things like msf (Metasploit Framework).
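For reference, the workaround amounts to the following (Windows cmd; paths assume the default developer’s pack layout and may differ on your machine – the full failing build log is below):

```bat
:: Unpack the WinPcap 4.1.3 developer's pack to the path pcaprub's
:: extconf expects (C:\WpdPack), then make the x64 import libraries
:: visible where the link step looks for them:
copy C:\WpdPack\Lib\x64\*.lib C:\WpdPack\Lib\
gem install pcaprub
```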


C:\dev\kit>gem install pcaprub
Temporarily enhancing PATH for MSYS/MINGW...
Building native extensions. This could take a while...
ERROR: Error installing pcaprub:
ERROR: Failed to build gem native extension.

current directory: C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/pcaprub-0.13.0/ext/pcaprub_c
C:/Ruby24-x64/bin/ruby.exe -r ./siteconf20181112-2628-1wqgu6f.rb extconf.rb

[*] Running checks for pcaprub_c code...
platform is x64-mingw32
checking for -lws2_32... yes
checking for -liphlpapi... yes
checking for windows.h... yes
checking for winsock2.h... yes
checking for iphlpapi.h... yes
checking for ruby/thread.h... yes
checking for rb_thread_blocking_region()... no
checking for rb_thread_call_without_gvl()... yes
checking for pcap_open_live() in -lwpcap... no
checking for pcap_setnonblock() in -lwpcap... no
creating Makefile

current directory: C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/pcaprub-0.13.0/ext/pcaprub_c
make "DESTDIR=" clean

current directory: C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/pcaprub-0.13.0/ext/pcaprub_c
make "DESTDIR="
generating pcaprub_c-x64-mingw32.def
compiling pcaprub.c
In file included from C:/WpdPack/include/pcap/pcap.h:41,
from C:/WpdPack/include/pcap.h:45,
from pcaprub.c:11:
C:/WpdPack/include/pcap-stdinc.h:64: warning: "snprintf" redefined
#define snprintf _snprintf

In file included from C:/Ruby24-x64/include/ruby-2.4.0/ruby/ruby.h:2429,
from C:/Ruby24-x64/include/ruby-2.4.0/ruby.h:33,
from pcaprub.c:1:
C:/Ruby24-x64/include/ruby-2.4.0/ruby/subst.h:6: note: this is the location of the previous definition
#define snprintf ruby_snprintf

In file included from C:/WpdPack/include/pcap/pcap.h:41,
from C:/WpdPack/include/pcap.h:45,
from pcaprub.c:11:
C:/WpdPack/include/pcap-stdinc.h:65: warning: "vsnprintf" redefined
#define vsnprintf _vsnprintf

In file included from C:/Ruby24-x64/include/ruby-2.4.0/ruby/ruby.h:2429,
from C:/Ruby24-x64/include/ruby-2.4.0/ruby.h:33,
from pcaprub.c:1:
C:/Ruby24-x64/include/ruby-2.4.0/ruby/subst.h:7: note: this is the location of the previous definition
#define vsnprintf ruby_vsnprintf

pcaprub.c: In function 'rbpcap_each_data':
pcaprub.c:992:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
fno = (int)pcap_getevent(rbp->pd);
pcaprub.c:992:7: warning: assignment to 'HANDLE' {aka 'void *'} from 'int' makes pointer from integer without a cast [-W
fno = (int)pcap_getevent(rbp->pd);
pcaprub.c: In function 'rbpcap_each_packet':
pcaprub.c:1034:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
fno = (int)pcap_getevent(rbp->pd);
pcaprub.c:1034:7: warning: assignment to 'HANDLE' {aka 'void *'} from 'int' makes pointer from integer without a cast [-
fno = (int)pcap_getevent(rbp->pd);
pcaprub.c: In function 'rbpcap_thread_wait_handle':
pcaprub.c:1274:7: warning: passing argument 1 of 'rb_thread_call_without_gvl' from incompatible pointer type [-Wincompat
In file included from pcaprub.c:4:
C:/Ruby24-x64/include/ruby-2.4.0/ruby/thread.h:28:7: note: expected 'void * (*)(void *)' but argument is of type 'VALUE
(*)(void *)' {aka 'long long unsigned int (*)(void *)'}
void *rb_thread_call_without_gvl(void *(*func)(void *), void *data1,
linking shared-object pcaprub_c.so
pcaprub.o:pcaprub.c:(.text+0x1a0): undefined reference to `pcap_lib_version'
pcaprub.o:pcaprub.c:(.text+0x1e0): undefined reference to `pcap_findalldevs'
pcaprub.o:pcaprub.c:(.text+0x2b8): undefined reference to `pcap_freealldevs'
pcaprub.o:pcaprub.c:(.text+0x32f): undefined reference to `pcap_lookupnet'
pcaprub.o:pcaprub.c:(.text+0x43d): undefined reference to `pcap_close'
pcaprub.o:pcaprub.c:(.text+0x45a): undefined reference to `pcap_dump_close'
pcaprub.o:pcaprub.c:(.text+0x67c): undefined reference to `pcap_set_timeout'
pcaprub.o:pcaprub.c:(.text+0x6ce): undefined reference to `pcap_list_datalinks'
pcaprub.o:pcaprub.c:(.text+0x707): undefined reference to `pcap_datalink_val_to_name'
pcaprub.o:pcaprub.c:(.text+0x76d): undefined reference to `pcap_free_datalinks'
pcaprub.o:pcaprub.c:(.text+0x782): undefined reference to `pcap_geterr'
pcaprub.o:pcaprub.c:(.text+0x828): undefined reference to `pcap_datalink_name_to_val'
pcaprub.o:pcaprub.c:(.text+0x895): undefined reference to `pcap_set_datalink'
pcaprub.o:pcaprub.c:(.text+0x8b3): undefined reference to `pcap_geterr'
pcaprub.o:pcaprub.c:(.text+0x93f): undefined reference to `pcap_set_snaplen'
pcaprub.o:pcaprub.c:(.text+0x9d4): undefined reference to `pcap_set_promisc'
pcaprub.o:pcaprub.c:(.text+0xae1): undefined reference to `pcap_lookupnet'
pcaprub.o:pcaprub.c:(.text+0xb57): undefined reference to `pcap_compile'
pcaprub.o:pcaprub.c:(.text+0xb6d): undefined reference to `pcap_geterr'
pcaprub.o:pcaprub.c:(.text+0xb9f): undefined reference to `pcap_setfilter'
pcaprub.o:pcaprub.c:(.text+0xbaf): undefined reference to `pcap_freecode'
pcaprub.o:pcaprub.c:(.text+0xbc1): undefined reference to `pcap_geterr'
pcaprub.o:pcaprub.c:(.text+0xbe9): undefined reference to `pcap_freecode'
pcaprub.o:pcaprub.c:(.text+0xc62): undefined reference to `pcap_compile'
pcaprub.o:pcaprub.c:(.text+0xc75): undefined reference to `pcap_geterr'
pcaprub.o:pcaprub.c:(.text+0xc9d): undefined reference to `pcap_freecode'
pcaprub.o:pcaprub.c:(.text+0xccf): undefined reference to `pcap_activate'
pcaprub.o:pcaprub.c:(.text+0xd33): undefined reference to `pcap_close'
pcaprub.o:pcaprub.c:(.text+0xe0b): undefined reference to `pcap_close'
pcaprub.o:pcaprub.c:(.text+0xe43): undefined reference to `pcap_create'
pcaprub.o:pcaprub.c:(.text+0x109e): undefined reference to `pcap_close'
pcaprub.o:pcaprub.c:(.text+0x110d): undefined reference to `pcap_open_live'
pcaprub.o:pcaprub.c:(.text+0x129f): undefined reference to `pcap_open_offline'
pcaprub.o:pcaprub.c:(.text+0x1419): undefined reference to `pcap_open_dead'
pcaprub.o:pcaprub.c:(.text+0x1532): undefined reference to `pcap_dump_open'
pcaprub.o:pcaprub.c:(.text+0x15d9): undefined reference to `pcap_dump_close'
pcaprub.o:pcaprub.c:(.text+0x171e): undefined reference to `pcap_dump'
pcaprub.o:pcaprub.c:(.text+0x17e7): undefined reference to `pcap_sendpacket'
pcaprub.o:pcaprub.c:(.text+0x17fa): undefined reference to `pcap_geterr'
pcaprub.o:pcaprub.c:(.text+0x18ea): undefined reference to `pcap_setnonblock'
pcaprub.o:pcaprub.c:(.text+0x1912): undefined reference to `pcap_dispatch'
pcaprub.o:pcaprub.c:(.text+0x19fd): undefined reference to `pcap_setnonblock'
pcaprub.o:pcaprub.c:(.text+0x1a25): undefined reference to `pcap_dispatch'
pcaprub.o:pcaprub.c:(.text+0x1b35): undefined reference to `pcap_getevent'
pcaprub.o:pcaprub.c:(.text+0x1be3): undefined reference to `pcap_getevent'
pcaprub.o:pcaprub.c:(.text+0x1c91): undefined reference to `pcap_datalink'
pcaprub.o:pcaprub.c:(.text+0x1cdc): undefined reference to `pcap_major_version'
pcaprub.o:pcaprub.c:(.text+0x1d27): undefined reference to `pcap_minor_version'
pcaprub.o:pcaprub.c:(.text+0x1d72): undefined reference to `pcap_snapshot'
pcaprub.o:pcaprub.c:(.text+0x1dca): undefined reference to `pcap_stats'
collect2.exe: error: ld returned 1 exit status
make: *** [Makefile:259: pcaprub_c.so] Error 1

make failed, exit code 2

Gem files will remain installed in C:/Ruby24-x64/lib/ruby/gems/2.4.0/gems/pcaprub-0.13.0 for inspection.
Results logged to C:/Ruby24-x64/lib/ruby/gems/2.4.0/extensions/x64-mingw32/2.4.0/pcaprub-0.13.0/gem_make.out



PCI SSC Forbids SSL and “Early TLS”


On April 15, 2015, the PCI SSC released PCI DSS v3.1.  The main cause for concern for most merchants and other entities (called “entities” hereafter) that store, process, or transmit cardholder data is the prohibition of SSL and “early TLS”.  The PCI SSC also released a supplement to assist entities in mitigating the issue.  The supplement references the NIST guideline SP 800-52 rev. 1 for determining which ciphers are good and which are not.

The key question is: what does “early TLS” mean?  Does it mean TLSv1.0 and TLSv1.1, or does it mean only TLSv1.0?  Are entities supposed to disable all ciphers except TLSv1.2 ones?

Answer is (in consultant speak) “it depends”. 🙂

TLSv1.1 does have ciphers that are not ideal – for example, its CBC-mode ciphers.  There is potential for attacks on them, given that CBC constructions have fallen multiple times in the past couple of years (BEAST against TLSv1.0, POODLE against SSLv3 and some TLS implementations).

Google Chrome flags CBC-based ciphers as obsolete even when they are negotiated under TLSv1.1 – it essentially makes “obsolete cryptography” a function of whether TLSv1.2-based ciphers are in use.


Firefox allows you to disable TLSv1.0 via configuration: type “about:config” in the address bar and set security.tls.version.min, where 0 means SSLv3, 1 means TLSv1.0, 2 means TLSv1.1, and 3 means TLSv1.2.  (With security.tls.version.min = 1, the lowest allowed version is TLSv1.0.)


Let’s start with what is definitely ok for PCI:


 TLS_RSA_WITH_AES_128_CBC_SHA256           AES128-SHA256
 TLS_RSA_WITH_AES_256_CBC_SHA256           AES256-SHA256
 TLS_RSA_WITH_AES_128_GCM_SHA256           AES128-GCM-SHA256
 TLS_RSA_WITH_AES_256_GCM_SHA384           AES256-GCM-SHA384

Note that being TLSv1.2-only does not by itself make a cipher acceptable.  The following TLSv1.2 suites must still be excluded – NULL-SHA256 provides no encryption at all, and the anonymous DH (ADH) suites provide no authentication, leaving them open to man-in-the-middle attacks:

 TLS_RSA_WITH_NULL_SHA256                  NULL-SHA256
 TLS_DH_anon_WITH_AES_128_CBC_SHA256       ADH-AES128-SHA256
 TLS_DH_anon_WITH_AES_256_CBC_SHA256       ADH-AES256-SHA256
 TLS_DH_anon_WITH_AES_128_GCM_SHA256       ADH-AES128-GCM-SHA256
 TLS_DH_anon_WITH_AES_256_GCM_SHA384       ADH-AES256-GCM-SHA384

Now let’s see what may potentially be good from a TLSv1.1 perspective (from NIST SP 800-52 rev. 1):

TLS_RSA_WITH_AES_128_CBC_SHA            AES128-SHA

Here’s a problem, though: if you’re using OpenSSL, how do you ensure that the browser is not negotiating the vulnerable TLSv1.0 ciphers? The only real answer seems to be providing a server-side cipher order for negotiation and hoping the client doesn’t cheat.  Most likely, the browser will negotiate a better cipher when one exists on both the server and the client, and you’d avert the negotiation of a bad cipher.

According to some experts, anything that uses CBC is inherently broken.  But disabling TLSv1.0 may make the server inaccessible to various older Android devices.  Also, if you’re using older Java Development Kits (JDK 7 and below), remember that the default ciphers may not hit the spot for PCI.
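As an illustrative sketch of the server-side configuration (here using Python’s ssl module on a modern interpreter; the same idea applies to Apache or nginx protocol and cipher directives), pinning the floor at TLSv1.2 and excluding NULL/anonymous suites looks like this:

```python
import ssl

# Build a server-side TLS context that refuses SSL and "early TLS".
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # rejects SSLv3, TLSv1.0, TLSv1.1

# Restrict TLSv1.2 ciphers to ECDHE AEAD suites and explicitly exclude
# anonymous (!aNULL) and unencrypted (!eNULL) suites (OpenSSL syntax):
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!eNULL")
```

The context would then be passed to your server (e.g., `wrap_socket` or an HTTPS framework); clients that only speak SSLv3/TLSv1.0/TLSv1.1 simply fail the handshake.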

There’s an excellent site to help you configure each type of server so you can become PCI compliant, and Ivan Ristic’s SSL Labs is an excellent site for testing the SSL/TLS configuration of your Internet-facing servers.

In conclusion, configure browsers to allow TLSv1.1 at minimum and configure servers to use TLSv1.2 to be PCI DSS compliant.  The road to TLSv1.1 compatibility and PCI DSS is filled with potholes and pitfalls, so proceed at your own risk.