
Kerberos/Samba/AD account lockouts


I kept getting the following errors in the Event Viewer on my AD domain, and accounts kept locking out:
Pre-authentication failed:
User Name:      user1
User ID:                DOMAIN\user1
Service Name:   krbtgt/DOMAIN.COM
Pre-Authentication Type:        0x0
Failure Code:   0x12
Client Address: 192.168.246.134

For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.

In the Directory Service logs I see the following entry:
[snip]
Active Directory could not update the following object with changes
received from the domain controller at the following network address
because Active Directory was busy processing information.

Object:
CN=User 1,OU=Testing Services Team,OU=TESTER V,DC=domain,DC=com
Network address:
e5523049-53f1-4274-858b-c68971599acf._msdcs.domain.com

This operation will be tried again later.

For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.

Turns out this happens if you have samba/winbind/AD-type infrastructure. If someone has processes running on a Unix host (even if they use sudo) that authenticate via Kerberos, and they happen to change their password while those processes are running, the accounts lock out: the Kerberos ticket-granting ticket (krbtgt) is no longer current, so each object access is treated as a failed login attempt. If you have account lockout implemented in your AD domain security policy, this locks the accounts out.
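
If you run into this, a quick way to confirm and clear a stale ticket on the Unix side is sketched below (assuming the MIT Kerberos client tools are installed; user1 and DOMAIN.COM are just the placeholders from the log above):

$ klist                      # list cached tickets and their expiry; a TGT obtained before the password change is the likely culprit
$ kdestroy                   # throw away the stale credential cache
$ kinit user1@DOMAIN.COM     # obtain a fresh TGT with the new password
$ klist                      # confirm the new ticket is current

Long-running processes that cached the old credentials still need to be restarted (or re-run kinit) so they stop replaying the stale ticket.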


PCI SSC Forbids SSL and “Early TLS”


On April 15, 2015 the PCI SSC released PCI DSS v3.1.  The main cause for concern for most merchants and other entities (called “entities” hereafter) that store, process or transmit cardholder data is the prohibition on using SSL and “Early TLS”.  The PCI SSC also released a supplement to assist entities in mitigating the issue.  The supplement references the NIST guideline SP 800-52 rev1 for determining which ciphers are good and which are not.

The key question is: what does “Early TLS” mean?  Does it mean TLSv1.0 and TLSv1.1, or does it mean only TLSv1.0?  Are entities supposed to disable all ciphers except those that are TLSv1.2?

Answer is (in consultant speak) “it depends”. 🙂

TLSv1.1 does, at least theoretically, have ciphers that are not ideal.  For example, its CBC-mode ciphers may be open to attack, given that CBC constructions have fallen multiple times in the past couple of years (BEAST, POODLE).

Google Chrome flags the use of CBC-based ciphers as obsolete (even when they are negotiated under TLSv1.1); Chrome essentially makes “obsolete cryptography” a function of whether TLSv1.2-based ciphers are in use.

[Screenshot: Google Chrome reporting “obsolete cryptography” for a CBC-based cipher suite]

Firefox allows TLSv1.0 to be disabled via configuration: type “about:config” in the address bar and adjust security.tls.version.min, where 0 means SSLv3, 1 means TLSv1.0, 2 means TLSv1.1 and 3 means TLSv1.2.  The following screenshot shows the configuration (here the lowest allowed version is TLSv1.0).

[Screenshot: Firefox about:config showing security.tls.version.min set to 1, i.e. TLSv1.0 as the minimum]
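
The same setting can also be pushed through a user.js file in the Firefox profile directory; a minimal sketch that makes TLSv1.1 the floor would be:

// Make TLSv1.1 the minimum protocol version (0=SSLv3, 1=TLSv1.0, 2=TLSv1.1, 3=TLSv1.2)
user_pref("security.tls.version.min", 2);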

Let’s start with what is definitely ok for PCI.  The OpenSSL documentation lists the TLSv1.2 cipher suites (note that even within this list, the NULL and anonymous-DH suites must still be excluded, since they provide no encryption or no authentication respectively):

https://www.openssl.org/docs/apps/ciphers.html#TLS-v1.2-cipher-suites

 TLS_RSA_WITH_NULL_SHA256                  NULL-SHA256
 TLS_RSA_WITH_AES_128_CBC_SHA256           AES128-SHA256
 TLS_RSA_WITH_AES_256_CBC_SHA256           AES256-SHA256
 TLS_RSA_WITH_AES_128_GCM_SHA256           AES128-GCM-SHA256
 TLS_RSA_WITH_AES_256_GCM_SHA384           AES256-GCM-SHA384

 TLS_DH_RSA_WITH_AES_128_CBC_SHA256        DH-RSA-AES128-SHA256
 TLS_DH_RSA_WITH_AES_256_CBC_SHA256        DH-RSA-AES256-SHA256
 TLS_DH_RSA_WITH_AES_128_GCM_SHA256        DH-RSA-AES128-GCM-SHA256
 TLS_DH_RSA_WITH_AES_256_GCM_SHA384        DH-RSA-AES256-GCM-SHA384

 TLS_DH_DSS_WITH_AES_128_CBC_SHA256        DH-DSS-AES128-SHA256
 TLS_DH_DSS_WITH_AES_256_CBC_SHA256        DH-DSS-AES256-SHA256
 TLS_DH_DSS_WITH_AES_128_GCM_SHA256        DH-DSS-AES128-GCM-SHA256
 TLS_DH_DSS_WITH_AES_256_GCM_SHA384        DH-DSS-AES256-GCM-SHA384

 TLS_DHE_RSA_WITH_AES_128_CBC_SHA256       DHE-RSA-AES128-SHA256
 TLS_DHE_RSA_WITH_AES_256_CBC_SHA256       DHE-RSA-AES256-SHA256
 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256       DHE-RSA-AES128-GCM-SHA256
 TLS_DHE_RSA_WITH_AES_256_GCM_SHA384       DHE-RSA-AES256-GCM-SHA384

 TLS_DHE_DSS_WITH_AES_128_CBC_SHA256       DHE-DSS-AES128-SHA256
 TLS_DHE_DSS_WITH_AES_256_CBC_SHA256       DHE-DSS-AES256-SHA256
 TLS_DHE_DSS_WITH_AES_128_GCM_SHA256       DHE-DSS-AES128-GCM-SHA256
 TLS_DHE_DSS_WITH_AES_256_GCM_SHA384       DHE-DSS-AES256-GCM-SHA384

 TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256      ECDH-RSA-AES128-SHA256
 TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384      ECDH-RSA-AES256-SHA384
 TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256      ECDH-RSA-AES128-GCM-SHA256
 TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384      ECDH-RSA-AES256-GCM-SHA384

 TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256    ECDH-ECDSA-AES128-SHA256
 TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384    ECDH-ECDSA-AES256-SHA384
 TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256    ECDH-ECDSA-AES128-GCM-SHA256
 TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384    ECDH-ECDSA-AES256-GCM-SHA384

 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256     ECDHE-RSA-AES128-SHA256
 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384     ECDHE-RSA-AES256-SHA384
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256     ECDHE-RSA-AES128-GCM-SHA256
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384     ECDHE-RSA-AES256-GCM-SHA384

 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256   ECDHE-ECDSA-AES128-SHA256
 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384   ECDHE-ECDSA-AES256-SHA384
 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256   ECDHE-ECDSA-AES128-GCM-SHA256
 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384   ECDHE-ECDSA-AES256-GCM-SHA384

 TLS_DH_anon_WITH_AES_128_CBC_SHA256       ADH-AES128-SHA256
 TLS_DH_anon_WITH_AES_256_CBC_SHA256       ADH-AES256-SHA256
 TLS_DH_anon_WITH_AES_128_GCM_SHA256       ADH-AES128-GCM-SHA256
 TLS_DH_anon_WITH_AES_256_GCM_SHA384       ADH-AES256-GCM-SHA384
 TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256 ECDHE-ECDSA-CAMELLIA128-SHA256
 TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384 ECDHE-ECDSA-CAMELLIA256-SHA384
 TLS_ECDH_ECDSA_WITH_CAMELLIA_128_CBC_SHA256  ECDH-ECDSA-CAMELLIA128-SHA256
 TLS_ECDH_ECDSA_WITH_CAMELLIA_256_CBC_SHA384  ECDH-ECDSA-CAMELLIA256-SHA384
 TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256   ECDHE-RSA-CAMELLIA128-SHA256
 TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384   ECDHE-RSA-CAMELLIA256-SHA384
 TLS_ECDH_RSA_WITH_CAMELLIA_128_CBC_SHA256    ECDH-RSA-CAMELLIA128-SHA256
 TLS_ECDH_RSA_WITH_CAMELLIA_256_CBC_SHA384    ECDH-RSA-CAMELLIA256-SHA384

Now let’s see what may potentially be good from a TLSv1.1 perspective (from NIST SP 800-52 rev1):

TLS_RSA_WITH_3DES_EDE_CBC_SHA           DES-CBC3-SHA
TLS_RSA_WITH_AES_128_CBC_SHA            AES128-SHA

Here’s a problem though per OpenSSL man page:
[Screenshot: excerpt from the OpenSSL ciphers man page]

If you’re using OpenSSL, how do you ensure that the browser does not negotiate the vulnerable TLSv1.0 ciphers? The only real answer seems to be to enforce a server-side cipher order for negotiation and hope the client doesn’t cheat.  Most likely, the browser will negotiate a better cipher when one exists on both the server and the client, and you avert the possibility of a bad cipher being negotiated.
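
For example, on an Apache httpd server with mod_ssl, a configuration along the following lines disables SSLv2/SSLv3/TLSv1.0 and enforces the server’s cipher preference (a sketch, not a complete or authoritative PCI policy; tune the cipher string to your own client base):

SSLProtocol         all -SSLv2 -SSLv3 -TLSv1
SSLHonorCipherOrder on
SSLCipherSuite      ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES256:DHE+AES256:ECDHE+AES128:DHE+AES128:!aNULL:!eNULL:!MD5:!RC4

Equivalent knobs exist for nginx (ssl_protocols, ssl_ciphers, ssl_prefer_server_ciphers) and most other TLS terminators.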

According to experts, anything that uses CBC is inherently broken.  But disabling TLSv1.0 may make the server inaccessible to various older Android devices.  Also, if you’re using older Java Development Kits (JDK7 and below), do remember that the default ciphers may not hit the spot for PCI.

There’s an excellent site to help you configure each type of server so you can become PCI compliant, and Ivan Ristic’s SSL Labs is an excellent place to test the SSL/TLS configuration of your Internet-facing servers.
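
For servers that aren’t Internet-facing, a quick manual check with the openssl command-line client works too (example.com is a placeholder for your own host; the -tls1_2 option requires OpenSSL 1.0.1 or later):

# If this handshake fails, TLSv1.0 is disabled on the server (good)
$ openssl s_client -connect example.com:443 -tls1 < /dev/null

# This one should succeed and show the negotiated TLSv1.2 cipher
$ openssl s_client -connect example.com:443 -tls1_2 < /dev/null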

In conclusion, configure browsers to allow TLSv1.1 at a minimum and configure servers to use TLSv1.2 to be PCI DSS compliant.  The road to TLSv1.1 compatibility and PCI DSS is filled with potholes and pitfalls, so proceed at your own risk.


Hakin9 Subscription


I have been a subscriber to this magazine’s electronic edition for the past year. However, they’ve only sent me one copy of the magazine to date. The yearly subscription cost was around $79, which makes it an extremely expensive magazine… one issue for $79… that’s ridiculous!
All my efforts to contact monika.drygulska@hakin9.org or marta.ogonek@hakin9.org have been futile! I would discourage anyone from paying for this.
Has anyone else experienced this kind of sloppy service with Hakin9?
Update 06/23/2009:
Hakin9 finally contacted me, after I emailed them (again) based on Chris John Riley’s suggestion. They provided me with the missing issues. Better late than never Hakin9! Thanks!


Android Source Downloading Errors


Over the weekend I decided to download the Android source tree on my computer (BackTrack 4 R2). BT4 R2 is no longer supported by the Offsec/BackTrack guys (muts, purehate, etc.).
To start off with I tried to follow the instructions listed here.
The first error I got was with Git: I was using a version earlier than 1.5.4. So I downloaded Git version 1.7.4, compiled it and installed it. Then I got this error:
fatal: unable to find remote handler for 'https'
Too bad. I tried recompiling and whatnot, and I did have OpenSSL… so what was the problem?
The problem was that the libcurl development headers (libcurl-devel) were not installed. So I downloaded the library and re-ran configure, make clean, make and make install to reinstall Git. After that, the error was gone.
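
Roughly, the fix looked like the following sketch (the package name is the Debian/Ubuntu-style one, since BT4 is Ubuntu-based, and the Git source directory name will match whatever version you downloaded):

# install the curl development headers that the https remote helper needs
$ apt-get install libcurl4-openssl-dev

# rebuild git against them
$ cd git-1.7.4
$ make clean
$ ./configure --prefix=/usr/local
$ make
$ make install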

On the step where I am supposed to execute the following:
$ repo init -u https://android.googlesource.com/platform/manifest
I got the following error:

Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.5/threading.py", line 486, in __bootstrap_inner
self.run()
File "/usr/lib/python2.5/threading.py", line 446, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/user/bin/.repo/repo/subcmds/sync.py", line 182, in _FetchHelper
success = project.Sync_NetworkHalf(quiet=opt.quiet)
File "/home/user/bin/.repo/repo/project.py", line 926, in Sync_NetworkHalf
if alt_dir is None and self._ApplyCloneBundle(initial=is_new, quiet=quiet):
File "/home/user/bin/.repo/repo/project.py", line 1444, in _ApplyCloneBundle
exist_dst = self._FetchBundle(bundle_url, bundle_tmp, bundle_dst, quiet)
File "/home/user/bin/.repo/repo/project.py", line 1514, in _FetchBundle
size = r.headers['content-length']
File "/usr/lib/python2.5/rfc822.py", line 384, in __getitem__
return self.dict[name.lower()]
KeyError: 'content-length'
[The same traceback repeated for Thread-2 through Thread-5.]
error: Exited sync due to fetch errors

It seems this error is caused by the repository not sending the Content-Length HTTP header. Upgrading to Python 2.7.x resolves it.
Now, if you are compiling Python from source, it doesn’t come with SSL support by default. To add SSL support, edit the Python-2.7/Modules/Setup file and uncomment the following lines:
_socket socketmodule.c
# Socket module helper for SSL support; you must comment out the other
# socket line above, and possibly edit the SSL variable:
SSL=/usr
_ssl _ssl.c \
-DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
-L$(SSL)/lib -lssl -lcrypto

Of course, then you can do the standard steps to compile and install Python:
$ ./configure
$ make
$ sudo make install
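
A quick way to confirm the freshly built interpreter actually picked up SSL support (before re-running repo) is:

$ python2.7 -c "import ssl; print ssl.OPENSSL_VERSION"

If that prints an OpenSSL version string instead of an ImportError, the _ssl module was built correctly.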

repo sync would work very well thereafter.


Setting up a Windows 7 Kernel Development Environment


If you are writing Ring0 (privileged-mode) code, say device drivers for Windows, you are probably better served with separate development and deployment machines. That way you can write poor code and still not lose hair, because it’s the target machine that blue screens rather than your development machine! 🙂

My setup used a Windows 8.1 development machine and a Hyper-V based Windows 7 machine for debugging. You will need to execute some tasks on the “guest” (the Hyper-V based Windows 7 virtual machine) and other tasks on the development machine.  I followed many of the steps from the MSDN blog post here.

On the guest machine you want to set up a named pipe and configure the debug settings. To do that, this is what you need to do:

Set up a virtual COM port in the Hyper-V settings for the VM and point it at a named pipe; this port will be used to carry the kernel debugging traffic from the host to the guest.
[Screenshot: Hyper-V VM settings showing the COM 1 port mapped to a named pipe]

 

Now make sure that the target guest machine is configured to “listen” for those commands.  Inside the guest VM, start an elevated command shell (cmd.exe -> Run as Administrator).

[Screenshot: elevated command prompt running inside the guest VM]

 

Run the bcdedit commands so that the machine can be debugged.  Right after the second command, reboot the virtual machine.

[Screenshot: bcdedit debug settings being configured on the guest]
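
The commands are typically along these lines (the COM port and baud rate shown are the common defaults and may differ in your setup):

C:\> bcdedit /dbgsettings serial debugport:1 baudrate:115200
C:\> bcdedit /debug on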

 

With the VM now configured to listen for debug commands via the COM1 port, and debug mode enabled in the boot settings, start WinDbg x64 on the host (using “Run as administrator”; you need administrative privileges for communication over the serial port).  In your kernel debugger on the host/development machine (I’m assuming both roles are on the same physical hardware here), click File -> Kernel Debug and you should see the following screen in the WinDbg window:

[Screenshot: WinDbg File -> Kernel Debug dialog, COM tab]
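
On the COM tab, the settings map to the named pipe created in the Hyper-V settings; the equivalent command-line invocation would be roughly the following (the pipe name \\.\pipe\debugpipe is a placeholder for whatever you chose earlier):

windbg.exe -k com:pipe,port=\\.\pipe\debugpipe,resets=0,reconnect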

Hit Ctrl+Break or Debug -> Break and you will see something like this:

[Screenshot: WinDbg command window after breaking into the guest]

Just remember that when you break in the debugger, the guest in Hyper-V will appear “unresponsive”.  It is not really unresponsive, it’s just being debugged.  To make sure you have the symbols set up (they are quite useful for debugging), run the following command:

!process 0 0

If you see something like the following screen:

[Screenshot: WinDbg output showing the NT symbol error]

then the symbols are not configured.  Symbols help the debugger give more meaningful information for the commands you run in it.  The error looks like this:

**** NT ACTIVE PROCESS DUMP ****
NT symbols are incorrect, please fix symbols

To fix this, use the following commands:

kd> .sympath SRV*c:\symcache*http://msdl.microsoft.com/download/symbols
kd> .symfix
kd> .symfix c:\symcache
kd> !sym noisy
kd> .reload /o

Then again try the command: !process 0 0 and see if you get a good response.  A good response looks like the following:

[Screenshot: !process 0 0 output with symbols resolving correctly]

With this you should be good to go! Happy debugging and writing cool Ring0 code.

 

 


Cisco VPN Client on BackTrack3


I wanted to install the Cisco VPN client on BackTrack 3. You can get the Cisco VPN client source and patch it using the following commands:
wget ftp://ftp.cs.cornell.edu/pub/rvr/upload/vpnclient-linux-4.8.00.0490-k9.tar.gz
tar zxvf vpnclient-linux-4.8.00.0490-k9.tar.gz
cd vpnclient/
wget http://tuxx-home.at/projects/cisco-vpnclient/vpnclient-linux-2.6.22.diff
patch < vpnclient-linux-2.6.22.diff
./vpn_install

I got this information from the following blog.
I ran into an error where the kernel sources were not found for the VPN client to install against. I then got the BackTrack 3 kernel sources:
cd /lib/
wget http://www.offensive-security.com/kernel.lzm
mkdir test
lzm2dir kernel.lzm test

Now go into the vpnclient directory and execute the following:
./vpn_install

Accept the defaults (except in my case I selected “No” for automatically starting the VPN client). When it asks for the kernel sources, point it to:
/lib/test/usr/src/linux-2.6.21.5

Then the VPN client should compile without any issues. You just need to place your Cisco VPN client profile (.pcf) in the /etc/opt/cisco-vpnclient/Profiles directory. You will need to start the VPN client service first using:

/etc/init.d/vpnclient_init start

Once the service is started just connect using:

vpnclient connect mypcffile user test password <whatever>

Please note that the full name of the Profile file in the above case is mypcffile.pcf but I’ve deliberately excluded the .pcf extension.
This should work.
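
For reference, a minimal profile looks roughly like this (the hostname, group and user values are made up; a real profile exported from the Windows client usually carries an obfuscated enc_GroupPwd rather than a cleartext GroupPwd):

[main]
Description=Example connection
Host=vpn.example.com
AuthType=1
GroupName=examplegroup
GroupPwd=examplegrouppassword
Username=test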


The historical evolution of Cross-Site Request Forgery


Having been in application security for more than two decades, and now officially completing my 18th year of being meaningfully employed in that space, there is just a lot of crud that I have gathered in my brain. Most of it is history of how things came to be. That stuff is likely not interesting to most people, but I find it intriguing how some seemingly minor decisions by one software vendor can have a massive impact on the web application security industry.

Oh the dreaded IE…
Internet Explorer 4, 5 and 6, which go back to the Windows XP days (or even earlier, I can’t recall), had a particular behavior – the cookie jar was not shared – i.e., if you opened a new window to a site, you would have to log in again unless you used the “Ctrl+N” key to open the new window from an existing session. Each new process had its own cookie jar. For the uninitiated, the “cookie jar” is the browser’s internal storage of cookies. Cookies are random-looking strings that act as a “trust token” a web server places in the browser. Since HTTP is a connectionless protocol, this cookie is what preserves “state”, and it is exactly what authorization decisions in an HTTP context are typically based on. These cookies are stored in the cookie jar, where each cookie gets stored with its name, value, domain and path (today there are a few other attributes, but that wasn’t the case back in 2004-2005). The browser gets all these parameters from the Set-Cookie HTTP response header. Microsoft, the vendor of Internet Explorer, made a decision that each new window of IE should have its own set of stored cookies that were not shared. Mozilla Firefox and Google Chrome always had a shared cookie jar, if I recall correctly.

Along came a Cross-Site Request Forgery (CSRF)…

Jesse Burns from iSEC Partners (an NYC-based security consultancy that was later acquired by NCC Group) wrote what I think was the seminal paper on Cross-Site Request Forgery back then. They called it “XSRF” because “XSS” was already in parlance at the time. Thereafter, there were presentations about the same by Microsoft in 2006. The whole attack was simple. The victim has a browser tab open in which they are logged into a site that has issued that session a cookie value. Due to the browser same-origin policy (a concept that Netscape designed in 1995), that cookie would be re-sent by the browser as a Cookie HTTP request header whenever an HTTP request was sent to the same domain, protocol (“scheme”) and port. There were a few idiosyncrasies of IE (which made it infamous back then), such as: if the port number did not match, IE did not complain and would still think that access was allowed per the Same-Origin Policy (SOP). What does that mean? http://example.com and http://example.com:81 would be treated as the same origin! Weird, right? It wasn’t the case with other browsers. This was also documented in Michal Zalewski’s book The Tangled Web in 2011. Where am I going with this? While IE did some weird things, it did one good thing – isolate cookie jars. So if you opened a new window in which the attacker ran a payload that sent a request to the site which had handed you a cookie, the new IE window would have no interesting cookies to share with that site – inadvertently protecting the user from being a victim of a CSRF issue. Yes, the hated IE protected users from being victims of CSRF! Who would have thought? That’s how weird 2005 was 🙂
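
To make the mechanics concrete, a classic CSRF payload hosted on the attacker’s page looked roughly like this (the URL and parameter names are made up for illustration):

<form action="https://bank.example.com/transfer" method="POST" id="csrf">
  <input type="hidden" name="to" value="attacker">
  <input type="hidden" name="amount" value="1000">
</form>
<script>document.getElementById("csrf").submit();</script>

In the shared-cookie-jar world, the browser attaches the victim’s bank.example.com session cookie to this cross-site POST, so the request rides on the victim’s authenticated session even though the victim never intended to send it.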

Fast forward…

All browser vendors today have the concept of a shared cookie jar, because who doesn’t like opening new tabs of their favorite cloud consoles without having to re-login, right? So what did we, the people, do? We came up with another attribute that can be added to the Set-Cookie HTTP response header – the SameSite attribute – which restricts the cookie from being sent unless the request originates from a page on the same site as the cookie issuer.
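
As an illustration, a cookie issued with SameSite might look like this (the cookie name and value are made up):

Set-Cookie: session=abc123; Path=/; Secure; HttpOnly; SameSite=Lax

With SameSite=Lax the browser withholds the cookie on cross-site subresource requests and cross-site POSTs (the CSRF-relevant cases) while still sending it on top-level navigations; SameSite=Strict withholds it on all cross-site requests.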

So there you have it… the history of SameSite and how one of the most hated browsers of the day (IE) did one good thing for users – protect them from CSRF! 🙂