rt-csp

roland's fun CSP for lithium repo

24April2017 repeat test from 22March2017

# 1
cp ~/Downloads/14april2017-mozilla.prod-csp-sanitized-report.csv .
# 2
tr -d '\r' < 14april2017-mozilla.prod-csp-sanitized-report.csv \
> unix-line-endings-14april2017-mozilla.prod-csp-sanitized-report.csv
# 3
./print-domain.rb unix-line-endings-14april2017-mozilla.prod-csp-sanitized-report.csv \
2>14april2017-stderr-mozilla-domains.txt >14april2017-stdout-mozilla-domains.txt
# 4
cat 14april2017-stdout-mozilla-domains.txt | sort | \
uniq > 14april2017-unique-mozilla-domains.txt
# 5
grep FIELD3 14april2017-stderr-mozilla-domains.txt | sort \
| uniq > 14april2017-stderr-non-http-non-https-field2.txt
# 6
grep URI 14april2017-stderr-mozilla-domains.txt | sort | uniq
/Users/rtanglao/.rbenv/versions/2.3.0/lib/ruby/2.3.0/uri/rfc3986_parser.rb:67:in `split': bad URI(is not URI?): http://support.mozilla.org/skins/2360640/fonts/bootstrap/glyphicons-halflings-regular%woff (URI::InvalidURIError)
PublicSuffix::DomainNotAllowed^^^ URI:nikkomsgchannel
PublicSuffix::DomainNotAllowed^^^ URI:s3.amazonaws.com
PublicSuffix::DomainNotAllowed^^^ URI:s3.eu-central-1.amazonaws.com
  • 2. copy the old good/bad file and hand-edit 14april2017-mozilla-good-bad-domains.md, using emacs ediff to compare 14april2017-unique-mozilla-domains.txt against unique-mozilla-domains.txt (see the sketch after the cp command below)
cp mozilla-good-bad-domains.md 14april2017-mozilla-good-bad-domains.md
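A sketch of the ediff step; ediff-files is the stock Emacs command for a two-file diff session, but the exact invocation used here is an assumption:
emacs --eval '(ediff-files "14april2017-unique-mozilla-domains.txt" "unique-mozilla-domains.txt")'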

22March2017

  • 1. A better approach is to parse the CSV file into an array
  • 2. for each row, take the URI field (values[2]), find its domain, and print it to stdout
  • 3. then pipe to uniq
  • 4. sketch to get the URI
require 'rubygems'
require 'ccsv'

file = ARGV[0] # path to the CSV report

Ccsv.foreach(file) do |values|
  puts values[2] # values[2] (the third column) holds the URI
end
  • 5. sketch to get the domain from a URI (in irb, after require 'uri' and require 'public_suffix')
uri = URI.parse("https://support.mozilla.org/t5/user/viewprofilepage/user-id/873432")
=> #<URI::HTTPS https://support.mozilla.org/t5/user/viewprofilepage/user-id/873432>
domain = PublicSuffix.parse(uri.host)
domain.domain
=> "mozilla.org"
  • 6. get rid of the DOS line endings
tr -d '\r' < mozilla.prod-csp-sanitized-report.csv \
> unix-line-endings-mozilla.prod-csp-sanitized-report.csv
  • 7. get all the domains
./print-domain.rb  unix-line-endings-mozilla.prod-csp-sanitized-report.csv \
2>stderr-mozilla-domains.txt >stdout-mozilla-domains.txt
  • 8. get the unique domains
cat stdout-mozilla-domains.txt | sort | \
uniq > unique-mozilla-domains.txt
  • 9. get non HTTP and non HTTPS field2
grep FIELD3 stderr-mozilla-domains.txt | sort | \
uniq > stderr-non-http-non-https-field2.txt
  • 10. get Public Suffix bad domains (PublicSuffix most likely rejects s3.amazonaws.com because s3.amazonaws.com is itself an entry on the Public Suffix List, so there is no registrable domain below it; the http versus https scheme is irrelevant. See the small demo after the output below.)
rtanglao13483:rt-csp rtanglao$ grep URI stderr-mozilla-domains.txt 
PublicSuffix::DomainNotAllowed^^^ URI:s3.amazonaws.com
PublicSuffix::DomainNotAllowed^^^ URI:s3.amazonaws.com
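A minimal demo of why PublicSuffix refuses s3.amazonaws.com; the ignore_private option is an assumption about the publicsuffix gem version in use:

require 'public_suffix'

begin
  PublicSuffix.parse("s3.amazonaws.com")
rescue PublicSuffix::DomainNotAllowed => e
  # s3.amazonaws.com matches a Public Suffix List rule outright, so there is
  # no registrable domain below it for PublicSuffix to return
  puts e.message
end

# skipping the PSL's private-domain rules yields a registrable domain again
puts PublicSuffix.parse("s3.amazonaws.com", ignore_private: true).domain
# => "amazonaws.com"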
  • 11. now make a file with all the domains: start with unique-mozilla-domains.txt, append stderr-non-http-non-https-field2.txt, and manually add http://s3.amazonaws.com
cat unique-mozilla-domains.txt stderr-non-http-non-https-field2.txt > mozilla-good-bad-domains.md
echo "http://s3.amazonaws.com" >> mozilla-good-bad-domains.md

20March2017

Working on case 00134461, which is referenced in CSP bug 1339940 as well as HSTS bug 1340056.

grep -v "[0-9a-z]*.addthis.com[0-9a-z]*,,,," mozilla.prod-csp-sanitized-report.csv \
> addthis.com-removed-mozilla.prod-csp-sanitized-report.csv
grep -v "[0-9a-z\/:]*.support.mozilla.org[-A-Z0-9a-z\/]*,,,," \
addthis.com-removed-mozilla.prod-csp-sanitized-report.csv > \
support.mozilla.org-removed-mozilla.prod-csp-sanitized-report.csv
grep -v "[0-9a-z\/:]*.youtube.com[-A-Z0-9a-z_=\/\&\?]*,,,," \
support.mozilla.org-removed-mozilla.prod-csp-sanitized-report.csv > \
youtube.com-removed-mozilla.prod-csp-sanitized-report.csv
grep -v "[0-9a-z\/:]*.addthisedge.com[-A-Z0-9a-z_=\/\&\?]*,,,," \
youtube.com-removed-mozilla.prod-csp-sanitized-report.csv > \
addthisedge.com-removed-mozilla.prod-csp-sanitized-report.csv
ggrep -Pv ",,[a-z\:\/\.]*support\.mozilla\.org[A-Z0-9a-z_=\/\&\?\-\%\.]*,,,," \
addthisedge.com-removed-mozilla.prod-csp-sanitized-report.csv > \
unicodesupport.mozilla.org-removed-mozilla.prod-csp-sanitized-report.csv
ggrep -Pv ",,[0-9a-z\:\/\.]*addthis\.com[A-Z0-9a-z_=\/\&\?\-\%\.]*,,,," \
unicodesupport.mozilla.org-removed-mozilla.prod-csp-sanitized-report.csv > \
reallyaddthis.com-removed-mozilla.prod-csp-sanitized-report.csv
ggrep -Pv ",,[-0-9a-z\:\/\.]*mxpnl\.net[A-Z0-9a-z_=\/\&\?\-\%\.]*,,,," \
reallyaddthis.com-removed-mozilla.prod-csp-sanitized-report.csv > \
mxpnl.net-removed-mozilla.prod-csp-sanitized-report.csv
ggrep -Pv ",,[-0-9a-z\:\/\.]*addthisedge\.com[A-Z0-9a-z_=\/\&\?\-\%\.]*,,,," \
mxpnl.net-removed-mozilla.prod-csp-sanitized-report.csv > \
reallyaddthisedge.com-removed-mozilla.prod-csp-sanitized-report.csv
grep -v vpaid.js \
reallyaddthisedge.com-removed-mozilla.prod-csp-sanitized-report.csv >\
vpaid.js-removed-mozilla.prod-csp-sanitized-report.csv
  • 28. This (remove vpaid.js) removes about 150 lines
  • 29. First line of vpaid.js-removed-mozilla.prod-csp-sanitized-report.csv: about:blank,font-src,data,https://s7.addthis.com,2,3550,5
  • 30. Remove more references to addthis
grep -v addthis \
vpaid.js-removed-mozilla.prod-csp-sanitized-report.csv > \
really-really-addthis.com-removed-mozilla.prod-csp-sanitized-report.csv
grep -v stickyads \
really-really-addthis.com-removed-mozilla.prod-csp-sanitized-report.csv >\
stickyadstv.com-removed-mozilla.prod-csp-sanitized-report.csv
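The chain of intermediate files above could also be done in a single pass; a sketch, assuming the goal is simply to drop every row that mentions any of these hosts or strings (the output filename is made up):

ggrep -Ev 'addthis|support\.mozilla\.org|youtube\.com|mxpnl\.net|vpaid\.js|stickyads' \
mozilla.prod-csp-sanitized-report.csv > all-noise-removed-mozilla.prod-csp-sanitized-report.csv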
