PhillipBlanton.com

"Save me, oh God, from people who have no sense of humor."
— Ludlow Porch

Setting up GitHub SSH on Linux

Have you ever run across an issue you remember having solved before, but can't remember how to solve it now; then you Google it, find a great article on it, and realize that you wrote it some years back?

That's what this article is. Every time I have to set up SSH on a new Linux development machine, I end up Googling bits and pieces here and there and cobbling together the solution myself. Then I move on, and some time later have to set up GitHub SSH on a new Linux machine again. Well, that just happened, so I decided to write it all down so that it doesn't trip me up again.

Before I get started with the dry terminal commands, let me emphasize the main thing that trips people up: GitHub gives you the HTTPS URL by default, but you'll want the SSH URL if you want to push and pull at the bash prompt without typing credentials.

To begin, fire up the new Linux machine (in this case I'm using Ubuntu Gnome 17.10) and start up a terminal.

Generate your new RSA keypair...

   $ ssh-keygen -t rsa -b 4096

  • It will default to saving the keypair to the .ssh directory in your home directory as "id_rsa". If that's fine, then just accept the default. If you already have an id_rsa keypair, then name it something else. Since this is a test, I called mine "/home/pblanton/.ssh/test_rsa".
  • Enter your passphrase (or leave it blank)
  • Enter it again, then the key will be saved.
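If you gave the keypair a non-default name like "test_rsa" above, the SSH client won't offer it to GitHub automatically. A Host entry in ~/.ssh/config fixes that. Here's a sketch; it writes to a temp file so you can try it safely, but on a real machine the target is ~/.ssh/config, and the key path is the example name from above:

```shell
# Using a temp file as a stand-in for ~/.ssh/config.
CONFIG=$(mktemp)

# Tell SSH which key to present when talking to github.com.
cat >> "$CONFIG" <<'EOF'
Host github.com
    User git
    IdentityFile ~/.ssh/test_rsa
    IdentitiesOnly yes
EOF

cat "$CONFIG"
```

IdentitiesOnly stops the client from offering every key in your agent, which matters if you have several and GitHub rejects you after too many tries.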

You will have two files in ~/.ssh/. One of them will be named according to what you specified. This is your private key. The other one has a ".pub" file extension. Open that one in a text editor and copy its contents to your paste buffer.

  • Log in to GitHub
  • Click on your picture in the top-right corner and select "Settings"
  • On the left, click on "SSH and GPG keys".
  • In the top right corner, click the green [New SSH Key] button.
  • Give it a meaningful title, like "Linux Dev Box" and paste the contents of the public key into the "key" field.
  • Click the green [Add SSH Key] button.

You should be good to go. You can verify the connection with " ssh -T git@github.com " — GitHub will greet you by username. Then clone a GitHub repo to the local machine like this...

Go to a directory where you want to store your repositories

Execute the following command (modified for your own account and repo)...

   $ git clone git@github.com:acctname/reponame.git

You should see the clone procedure run. If it complains about your credentials not being trusted, then have a look at the keys the SSH agent has loaded...

Execute the following command...

   $ ssh-add -l

You should see something like...

4096 SHA256:i2fLkp3x3Dy+V3GpnU5IBWFb0wVZoPBvRsYp4aRWwsL /home/pblanton/.ssh/id_rsa (RSA)
4096 SHA256:i2fLkp3x3Dy+V3GpnU5IBWFb0wVZoPBvRsYp4aRWwsL pblanton@ubuntu (RSA)

I have of course, shown fake keys for demo purposes.

If you don't see the expected keys, then run this command...

   $ ssh-add

Type in your passphrase (if any), and the SSH agent will ingest your keys. Try cloning again, and it should all work. You should be able to use git at the command line to your heart's content without being prompted for your credentials again.
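One more tip, tying back to the HTTPS-vs-SSH point above: if you already cloned a repo using the HTTPS URL, you don't have to re-clone it. Just point the remote at the SSH URL instead. A sketch, using the placeholder account and repo names from the clone command above (the init lines just fabricate a demo repo so the commands run anywhere; on a real clone you'd only need the set-url line):

```shell
# Fabricate a throwaway repo with an HTTPS remote, standing in for a real clone.
REPO=$(mktemp -d)
cd "$REPO"
git init -q .
git remote add origin https://github.com/acctname/reponame.git

# The actual fix: swap the HTTPS remote for the SSH one.
git remote set-url origin git@github.com:acctname/reponame.git

# Confirm the change.
git remote get-url origin    # prints git@github.com:acctname/reponame.git
```

After this, git fetch/pull/push on that repo go over SSH and use your new key.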

Time to start writing some shell scripts to automate everything. Automation is cool!

 

Make a bootable USB drive for ANY bootable ISO from Linux.

So, you have abandoned Microsoft Windows wholesale, choosing to perform most of your work in Linux or macOS. But now you need to make a bootable Windows 10 USB drive in order to set up a laptop for someone. You Google how to create a bootable disk "FROM LINUX", but all you get is different flavors of the same tutorial for making an Ubuntu live USB from Windows. Occasionally you'll find a tutorial for Linux that references some crappy UI app to do it for you.

That's not necessary.  It's pretty easy to make a bootable flash drive from Linux using ANY bootable ISO you have access to. Here's the command...

   $ sudo dd bs=4M if=blahblahblah.iso of=/dev/sda && sync

Break it down...

  • sudo -> because you have to be a super-user in order to do it.
  • dd -> data dump (dd) is the program we will use.
  • bs=4M -> Be sure to use four-meter long bullshit sticks *
  • if -> input file
  • of -> output file. In this case the flash drive is /dev/sda.
  • && -> chain another command IF the first command succeeds.
  • sync -> flushes the write buffers to ensure that the write operation is complete before you yank the drive.

* just kidding. bs is the block-size switch. We're telling dd to copy the data in 4-megabyte blocks. If you're on a Mac, be sure to use a lower-case "m" (bs=4m).

If your flash drive shows up with a partition like "/dev/sda1", be sure to use just "/dev/sda". You want to write the ISO contents to the raw drive, not to a partition. And double-check the device name before you run the command: on many systems /dev/sda is the internal system disk, and dd will cheerfully overwrite whatever you point it at.
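Not sure which device node is your flash drive? lsblk will show you before you pull the trigger (device names vary from machine to machine, so treat /dev/sda as an example, not a given):

```shell
# List all block devices with their sizes, types, and mount points.
# The flash drive is usually the one matching its advertised size,
# mounted somewhere under /media or /run/media.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```
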

Now you'll NEVER need Windows and Rufus ever again.

Getting a New Kali 2017.2 Install to apt update...

I recently installed Kali Linux 2017.2 in a VMWare VM. During the installation I was prompted for a mirror to use for package updates, but every available mirror returned an error saying it was invalid, so I had to finish with a "minimal installation", which the installer warned me against. My choices were to abandon the installation and start over from scratch, with no guarantee of a better outcome, or go with it and try to figure it out later.

After the installation finished and I booted into my fresh new Kali, I ran " apt update; apt upgrade -y " and found that it wouldn't update anything. I checked my /etc/apt/sources.list file: no online sources, and the CDROM source lines were commented out, so every time I ran " apt update ", the system was content that it needed no updates.

I updated the sources.list file to the following (plain text, for your cutty/pastie pleasure)...

deb http://http.kali.org/kali kali-rolling main contrib non-free
# For source package access, uncomment the following line
# deb-src http://http.kali.org/kali kali-rolling main contrib non-free

Save the file, and then re-run " apt update; apt upgrade -y ". It'll take a while to update/upgrade depending on the size of your pipe; and then you will be all updated and able to install stuff.
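If you'd rather not open an editor, the same edit can be done in one shot from the terminal. A sketch; it writes to a temp file so you can try it safely, but on the Kali box the target is /etc/apt/sources.list (and there you'd pipe through " sudo tee " instead of a plain redirect, since the file is root-owned):

```shell
# Temp file standing in for /etc/apt/sources.list.
SOURCES=$(mktemp)

# Write the kali-rolling repo line from above.
echo "deb http://http.kali.org/kali kali-rolling main contrib non-free" > "$SOURCES"

cat "$SOURCES"
```
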

Hack with reckless abandon!

Fixing The Crazy High Resolution of a VMWare Guest on a 4k Laptop Display

I have a 15" HP Spectre X360 laptop with a 4K display. Whenever I run a VMWare guest OS, it always upscales to 4K, which on a laptop makes the text much too small to read. I have gone through a number of different approaches that many people say fix the issue, but hadn't had any luck until today. Here's how I fixed it...

In VMWare Workstation, click "Edit | Preferences" then select the "Display" preferences.

Under "Autofit", de-select "Autofit window" and "Autofit guest".

Under "Full Screen", make sure "Stretch Guest (no resolution change)" is selected.


After that, start up your guest OS, and set the resolution using the display settings built into the guest OS. Then when you go full-screen, the guest OS will fill the screen as best it can, without the host passing on a resolution change message to it.

That was it for me. 

Note: Using the "View" menu on VMWare to resize, autosize, or fit the guest may make it all crazy again. When that happens, I find that if I manually specify the resolution in the guest again, then shut the guest down and restart it, making sure I ONLY stretch it to full screen, I can overcome it again.

There really needs to be a setting in VMWare that lets me cap a guest OS's resolution at a specified maximum. It should assume the full display is that max, and then scale the guest appropriately inside of those boundaries. Maybe this is hard for VMWare engineers because they aren't using the latest laptops with 4K displays. I upgraded to VMWare Workstation 14 Pro this morning in an attempt to fix this issue, and it still persists.

Good luck!

Deploying a website to AWS CloudFront with SSL/TLS and AutoRedirect to https.

Today I deployed a website I'm working on to AWS CloudFront and enabled SSL/TLS using a free Amazon certificate, with a correctly configured https redirect. To see it in action, hit gort.co at both http and https...

You shouldn't be able to view anything on http. All insecure requests should automatically redirect to https. Examining the certificate will reveal that it is a valid Amazon SSL cert.

The site runs serverless on Amazon S3 and CloudFront. It should be a very cost-effective way to deploy a simple business website that may need to scale, without purchasing hardware and infrastructure and their associated costs.

If you want to use Node.js and Lambda eventually, start with a simple HTML/JavaScript site configured as follows, then add the Lambda/Node.js functionality later. I'm a big fan of accomplishing complex tasks in baby steps. For now the site is pure JavaScript/HTML, so there is no overly complex Lambda configuration involved. Let me show you how I did it. I presume you have a JavaScript/HTML web application and a registered domain name to point at it. The domain I used, "gort.co", is registered with GoDaddy, so I'll cover switching the DNS from GoDaddy over to AWS Route 53 while leaving the name registered with GoDaddy. If you're getting a new domain, you can register it with AWS Route 53 to simplify things if you wish. I also like NameCheap.com.

If I've made any mistakes or important omissions, please comment below and I'll fix it.

Configuring your AWS S3 Bucket:

Log in to your Amazon AWS Console. If you don't have an Amazon account (dude, really?), you can create one and then sign up for the AWS Free Tier. You'll get basic services for free for one year. Read about it here. The free tier gives you up to 5GB of S3 storage which should be plenty for your website.

In the AWS Console...

  • Select the [Services V] button in the top-left corner.

  • Under the Storage section, select "S3"

  • In the S3 console, select [Create Bucket]
  • Give the new bucket a name (no dots and no capitalized characters) and save.
  • Click on the new bucket, select the Options tab and click the [Upload] button.
  • Drag your website files onto the Upload dialog and click [Upload].
  • After the files have finished uploading, back in the bucket details window, select the Properties tab. Click "Static Web Hosting" and configure it as follows...
    • Select "Use this bucket to host a website"
    • Configure the Index Document and Error Document to point to the respective documents. Mine is configured like this...

  • At the top of the Static Website Hosting window is a link to the endpoint.

    Copy that into your paste buffer, then click Save.
  • You should have a website up and running at the endpoint mentioned above. Browse to it to make sure it works. If not, fix any issues with your web files so that the site runs.
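If you end up re-deploying the site often, dragging files into the console gets old. Assuming you have the AWS CLI installed and configured with your credentials, a sync from the terminal does the same upload; the local path and bucket name below are placeholders, so substitute your own:

```shell
# Push the local site directory to the bucket; --delete removes files
# from the bucket that no longer exist locally.
aws s3 sync ./my-website s3://my-website-bucket --delete
```
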

Configuring CloudFront:

In order to support SSL/TLS and our custom domain, we need to use CloudFront. There are ways to configure a custom domain with just the S3 bucket, but CloudFront makes it easy to configure it all with Route53 and an Amazon-issued SSL/TLS cert, so we'll use that.

  • In the AWS Console, select "Services V" again, and this time click on CloudFront, under Networking & Content Delivery.

  • If you already have a distribution created for your new S3 bucket, then click on the distribution's ID. If you don't have one, then click on [Create Distribution] and [Get Started] under "Web".

  • Under Origin Domain Name, type in the S3 bucket's endpoint URL (minus the http:// part).

  • Under Default Cache Behavior Settings, leave everything set to default values.
  • Under Distribution Settings | Alternate Domain Names, enter your domain name(s). I wanted mine to work with the naked version as well as the "www" sub, so I set it like this.

  • Under Distribution Settings | SSL Certificate, choose "Custom SSL Certificate" and click the button [Request or Import a Certificate with ACM].
  • In the SSL Cert Request form, be sure to add the naked form of your domain name (without the "www") if you want users to be able to hit your site securely without any sub-domain. Here's how I configured mine...

  • Click [Review and Request] and follow the steps to get the cert approved and issued.
    • An email will be sent to the domain name owner of record. You (or that person) will need to approve the request in each email in order to get an SSL cert issued.
    • If you get multiple emails, then you will need to approve each one before the cert will be issued.
  • Back to the Create Distribution dialog, under Custom SSL Client Support, be sure NOT to select "All Clients" unless you absolutely need to serve clients using IE on Windows XP, or anyone on a very old version of Android on old hardware, and are willing to spend $600 per MONTH for the privilege.

  • Under Default Root Object, enter your default document. Mine is "index.html".
  • Enter a comment for the CloudFront entry under Comment.
  • Accept the rest of the defaults and click the [Create Distribution] button.

Configuring auto http to https redirect:

Now we'll configure the auto-redirect from http to https.

  • In your CloudFront distribution list, click on the new distribution for your website.
  • Click the "Behaviors" tab.
  • You should already have a behavior for the Default (*) pattern. Select it by clicking the check box on the left, and click the [Edit] button.
  • Under the Viewer Protocol Policy, select Redirect HTTP to HTTPS...

  • Leave the rest of the settings alone, scroll to the bottom and select [Yes, Edit].

Let's test it before we move on to configuring the domain name under Route 53...

On the General tab, under the CloudFront Distribution for your website, copy the value of "Domain Name" into your paste buffer.

Try navigating to it. You should be able to hit it at both http and https, but when you hit the http version, it will auto-redirect to https for you. You can also check from the terminal with " curl -I http://<your-cloudfront-domain> " and look for a 301 response pointing at the https URL. If that all works, then you're ready to configure Route 53.

Configuring your domain name with Route 53.

My domain name (gort.co) is registered with GoDaddy, so this tutorial will configure gort.co's DNS to use Amazon's Route 53 servers instead of GoDaddy's, and then configure the Route 53 DNS for my CloudFront site.

  • Go to the AWS Console, and select "Services V" again. This time, under Networking & Content Delivery, select "Route 53".

  • In the Route 53 console (you might have to "Get Started") Go into Hosted Zones and click the [Create Hosted Zone] button. Fill out the "Create Hosted Zone" dialog as follows...

  • Create the following record sets...

    The A and AAAA records are aliases that point to the CloudFront domain you tested before setting up the DNS.
  • Make a note of your name servers, and go log into GoDaddy's client console.

I'm getting an error on GoDaddy's console right now. Apparently they're having issues and I can't use their DNS management system at this time. I'll come back and edit this to show the steps later, but for now suffice to say you need to edit your GoDaddy DNS to use custom name-servers, then add in each of the nameservers specified under NS above. Be sure to use yours, not the ones assigned to me in the image above.

Give it a few minutes for GoDaddy's DNS and the new Route 53 DNS settings to propagate before testing. If it doesn't seem to be propagating fast enough, you can flush the DNS cache on your machine to force the issue. The commands below flush your system's DNS cache, forcing it to query the DNS system for the latest information on the requested domain name. You can also query a public resolver directly (for example, " dig +short NS yourdomain.com @8.8.8.8 ") to see whether the new records are visible outside your own cache yet.

Windows:
At a windows command prompt, type
    ipconfig -flushdns
then restart your browser.

Linux:
There are a number of different ways to skin a penguin, depending on the distribution you're using. On systemd-based distros, try
    sudo systemd-resolve --flush-caches
If you're running a local BIND resolver, flush its cache with
    sudo rndc flush
or restart it with
    sudo /etc/init.d/named restart

Apple:
At a command terminal, type
    sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
then restart your browser.

Cyber-Security Talent Shortage.

An article published in CSO Online back in September of 2016 stated that unemployment in the cyber-security field was zero percent, and that there were over 1 million unfilled jobs with nobody chasing them. Now I may be a bit over-critical, but isn't that the textbook definition of a NEGATIVE unemployment rate?

Many experts say we are currently sitting on a -5% unemployment rate in the cyber-security world and expect the shortage of qualified candidates to grow to upward of 3.5 million unfilled jobs by 2021.

http://www.csoonline.com/article/3200024/security/cybersecurity-labor-crunch-to-hit-35-million-unfilled-jobs-by-2021.html

Some think that part of the problem is companies trying to "hire a unicorn" by writing job descriptions with cross-cutting requirements that no single person is ever going to have; hence recruiters are unable to find anyone who's qualified.

https://securityintelligence.com/news/cybersecurity-talent-shortage-zero-unemployment-no-unicorns/

I keep getting calls from recruiters trying to place cyber-security experts in cubicles. Some are offering relocation packages and some are not. One client was willing to let a good candidate work remotely as long as they were willing to spend one week each month traveling to the client's offices in Northern Virginia... AT THE EMPLOYEE'S OWN EXPENSE.  :-/

Some employers are waking up and realizing that they must pay more than they are used to for cyber talent, AND allow them to work flexibly... meaning REMOTELY if possible. Whenever I hear of a hiring manager who says, "We need the best and are willing to pay $120k/yr. or $60/hr. W2 for the right person, and it is 100% onsite only!"

What they are really saying is,

"We want the best but aren't willing to pay for it, oh and limit your search to a 20-mile radius from our office. Be sure to tell the candidates how lucky they are to be considered by us, 'cause we're great. Oh... and we won't pay for relocation and it's a six-month contract, meaning the applicants get the pleasure of uprooting themselves from their current location at their expense for the privilege of working on a contract at a rate far below what they're worth and when we're done with them we're kicking them to the curb."

When you push back a little, the hiring managers say, "This is a technical resource, level 3 position and that's all our rate card pays for a position like this."  Um... you aren't in the driver's seat, hiring manager. The ball isn't in your court. Your rate card is not calibrated to the reality of the services you seek.

A friend bought a new pickup recently and the dealer wanted $55,000 for it. The friend told them that this is only a pickup and not a Mercedes, and that his rate card only allows up to $28,000 for pickups; but they just wouldn't let him take it home until he gave them $55,000 for it.

Is it any wonder these positions are languishing with no viable applicants?

Current roadblocks are...

  1. There just aren't as many cyber-security experts as are desperately needed. Each reported cyber-attack or data-breach represents only a small percentage of the actual activity, and creates more demand for experts to help mitigate the issue.
  2. Universities can't graduate cyber-security experts fast enough and even if they could, a freshly-minted undergrad doesn't have the requisite experience.
  3. Existing hiring practices are woefully inadequate to address the problem.
  4. Managers are unwilling to pay cyber-security experts the salaries necessary to lure them away from their current positions. In many cases this will be an amount far above what the manager himself makes.
  5. Most people are unwilling to relocate in order to take a job that can very easily be done remotely. 
  6. We are living in a time of huge disruption. Hiring managers don't understand the need, or the talent necessary to service the need. The talent pool, being mostly millennials, just won't accede to warming a cubicle in a client's building for eight to ten hours a day, on top of two hours of commute time. Those days are gone.

The rules aren't changing... they HAVE changed. Read The Year Without Pants and get with the program.

 

Enabling the audio controls in Chrome's tabs.

Some web pages start playing ads and bellowing at you when you load them. I HATE it. On Chrome I have to right click the tab and select "Mute tab". The result is that the little speaker icon on the offending tab shows a slash across it.

To allow the tab to play again, you have to right click it, and select "un-mute tab".

It sucks. I hate it and I wish muted tabs were the default. Short of that, here is a quicker way to shut the tabs up: enable Chrome's one-click tab muting...

    1. Type chrome://flags/#enable-tab-audio-muting into your address bar and press enter.
    2. Click Enable and restart Chrome.

Now, when you encounter a lousy, people-hating noisy tab, you can just click the speaker icon on the offending tab to mute or un-mute it. If you also want keyboard shortcuts for muting, the Mute Tab Shortcuts extension adds those.


Reversing the mouse wheel scroll on Windows 10.

Update 5/2/2017: My Windows 10 computer took a large new update yesterday called the "Windows 10 Creators Update", and it broke this fix. My mouse now scrolls in the un-natural Windows way again. After checking the registry, the FlipFlopWheel parameter had indeed been switched back to 0 by the update. This isn't OK, Microsoft. I had specifically set that value so that my mouse scrolls the way I want it to. The only way that value gets set to something other than 0 is that the user set it that way. For you to come in with your updates and break user-defined functionality is certainly NOT OK!

Update 8/31/2017: I have since installed two more large Microsoft updates and they each also reset my mouse scroll wheel direction to THEIR preference. I know it's a small thing, but it underscores Microsoft's lack of respect for their user base. There is literally no reason why Microsoft keeps resetting a value that the user has specifically set, whenever they install updates.

I am used to Macs now and have come to like the reversed mouse scroll wheel setting that they use. When I have to use a Windows or Linux machine, I always have to "fix" the mouse scroll because it drives me crazy.

Here's how to fix it on Windows. Copied from Volker Voecking's blog where he shows how to do it on Windows 7. Luckily it still works on Windows 10...

  1. Find the hardware ID of the mouse

    • Go to the mouse control panel
    • Select “Hardware” tab
    • Click “Properties” button
    • Select “Details” tab
    • From the drop-down list choose “Hardware IDs”
    • Save the VID*** entry ( e.g. VID_045E&PID_0039 )

  2. Find and change the corresponding configuration settings in the registry

    • Run regedit.exe
    • Open Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\HID
    • Here you should find an entry for the hardware ID of your mouse
    • In all sub-keys of the hardware id key look for the “DeviceParameters” key and change the “FlipFlopWheel” value from 0 to 1

  3. Make it work

    • Unplug the mouse
    • Count to five :-)
    • Plug the mouse back in

For Linux... 

I use Ubuntu Gnome and this works for me. Different distros / desktops may require different instructions. Good luck!

Create a file in your home directory called ".Xmodmap"

    • Run a terminal
    • Type cd ~ to get to your home directory if you're not already there.
    • Execute the following command (no sudo needed; the file lives in your own home directory, and creating it as root means you'd need sudo every time you want to edit it)
      gedit .Xmodmap
    • *Type the following line in the text file...
      pointer = 1 2 3 5 4 6 7 8 9 10 11 12
    • Save the file
    *Note that the 5 and 4 are reversed in the number list above. This is what flips the scroll wheel's direction.

Unplug the mouse for five seconds and then plug it back in. You can also apply the change immediately with " xmodmap ~/.Xmodmap " and confirm the new button order with " xmodmap -pp ".

Expecting Professionalism

This is a great presentation by Robert C. Martin. If you care about doing software development right, then watch...

Notes:

  1. We Will Not Ship Shit!
  2. We Will Always Be Deployable after each sprint.
  3. Stable Productivity.
  4. Inexpensive Adaptability - Easy change.
  5. Continuous Improvement over time.
  6. Fearless Competence thanks to unit tests.
  7. Extreme Quality with consistent issue tracking.
  8. Don't Dump On QA.
  9. No fragile system components.
  10. Cover For Each Other. Make one's self replaceable.
  11. Give honest estimates
  12. Say "No" constructively
  13. Continuous Aggressive Learning
  14. Mentoring - Perpetual Inexperience.

Shame on you CIA

Cisco recently announced a vulnerability affecting more than 300 OF THEIR SWITCH MODELS, uncovered in the recent WikiLeaks Vault 7 dump. Apparently the CIA discovered the vulnerability and created an exploit for it for their own nefarious purposes, rather than informing Cisco so they could fix it.

http://thehackernews.com/2017/03/cisco-network-switch-exploit.html

Those of you who blindly trust your government to "keep you safe", there you go. There should be sanctions levied against the CIA for this clear violation of public trust. There won't be though.

If you're using Cisco switches, you should disable telnet immediately and keep it disabled until further notice. Cisco will be pushing out the updates as soon as possible.