Saturday, September 27, 2014

raspberry-pi remote ARM compiles via VNC viewer

Running a raspberry-pi at home, logging in remotely via vncviewer.  A friend had set up a dynamic DNS service which my home PC pings, so I can get to the r-pi from anywhere.  Originally I was doing this to use the r-pi as my git repository, but now I can also use it as a remote ARM compiler from my laptop.

The only snag I ran into after installing vncserver on the r-pi: it wasn't obvious that you have to add 5900 to the display number to get the port the vncserver serves (mine's running on :1, so log in via port 5901).  Set up port forwarding on the router and an ssh connection.  I'm running from OSX and using the Java (jar) vncclient (tightvnc).
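
The arithmetic, for reference (the display number and hostname here are just examples):

# on the r-pi: start a server on display :1
$ vncserver :1

# a server on display :N listens on TCP port 5900+N, so for :1,
# forward port 5901 on the router and point the viewer at
# yourhost.example.com:5901 (or yourhost.example.com:1 in display notation)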

This is NOT secure (well, allegedly the passwords are encrypted) - I couldn't get the SSH tunneling to work (even though I can establish an ssh connection in a shell), so ideally this should be an account that can be left wide open.  There are also claims you can skip the vnc programs entirely and just use OSX screen sharing.
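
For anyone retrying the tunneling, the standard recipe is below (a sketch - the hostname and account are stand-ins; this is the part that wouldn't cooperate for me):

# forward local port 5901 through ssh to the vncserver on the r-pi
$ ssh -L 5901:localhost:5901 pi@yourhost.example.com

# then point the vnc viewer at localhost:5901; the vnc traffic
# rides inside the encrypted ssh connection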


No VNC Just SSH Commands
Avoiding the vnc and just running commands over ssh requires entering a password every time (so I'd have to enter a password for the scp to the server, the ssh command, and the scp of results back).  Pretty tedious.  On OSX that approach also isn't helped by setting .ssh/config to have:

Host *
   ControlMaster auto
   ControlPath ~/.ssh/master_%r@%h:%p

That's supposed to make all subsequent connections reuse an already-existing ssh connection.  But it wouldn't make an ssh connection at all on OSX.  On the PC it did log in, but it didn't help (with the added adventure of generating an AVG warning when it tries to make the connection - but only if you've set ControlMaster auto in .ssh/config)!  Ugh!  On the PC, trying to run an scp after an ssh was already open still asked for another password.  And the Debian on the r-pi apparently doesn't support ControlMaster.
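
For the record, here's how the multiplexing is supposed to behave when it works (hostnames and file names are stand-ins):

# the first connection becomes the master and creates the socket file
$ ssh pi@yourhost.example.com

# while that one stays open, these should reuse the master socket
# and skip the password prompts entirely
$ scp main.c pi@yourhost.example.com:
$ ssh pi@yourhost.example.com 'gcc main.c -o main'
$ scp pi@yourhost.example.com:main .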


TightVNC on PC
When I ran tightvnc on a PC (Windows Vista, running from home), it required me to set up an ssh connection first (I used cygwin).  I'm not sure why the OSX version of tightvnc (via Java) doesn't use the ssh connection, but the compiled PC version looks like it does.


Friday, September 26, 2014

Genetic Algorithms Used to Search Solution Space

I keep losing this article:

http://cacm.acm.org/magazines/2009/11/48443-deep-data-dives-discover-natural-laws/fulltext

Years ago when it was first published, I went through the references and tried to understand how to reproduce the experiment, but got overwhelmed by some of the work.

The idea is that you provide what I'll call a "vocabulary" - a list of operations - which are then randomly arranged, and each arrangement is scored according to how closely it reproduces a dataset.  Call a single operation a gene; then, over a large population of random solutions, propagate genes to the next generations to converge on higher-scoring solutions.
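
To make that concrete, here's a toy sketch (C#) of the gene/score/propagate loop.  This is my own illustration, not the system from the paper - the vocabulary, dataset, scoring, and mutation scheme are all placeholder choices:

using System;
using System.Linq;

class GeneticSearch
{
    static Random rng = new Random();

    // the "vocabulary": each gene is one operation applied to a running value
    static Func<double, double>[] vocab = {
        x => x + 1, x => x - 1, x => x * 2, x => x / 2, x => x * x
    };

    // a candidate solution is a fixed-length sequence of gene indices
    static double Run(int[] genes, double x)
    {
        foreach (int g in genes) x = vocab[g](x);
        return x;
    }

    // score a candidate by how closely it reproduces the dataset
    static double Score(int[] genes, double[] xs, double[] ys)
    {
        double err = 0;
        for (int i = 0; i < xs.Length; i++)
            err += Math.Abs(Run(genes, xs[i]) - ys[i]);
        return -err;   // higher is better
    }

    static int[] RandomGenes()
    {
        return Enumerable.Range(0, 4)
            .Select(_ => rng.Next(vocab.Length)).ToArray();
    }

    static void Main()
    {
        // toy dataset generated by y = 2x + 2, for the search to rediscover
        double[] xs = { 1, 2, 3, 4 };
        double[] ys = { 4, 6, 8, 10 };

        // large population of random solutions
        var pop = Enumerable.Range(0, 200).Select(_ => RandomGenes()).ToList();

        for (int gen = 0; gen < 100; gen++)
        {
            // keep the fittest half...
            pop = pop.OrderByDescending(p => Score(p, xs, ys)).Take(100).ToList();

            // ...and refill by propagating survivors with one mutated gene
            var children = pop.Select(p => {
                var c = (int[])p.Clone();
                c[rng.Next(c.Length)] = rng.Next(vocab.Length);
                return c;
            }).ToList();
            pop.AddRange(children);
        }

        var best = pop.OrderByDescending(p => Score(p, xs, ys)).First();
        Console.WriteLine("best error: " + (-Score(best, xs, ys)));
    }
}

A real system like theirs evolves whole expression trees and does much cleverer selection - this is just the skeleton of the idea.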

The difficult work is in optimizing convergence.  Their work is very impressive, and it looks like it's completely free for anyone interested in applying it.

http://www.sciencemag.org/content/suppl/2009/04/02/324.5923.81.DC1/1165893s1.mpg

Are you really sure your architecture is optimized?  This is the tool for answering that question.

continued...

Tuesday, September 23, 2014

The Ant Model of Career Building

I've recently gained a great interest in the Dunning-Kruger effect.  The oldest reference to this effect might be the story of The Emperor's New Clothes.  The emperor is vain, listens to his advisors too much without doing any work himself, and parades naked in front of the whole city while a child exclaims "he's not wearing anything!"

The Dunning-Kruger effect is the phenomenon where people assess themselves as more competent in a given field than they actually are.  They keep this high self-assessment even after repeated failures: they take a test, score low, and even after seeing the results continue to rate themselves highly.  The only remedy is when what they should have done is explained to them and they finally admit "I'm not really good at this at all" - even though, now that it's been explained, they actually understand the process better than they did before.

This is going to happen at all levels.  I've read a serious paper by a PhD in chemistry about biomorphs in nanoparticles - without it ever being mentioned that this work has no practical application.  [She was my last girlfriend.]

So of course I sometimes have to wonder - just how much of an emperor with no clothes am I?  Just how big are my blind spots?  It can be a worrisome question.  And this worry is reinforced, to varying degrees, by almost every corporate culture I've been in.  The worst were psychopathic - under-performers were harassed into leaving.  I really don't have any answer to this from a top-down perspective.  There will always be psychopaths.  There will always be high performers and low performers.

The only useful model I have to work with right now is what I'll call "The Ant" model.  That means just do the work that's in front of you, without worrying about what other people think, or even making comparisons.  All that little ant has to go by is some little pheromone trail he needs to follow.  It's all very clear and simple.  And life goes by very quickly.

Beside all the worry and internal concerns, the work done by any single person is no different from the work done by an ant.  Their lives are really not so different.  I like this model mostly because it's selfless, and liberating.  A happy, liberated ant is a productive ant.

Monday, June 23, 2014

Updating a new ssh key on an exposed server

I recently realized I had exposed my server's ssh keys.  This was a beginner's blunder.

So the question then becomes how do I securely:

1) delete the old ssh key (so whoever has access will no longer be able to get in)

2) add a fresh new ssh key


To do this I first had to generate a new ssh key on my client, and make sure my ~/.ssh/config file was pointing at the new key files.
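
Something like this (the key file name and host alias are my own choices):

# generate the replacement keypair on the client
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa_new

# ~/.ssh/config entry pointing at the new private key
Host myserver
   HostName myserver.example.com
   User ec2-user
   IdentityFile ~/.ssh/id_rsa_new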

On the server side, just assume someone is logged in.  Type:

   users  | wc -w

If the result is greater than 1, someone else is logged in.  Even if nobody else is logged in, just assume that at any moment someone may attempt to log in.

To kill all other connections, I rebooted the server.  [Maybe there's a better way?]
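
(One possible answer, which I haven't tried: kill the other sessions by their terminal instead of rebooting.)

# 'who' lists each login session and its terminal (e.g. pts/1)
$ who

# kill everything attached to a suspect terminal (pts/1 is an example)
$ sudo pkill -t pts/1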


After reboot, log back in.

Now edit the /etc/passwd file so that the current user you are logged in as cannot log in again.

Change this line:
   ec2-user:x:500:500:EC2 Default User:/home/ec2-user:/bin/bash

to look like this:
  ec2-user:x:500:500:EC2 Default User:/home/ec2-user:/sbin/nologin


Save the file.  [I do all file editing in vi.]
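
[Aside: the same shell change can be made in one command instead of hand-editing, assuming usermod is on the box:]

$ sudo usermod -s /sbin/nologin ec2-user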


If this worked correctly, you can verify in another shell that attempting to log in no longer works.  Also verify that scp no longer works.  If these still work as before, then the method I've documented here is not for you.

Verify only 1 user is logged in.

[EDIT.140626] Verify /etc/init.d and /etc/rc.local are unchanged (that is, they aren't starting an unexpected script).

At this point you can edit the server's user account ~/.ssh/authorized_keys file to delete the old (exposed) key, and paste in the new public key you generated earlier.
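
Roughly (id_rsa_new.pub being whatever you named the key generated earlier):

# on the client: print the new public key, then copy it
$ cat ~/.ssh/id_rsa_new.pub

# on the server: delete the old key's line, paste in the new one
$ vi ~/.ssh/authorized_keys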

Once the old key is deleted, it's safe to revert the /etc/passwd file back to the way it was, and log in with the new key.

Sunday, March 2, 2014

How to Safely Update OSX

EDIT2:

As per http://apple.stackexchange.com/questions/103261/what-solutions-exist-to-rectify-a-corrupt-user-account-in-os-x, I've had temporary success running in a terminal:

$ sudo su
# cp -R /Users/<corrupteduser>/Library/Caches /somebackupdirectory
# rm -r /Users/<corrupteduser>/Library/Caches/*

# reboot

After the reboot, the account could be logged into and used normally, but closing all the apps and rebooting again left the account corrupt once more.  So it's just a temporary solution.

EDIT:
Right now, all I can recommend is to create a new user account if you find an account that crashes Finder and appears to be corrupt.  None of the other advice in this post appears to work.  I have yet to try always closing all apps before doing an update.  So far, every time I've done an update in Mountain Lion, my account gets corrupted and will eventually crash Finder.  Even changing the corrupted account name doesn't fix it.


Original Post Below:

I'm still somewhat of an OSX newb - and after several occurrences of updating the OS (10.8 Mountain Lion in my case) only to find it in an unusable state, here is the update process I recommend.


But first, the reasons why:

1) Even after an update had left things in such a bad state that Finder would repeatedly crash, logging into a different user account worked fine.

2) All apps and working state in the other account were also still intact after the update.

So I'm concluding that the user account actually gets corrupted (quite regularly) when running an OSX update.


If you haven't already experienced this - good news!  You can still "do the right thing" so you'll be prepared if it ever does happen.  Here's how to prepare:

1) Create another admin account.  Call it OSupdate or similar.  The purpose of this account is to do updates only from it, with no other applications open while the update runs (which might be what's causing the account corruption I'm seeing).

2) If possible, create a non-admin account which you normally work from.  This isn't really necessary for this process, but it's more secure than running from an admin account every day.  [Of course, some applications require an admin account, so if this is part of your normal workflow, you can just skip this step.]


CONCLUSION:
If you have the misfortune of a bad OSX update, the first thing to try is logging into a different user account, if one is available, to verify that it's only your account that's corrupted rather than the whole OSX install.

For more details:
http://apple.stackexchange.com/questions/103261/what-solutions-exist-to-rectify-a-corrupt-user-account-in-os-x


UPDATE:
Following these steps http://support.apple.com/kb/ht1428 to change my corrupted account name (in an attempt to start over while still having access to the old files), just changing the account name seems to have put that account back into a runnable state.  I'll have to run it for a while before I'm confident that's really the case.

Sunday, February 16, 2014

Refactoring is awesome

Today my goal is to read the whole C# book (Programming C# 4.0), in skimming fashion.  I'll miss a lot - but occasionally little nuggets have a way of jumping out.

One of those nuggets is refactoring.

So... sometimes I'm a coding slob.  It's really very embarrassing.  It almost feels like the stack in my brain is full, and I'm not willing to add yet another layer of function calls to the method I'm currently writing, so I wind up with code that looks like this:

void frankslongmethod()
{
    doAction1a();
    doAction1b();
    doAction1c();

    doAction2a();
    doAction2b();
    doAction2c();
}

And you can easily imagine this growing into a hideous monstrosity very quickly.

*** It turns out that Visual Studio and MonoDevelop have automated creating functions (the step is called "refactoring" - this particular one is Extract Method).  You can select lines of code, right click "refactor", and a dialog lets you name the new function it will generate for you. ***


Afterwards, the code can look like this (depending on what code I selected and how I chose to name it):

void frankslongmethod()
{
   doAction1();

   doAction2();
}

void doAction1()
{
    doAction1a();
    doAction1b();
    doAction1c();
}

void doAction2()
{
    doAction2a();
    doAction2b();
    doAction2c();
}

It's also very slick - the automation can figure out what arguments need to be passed to the generated functions.  Well worth trying out for yourself.
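
For example, if the selected lines read a local variable, the generated function picks it up as a parameter - something like this (the names here are made up):

void frankslongmethod()
{
    int count = getCount();   // hypothetical local used by the selection
    doAction1(count);
}

void doAction1(int count)
{
    doAction1a();
    doAction1b(count);        // because the selected code used 'count',
    doAction1c();             // the refactoring added it as a parameter
}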

The beauty of it - my brain stack didn't need to be enlarged.  I just have to take the extra step of cleaning up my code with a refactoring pass after I've finished my "write this algorithm as fast as you can" phase.

[And of course we can discuss the cost of the extra call - but that might even be inlined by the compiler at run time, so no worries for now.  If you're hitting that wall, it's a problem that can be addressed - the more serious problem is reducing the cognitive load on new programmers trying to understand what the code does.]

Thursday, November 14, 2013

Unity3d Build Target Switch Avoidance with Git

To avoid the time a build target switch takes (and going from OSX to iOS is very long), I recommend duplicating the whole git repository in another directory.  Like this:

./MyProject.git.iOS/MyProject

./MyProject.git.OSXtarget/MyProject

You can run a git clone from a remote, or you can do a git clone from your local hard drive:

Franks-MacBook-Pro:MyProject.git.OSXtarget frank$ git clone ../MyProject.git.iOS/MyProject
Cloning into 'MyProject'...
done.
Checking connectivity... done
Checking out files: 100% (3828/3828), done.
Franks-MacBook-Pro:MyProject.git.OSXtarget frankbraker$ ls
MyProject
Franks-MacBook-Pro:MyProject.git.OSXtarget frankbraker$ cd MyProject/
Franks-MacBook-Pro:MyProject frankbraker$ ls
Assets Heapshots ProjectSettings


If your .gitignore is set up as per HowToSetUp.gitignoreForUnity3D, the clone will be missing library files: it will take some time when you first open the project to import assets, and some plugin menus may be missing.  I don't know the deep mojo, but I'll religiously close and reopen, close and reopen again, and the plugin menus eventually show up.

The beauty is, if you make changes here, just check them in, push, and you can pull from your other target repository to get those changes, and the target switch time is much faster.

I also noticed that push wants to push to your local drive (which didn't work for me).  I had to set up a remote like the one in the original target's git directory.
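
Something like this (the remote name and server URL are stand-ins for whatever your original repository uses):

$ cd MyProject.git.OSXtarget/MyProject

# the clone's 'origin' is the local path it was cloned from; add a
# remote that matches the one the original repository pushes to
$ git remote add server git@myserver.example.com:MyProject.git
$ git push server master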