Friday, December 26, 2014

The Anti-Learning Pattern

If learning were just the acquisition of knowledge, it wouldn't explain how news can affect people so differently.  Hence the need to frame "learning" in very precise terms, to allow for the idea of "anti-learning".


The Learning Pattern

We can think of learning in terms of process and need.  For example, a basic need might be hunger.  So any process that meets that need, and that is new to the person hearing about it, can be called "learning".

This way learning becomes a general game design pattern.  The problem the pattern addresses is any need.  The solution is a process to meet that need.


The Anti-Learning Pattern (Coping/Distraction)

The opposite of learning (yet similar enough to be misconstrued as valuable) is anti-learning.  Let's pretend it's the same problem: hunger.  Now if we provide a process which doesn't address the hunger, but instead distracts the person from their hunger, that process is anti-learning.  In short, a coping mechanism or distraction.

[I'm not going to get into situations where someone is obese and their hunger is habituated - that's a different health need pattern where the solution process would involve spending time with hunger, diet, and exercise.]


The point of even discussing an anti-learning pattern is that not all knowledge is beneficial.  Unless it addresses a need, it actually creates distraction.  Or it may benefit one party while exploiting another.  So there may be an interesting game dynamic available in this pattern.
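
As a rough sketch of that dynamic (illustrative C# with made-up names like Need and IProcess - nothing here comes from a real engine), the difference between the two patterns is whether the process reduces the need or only the player's awareness of it:

// Illustrative sketch only - Need, IProcess, etc. are made-up names.
class Need
{
    public string Name;       // e.g. "hunger"
    public float Severity;    // how strong the need actually is
    public float Awareness;   // how aware the player is of it
}

interface IProcess
{
    void Apply(Need need);
}

// Learning pattern: the process actually reduces the need.
class EatingProcess : IProcess
{
    public void Apply(Need need) { need.Severity -= 0.5f; }
}

// Anti-learning pattern: the need is untouched, only awareness drops.
class DistractionProcess : IProcess
{
    public void Apply(Need need) { need.Awareness -= 0.5f; }
}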


Sunday, November 30, 2014

The Anti-Wiener Pattern

Wiener

Have you heard of the "wiener" game design pattern?  It involves making a possible future state of the game visible to the player.  For example, equipment that costs 100,000 tokens, but your character only has 10 tokens - this is a wiener.  It shows "if you play long enough, you'll be able to afford this cool thing".

Another example is the Disneyland castle, which draws people to walk in that direction because they want to see the castle up close.


Sword Of The Stars II and Over-budget Technology Research

Over the weekend I bought a 4x game (called Sword of the Stars II).  The game is somewhat unfinished and has lots of bad reviews, but it was on sale and there was enough YouTube video evidence that I'd probably like it, so I took the risk and bought it.

Like most 4x games, it has you research technologies.  Like in some 4x games (Master of Orion), research can finish ahead of schedule.  To describe research progress, a progress bar shows 0 to 100%.  You can hover over the progress bar to see how many turns before it reaches 100%.  At 100% a notice comes up that the technology is now available, and sometimes this happens early.

Fine.  Everything is just as I've seen before in other 4x games.

What I didn't know, and which the UI didn't reflect at all, was that some research could go over budget!  So I'm playing along, innocently minding my own business, and eventually the progress bar fills to 100% and no new technology is available.  Nada!  I think "oh, just an off-by-one bug" and go to the next turn.  Still nothing!  2 more turns!  Nope!  I was completely convinced the game had a bug in it which wouldn't let some technologies be unlocked - which essentially was going to shelve the game for me.

What a rip-off!

After consulting the forums - I got a gentle rejoinder: "RTFM".  Now whenever I hear "RTFM" - I think: "that's fine if we're talking about programming."  RTFS.  Okay I'll read the source code.  I'll do whatever work it takes to get the job done.

But this is a game.  I'm not being paid to do this.  Someone else was paid to make this experience "fun" - and instead all I got was that "what a rip-off!" feeling.

So this feeling of "what a rip-off" led to the idea of an anti-wiener.


Anti-wiener

With an anti-wiener, the game has hidden a future possible state from the player, so that when that state comes up, the player is surprised and traumatized (especially if they feel they just bought a broken game).  This is probably unintentional, unless you really hate your players.

In this case, the progress bar implied that 100% was the edge condition for research, when actually the edge was beyond 100%.  In SotS2 the real edge is 200%: an over-budget project can take up to 200% of the original estimate before completion, and at 200% the technology is considered unavailable (i.e. just too hard).


[The solution that finally came to me is that the edge condition should always be clearly shown in the UI.  The bar should really show 200% as the full scale, with 100% being somewhere in the middle, which is what players would usually see when doing research.  This becomes a wiener pattern again, because the player will wonder "can research go beyond 100%?  It looks like it can with the progress bar."  Then there's no surprise and no RTFM - it's an intuitive design again.]
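
A minimal sketch of that bar (illustrative C# with made-up names, not SotS2's actual UI code): draw the fill against the real 200% edge, with a marker where the original 100% estimate sits.

// Illustrative sketch - renders research progress against the true edge (200%),
// not the 100% estimate.  Nothing here is SotS2's real code.
string RenderResearchBar(float progressPercent, int width = 40)
{
    const float edgePercent = 200f;                        // the real edge condition
    int filled = (int)(width * System.Math.Min(progressPercent, edgePercent) / edgePercent);
    int estimateMark = (int)(width * 100f / edgePercent);  // where the 100% estimate sits

    var bar = new System.Text.StringBuilder();
    for (int i = 0; i < width; i++)
    {
        if (i == estimateMark) bar.Append('|');            // mark the original estimate
        else bar.Append(i < filled ? '#' : '-');
    }
    return "[" + bar + "] " + progressPercent + "% of estimate (unavailable at " + edgePercent + "%)";
}

At the usual 100% the player sees a half-filled bar with the marker right at the fill line, which is itself a little wiener: it hints that research can run past the estimate.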



Saturday, September 27, 2014

raspberry-pi remote ARM compiles via VNC viewer

I'm running a raspberry-pi at home, logging in remotely via vncviewer.  A friend set up a DNS service which my home PC pings, so I can get to the r-pi from anywhere.  Originally I was doing this to use the r-pi as my git repository, but now I can use it as a remote ARM compiler from my laptop.





The only snag I ran into was after installing vncserver on the r-pi: it wasn't obvious that you have to add 5900 to the display number to get the port the vncserver serves (mine's running on :1, so log in via 5901).  Set up port forwarding on the router and an ssh connection.  I'm running from OSX and using the Java (jar) vncclient (tightvnc).

This is NOT secure (well allegedly the passwords are encrypted) - I couldn't get the SSH tunneling to work (even though I can establish an ssh connection in a shell), so ideally this should be on some account that can be wide open.  There are also claims you can avoid vnc programs and just use OSX screen sharing.


No VNC Just SSH Commands
Avoiding the vnc and just running commands over ssh requires entering a password every time (so I'd have to enter a password for the scp to the server, the ssh command, and the scp of results back).  Pretty tedious.  In OSX that approach also isn't helped by setting the .ssh/config to have:

Host *
   ControlMaster = auto
   ControlPath = ~/.ssh/master_%r@%h:%p

That's supposed to use an already existing ssh connection for all subsequent connections.  But it wouldn't make an ssh connection at all on OSX.  On PC it did log in, but it didn't help (with the added adventure of generating an AVG warning when it tries to make the connection - but only if you've set up ControlMaster=auto in the .ssh/config)!  Ugh!  On PC, trying to run an scp after an ssh was already open still asked for another password.  Debian Linux on the r-pi apparently doesn't support ControlMaster.


TightVNC on PC
When I ran tightvnc on a PC (Windows Vista running from home), it required me to set up a connection via ssh first (I used cygwin).  I'm not sure why the OSX version of tightvnc (via Java) doesn't use the ssh connection, but the compiled PC version looks like it does.


Friday, September 26, 2014

Genetic Algorithms Used to Search Solution Space

I keep losing this article:

http://cacm.acm.org/magazines/2009/11/48443-deep-data-dives-discover-natural-laws/fulltext

Years ago when it was first published, I went through the references and tried to understand how to reproduce the experiment, but got overwhelmed by some of the work.

The idea is that you provide what I'll call a "vocabulary" - a list of operations - which are then randomly arranged, and each arrangement is scored according to how closely it duplicates a dataset.  Call a single operation a gene, and for a large population of random solutions, propagate genes to the next generations to converge on higher-scoring solutions.
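
From memory, the shape of that idea looks roughly like the toy sketch below (my own illustration in C#, not the authors' code; the vocabulary, fitness function, and breeding scheme are all made up for the example).

// Toy genetic-algorithm sketch: a "gene" is one operation from a small
// vocabulary, a candidate is a sequence of genes applied to x, and the
// score is how closely the candidate reproduces a dataset.
using System;
using System.Linq;

class TinyGa
{
    // the "vocabulary": a list of operations
    static readonly Func<double, double>[] Vocabulary =
    {
        x => x + 1, x => x - 1, x => x * 2, x => x / 2, x => x * x
    };

    static readonly Random Rng = new Random();

    static double Evaluate(int[] genes, double x)
    {
        foreach (int g in genes) x = Vocabulary[g](x);
        return x;
    }

    // lower score = closer fit to the dataset
    static double Score(int[] genes, (double x, double y)[] data) =>
        data.Sum(d => Math.Abs(Evaluate(genes, d.x) - d.y));

    static int[] RandomGenes(int length) =>
        Enumerable.Range(0, length).Select(_ => Rng.Next(Vocabulary.Length)).ToArray();

    static int[] Breed(int[] a, int[] b)
    {
        // one-point crossover plus a small chance of mutation per gene
        int cut = Rng.Next(a.Length);
        var child = a.Take(cut).Concat(b.Skip(cut)).ToArray();
        for (int i = 0; i < child.Length; i++)
            if (Rng.NextDouble() < 0.05) child[i] = Rng.Next(Vocabulary.Length);
        return child;
    }

    static void Main()
    {
        // target dataset: y = (x + 1) * 2
        var data = Enumerable.Range(0, 10).Select(i => ((double)i, (i + 1) * 2.0)).ToArray();

        var population = Enumerable.Range(0, 100).Select(_ => RandomGenes(4)).ToList();
        for (int gen = 0; gen < 200; gen++)
        {
            // keep the best half, refill by breeding pairs of survivors
            population = population.OrderBy(p => Score(p, data)).Take(50).ToList();
            while (population.Count < 100)
                population.Add(Breed(population[Rng.Next(50)], population[Rng.Next(50)]));
        }
        Console.WriteLine("best score: " + Score(population[0], data));  // best from the last sort
    }
}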

The difficult work is in optimizing convergence.  Their work is very impressive, and looks like it's completely free for anyone interested in applying it.

http://www.sciencemag.org/content/suppl/2009/04/02/324.5923.81.DC1/1165893s1.mpg

Are you really sure your architecture is optimized?  This is the tool for answering that question.

continued...

Tuesday, September 23, 2014

The Ant Model of Career Building

I've recently gained a great interest in the Dunning-Kruger effect.  The oldest reference to this effect might be the story of The Emperor's New Clothes.  The emperor is vain, listens to his advisors too much without doing any work himself, and parades himself naked in front of the whole city while children exclaim "he's not wearing anything!"

The Dunning-Kruger effect is the phenomenon where people assess themselves as being more competent in a given field than they actually are.  They continue to have this high self-assessment even after repeated failures.  They've taken a test, scored low, and even after seeing the test results they continue to have a high self-assessment.  The only remedy is explaining to them what they should have done; then they finally admit "I'm not really good at this at all" even though, now that it's been explained, they actually understand the process better than they did before.

This is going to happen at all levels.  I've read a serious paper by a PhD in chemistry about biomorphs in nanoparticles - without it ever being mentioned that this work has no practical application.  [She was my last girlfriend.]

So of course I have to sometimes wonder - just how much of an emperor with no clothes am I?  Just how big are my blind spots?  It can be a worrisome question.  And this model is reinforced by almost every corporate culture I've been in to varying degrees.  The worst were psychopathic - where under-performers were harassed into leaving.  I really don't have any answer to this from a top-down perspective.  There will always be psychopaths.  There will always be high performers and low performers.

The only useful model I have to work with right now is what I'll call "The Ant" model.  That means just do the work that's in front of you, without worrying about what other people think, or even making comparisons.  All that little ant has to go by is some little pheromone trail he needs to follow.  It's all very clear and simple.  And life goes by very quickly.

Setting aside all the worry and internal concerns, the work that is done by any single person is no different than the work done by an ant.  Their lives are really not so different.  I like this model mostly because it's selfless, and liberating.  A happy, liberated ant is a productive ant.

Monday, June 23, 2014

Updating to a new ssh key on an exposed server

I recently realized I had exposed my server's ssh keys.  This was a beginner's blunder.

So the question then becomes how do I securely:

1) delete the old ssh key (so whoever has access will no longer be able to get in)

2) add a fresh new ssh key


To do this I had to first generate a new ssh key on my client, and make sure my ~/.ssh/config file was appropriately pointing to the new key files.

On the server side, just assume someone is logged in.  Type:

   users  | wc -w

If the result is greater than 1, someone else is logged in.  Even if nobody else is logged in, just assume that at any moment someone may attempt to log in.

To kill all other connections, I rebooted the server.  [Maybe there's a better way?]


After reboot, log back in.

Now edit the /etc/passwd file so that the current user you are logged in as cannot log in again.

Change this line:
   ec2-user:x:500:500:EC2 Default User:/home/ec2-user:/bin/bash

to look like this:
  ec2-user:x:500:500:EC2 Default User:/home/ec2-user:/sbin/nologin


Save the file.  [I do all file editing in vi.]


If this worked correctly, you can verify in another shell that attempting to log in no longer works.  Also verify that scp no longer works.  If these still work as before, then the method I've documented here is not for you.

Verify only 1 user is logged in.

[EDIT.140626] Verify /etc/init.d and /etc/rc.local are unchanged (that is, they aren't starting an unexpected script).

At this point you can edit the server's user account ~/.ssh/authorized_keys file to delete the old (exposed) key, and paste in the new public key you generated earlier.

Once the old key is deleted, it's safe to revert the /etc/passwd file back to the way it was, and log in with the new key.




Sunday, March 2, 2014

How to Safely Update OSX

EDIT2:

As per http://apple.stackexchange.com/questions/103261/what-solutions-exist-to-rectify-a-corrupt-user-account-in-os-x, I've had temporary success running the following in a terminal:

$ sudo su
# cp -R /Users/<corrupteduser>/Library/Caches /somebackupdirectory
# rm -r /Users/<corrupteduser>/Library/Caches/*

# reboot

After it rebooted, the account could be logged into and used normally, but closing all the apps and rebooting resulted in the account being corrupt again.  So it's just a temporary solution.


EDIT:
Right now, all I can recommend is to create a new user account if you find an account which crashes Finder and appears to be corrupt.  None of the other advice in this post appears to work.  I have yet to try always closing all apps before doing an update.  So far, every time I've done an update in Mountain Lion, my account gets corrupted and will eventually crash Finder.  Even changing the corrupted account name doesn't fix it.


Original Post Below:

I'm still somewhat of an OSX newb - and after several occurrences of updating the OS (10.8 Mountain Lion in my case) only to find it in an unusable state, here is what I recommend as an update process.


But first, the reasons why:

1) Even after finding that an update had left things in such a bad state that Finder would repeatedly crash, logging into a different user account worked fine.

2) All apps and working state in the other account were also still intact after the update.

So I'm concluding that the user account actually gets corrupted (quite regularly) when running an OSX update.


If you haven't already experienced this - good news!  You can still "do the right thing" so you'll be prepared if it ever does happen.  Here's how to prepare:

1) Create another admin account.  Call it OSupdate or similar.  The purpose of this account is to do updates only from it, and while doing the update, no other applications will be open (having other applications open might be what's causing the account corruption I'm seeing).

2) If possible, create a non-admin account which you normally work from.  This isn't really necessary for this process, but it's more secure than running from an admin account every day.  [Of course, some applications require an admin account, so if this is part of your normal workflow, you can just skip this step.]


CONCLUSION:
If you have the misfortune of a bad OSX update, the first thing to try is logging into a different user account if one is available, to verify that only your account is corrupted, rather than the whole OSX install.

For more details:
http://apple.stackexchange.com/questions/103261/what-solutions-exist-to-rectify-a-corrupt-user-account-in-os-x


UPDATE:
Following these steps http://support.apple.com/kb/ht1428 to change my corrupted account name (in an attempt at starting over while still having access to old files), just changing the account name seems to have put that account back into a runnable state.  I'll have to run it for a while before I'm confident that's really the case.

Sunday, February 16, 2014

Refactoring is awesome

Today my goal is to read the whole C# book (Programming C# 4.0), in skimming fashion.  I'll miss a lot - but occasionally little nuggets have a way of jumping out.

One of those nuggets is refactoring.

So... sometimes I'm a coding slob.  It's really very embarrassing.  It almost feels like the stack I have in my brain is full, and so I'm not willing to add yet another layer of function calls into the method I'm currently writing, so I wind up with code that looks like this:

void frankslongmethod()
{
    doAction1a();
    doAction1b();
    doAction1c();

    doAction2a();
    doAction2b();
    doAction2c();
}

And you can easily imagine this growing into a hideous monstrosity very quickly.

*** It turns out that Visual Studio and MonoDevelop have automated extracting functions (this step is called "refactoring" - specifically "Extract Method").  You can conveniently select lines of code, right click "Refactor", and it will bring up a dialog that lets you name the new function it will generate for you. ***


Afterwards, the code can look like this (depending on what code I selected and how I chose to name it):

void frankslongmethod()
{
   doAction1();

   doAction2();
}

void doAction1()
{
    doAction1a();
    doAction1b();
    doAction1c();
}

void doAction2()
{
    doAction2a();
    doAction2b();
    doAction2c();
}

It's also very slick - the automation process can figure out what arguments need to be passed to the generated functions.  Well worth trying out for yourself.
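
For example (my own illustration, not from the book): if the selected lines use a local variable or parameter, the generated function receives it as an argument:

// Before extraction (illustrative):
void Process(int count)
{
    Console.WriteLine("start");
    for (int i = 0; i < count; i++)
        Console.WriteLine(i);
    Console.WriteLine("done");
}

// After selecting the loop and choosing "Extract Method" - the tool
// notices the loop depends on 'count' and passes it as a parameter:
void Process(int count)
{
    Console.WriteLine("start");
    PrintNumbers(count);
    Console.WriteLine("done");
}

void PrintNumbers(int count)
{
    for (int i = 0; i < count; i++)
        Console.WriteLine(i);
}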

The beauty of it - my brain stack didn't need to be enlarged.  I just have to take this extra step of cleaning up my code with a refactoring step after I've finished my "write this algorithm as fast as you can" phase.

[And of course we can discuss that this adds an extra function call - but small methods like these may even be inlined by the JIT at run time, so no worries for now.  If you're hitting that wall, it's a problem that can be addressed - the more serious problem to solve is reducing the cognitive load on new programmers trying to understand what the code does.]