
07 October, 2015

The Volatile Security of Volatile Memory

I forgot about yesterday...
It is the black box in every system, even our brain.  Volatile memory goes by many names: working memory, temporary memory, RAM, or just memory.  Whatever your preference, when you mention it you are most likely referring to the area of a system in which data is stored for a relatively short period until it is used and then discarded (or transferred to persistent storage).  One cannot imagine a system without some form of memory; even if it shares the same area where data is stored permanently, some region is still needed for temporary calculations.

One of the major differences between RAM and persistent storage is that RAM typically holds data about the processes currently executing, along with the data we are working on right now that will soon be discarded (yes, I hear your screams: persistent storage does that too, but it also holds data we haven't looked at for months).  On top of that, hard disks enjoy the possibility of being fully encrypted: they cannot be read unless the key is provided.  This is not possible for RAM, primarily because the CPU cannot work with encrypted instructions.

I do not mean that the CPU is unable to process encrypted data and convert it to plain text; what I am referring to is its inability to understand encrypted commands (opcodes) or to work on encrypted data as data rather than as a decryption payload.  Say we have the binary value 13 = 1101 and we want to add 5 = 101.  A simple XOR encrypter gives us 0111 and 000 for the keys 1010 and 101 respectively.  Adding 0111 and 000 does not give the expected result of 18 = 10010.  The values have to be in plain text before actual processing.  XOR is cheap and integral to CPUs, so decrypting the values is trivial; once decrypted, they can be added.
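The mismatch is easy to verify with shell arithmetic, using the same toy values and XOR keys as above:

```shell
a=13; b=5              # plaintext values; 13 + 5 should give 18
ka=10; kb=5            # the XOR keys 1010 and 101 from the example
ea=$(( a ^ ka ))       # "encrypted" 13 -> 0111 (7)
eb=$(( b ^ kb ))       # "encrypted" 5  -> 000  (0)
echo "ciphertext sum: $(( ea + eb ))"                 # 7, not 18
echo "decrypt first:  $(( (ea ^ ka) + (eb ^ kb) ))"   # 18
```

Adding the ciphertexts gives garbage; only after XOR-ing the keys back out does the addition make sense.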

But here is the problem - where is the key stored?  Of course, working memory.  What is the point of encrypting the data in RAM when the key is in the same RAM?  What is the point of encrypting RAM after all?

Boom!


We encrypt disks because they can be removed or carried around, yet still contain data; RAM, on the other hand, holds data only until we turn off the system (or a bit longer, if you're into memory freezing and forensics).  So, we think, RAM is inaccessible to would-be hackers.  Or so we used to think.

Recent research by various people and organisations (Sophos, Brian Krebs, Volatility Labs, among others) has identified simple and small malware that looks up processes, maps their memory regions, copies the contents, and ships them off to the attacker's server for them to enjoy.  And by the way, the kind of data involved was not your ex's text messages, but the PIN to your credit card, so it's a bit more expensive, I would say.

Use only for great dinners.

I did my own research (and eventually a BSc. thesis) on this subject, and it is quite scary knowing that the very heart of your system may be so easily compromised.  What's worse is when you enter your PIN into some other system over which you have no control... God knows what's running on it and where your data goes.  Antivirus software barely has an idea how to detect such an attack, and neither do firewalls, internet protection, or whatever you have.  If they did, they would block your debugger too, because that's how it works - like a debugger.  It's like a kitchen knife used for a murder - you cannot ban knives.




Here's a short and sweet step-by-step on how you can scrape your memory.  It's not intended as an attack on anyone, and it wouldn't be easy anyway: it succeeds only if your target cannot protect their networks and you manage to get in.  The sample was done on Linux; Windows would be quite different but still very possible (the Target attacks were in fact on Windows).  So here it goes:

A dummy little program was written in C.  All it did was store a username and password (entered using getpass() for increased security) along with a series of credit card numbers that are "swiped" into the system.

insertion.png
Swiping cards
We then find the PID of the running process with ps aux | grep scrape (the program here is named scrape, but it could be something like POSSwiper)

get pid.png
Getting the PID

Now we can get all the memory regions and maps used by our process.  The /proc directory gives us a hand there.

dump maps.png
/proccing to analyse the memory

We are interested in the heap space of our program, which shows up nicely in the fourth line, ranging from address 0x958000 to 0x979000.  Next we fire up the actual scraper (which is, in our case, a kitchen knife: a legitimate gdb debugger).

gdb dump.png
Dumping memory in just one line!

GDB prints a bunch of text; we're only interested in how we attached to the process (gdb --pid <PID>) and how we stole the memory (dump memory <output file> 0x958000 0x979000), using the exact heap range we got from /proc.  The memory is dumped to the file we choose.  Of course, this requires administrator rights, but as one might expect, tens or hundreds of POS devices will most likely share the same password, probably the default one (such a typical security breach - I found the password to my ISP's router on a public forum...).

Now, onto the next step - the analysis, if you can call it that.  The data dumped into the file comes straight from RAM, so as expected it is binary.  Linux simplifies the analysis with another tool - strings.  All it does is scan a file and spit out every string it can find.  So we pass the dump to it and get a nice list of strings, including the password (you didn't see it in the first screenshot because of getpass()) and all the card numbers.

acquire strings.png
The gold mine
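For reference, the steps above (minus the screenshots) condense into a handful of commands.  The gdb addresses are the ones from my run, so substitute your own from the maps output; here the current shell stands in for the target process:

```shell
# Step 1 (finding the PID) was just:  ps aux | grep scrape
# Here we use the current shell ($$) as a stand-in target.

# Step 2: locate the heap range in the process memory map
grep '\[heap\]' "/proc/$$/maps"

# Step 3: attach gdb and dump that range to a file, e.g.
#   gdb --batch --pid <PID> -ex "dump memory heap.dump 0x958000 0x979000"
# (substitute the start/end addresses printed by step 2)

# Step 4: pull the readable strings out of the binary dump
#   strings heap.dump
```

The gdb and strings invocations are left as comments since they need a live target and root (or at least ptrace) rights.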

That is all.  Now go and whitelist the processes on your system, before someone gets to scrape the memory off it.

04 October, 2015

Linux, Virtualisation and some Performance monitoring

P.S.  This post is more of a noob's voyage towards better virtualisation on Linux than some professional guidance.
 
A few days ago I was facing an issue on my Linux machine where performance suddenly dropped to near unusability while the hard disk LED was on overdrive.  My first thought was that there may be some excessive swapping going on.  The problem was, though, how to identify what was causing this rather than what was happening.

Cheesy image for computers!


I could have guessed the reason, since I had just turned on maybe the 10th VM in VMware Workstation.  Still, it was not immediately obvious which VM might be swapping rapidly or why it was doing so (there shouldn't be much memory usage during startup).

As yet, I haven't fully worked out what it was, but messing with some config files did the trick up to a point.  First of all I limited VMware to 16GB of RAM (out of 32) and configured it to swap as much as possible.  I was led to believe that VMware's and the kernel's swapping mechanisms weren't on the same terms, which ended up with me bashing (excuse the pun) the whole system.  A few miraculous key presses (Ctrl+Alt+F1) took me to a terminal from where I could at least get a list of CPU-heavy processes and kill them.  Unfortunately it was not just the vmware-vmx processes but also kswapd0 - an integral part of the system which won't easily allow you to kill -9 it.  So this was the first indication of a memory issue.
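From that text console, something along these lines (my exact invocation is lost to history) lists the heaviest processes so you know what to kill:

```shell
# Top five CPU consumers, with memory share for context
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 6
# kill -9 <PID> works for vmware-vmx; kswapd0 is a kernel thread and won't die
```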

After some googling I reconfigured swapping, but I wasn't able to replicate the issue, and quite frankly I really did not want to spend 15 minutes recovering my system every time.  So finding a solution took days - not of continuous fixing, of course.  The best solution I could come up with was buying a small 50GB SSD and dedicating it entirely to swap.  Apart from that I also set vm.swappiness to a nice 100, and the memory configuration in VMware was set to swap as much as possible too.  My idea was to let everything swap as much as it could, since the disk was much faster now; that way I'd also have as little occupied memory as possible.
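For the record, swappiness lives under /proc; on the kernels I was using the value runs from 0 to 100 and controls how aggressively the kernel swaps:

```shell
# Read the current value (0 = avoid swapping, 100 = swap aggressively)
cat /proc/sys/vm/swappiness

# To set it to 100 for this boot (root required):
#   sysctl vm.swappiness=100
# To persist it across reboots, append to /etc/sysctl.conf:
#   vm.swappiness = 100
```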

I expected to see a lot of fast swapping this time, so if I got into the same situation again it would be much easier to recover.  It did in fact happen once more, but this time the system was under much more stress, so the extra swapping helped.  I also had a little script prepared, so ten-second-long keypresses would not be wasted typing in all the arguments.  I used the following script to see what was hogging the CPU, network and disks - almost every bottleneck I could think of:

#!/bin/bash
dstat -cdnpmgs --top-bio --top-cpu --top-mem

Short and sweet, just calling dstat with canned arguments!  Saving it as jtop is a lot shorter to type than all those arguments, that's for sure.  Again, the result pointed to a swapping issue.

dstat however showed me something I was not expecting.  RAM usage wasn't actually that bad - just looking at the numbers it was great, less than 50% usage.  But there were more numbers, and at that point I was not sure whether I was actually using ~40% or 97%.

Reading up on Linux memory management taught me another thing.  Linux actually makes use of much more RAM, but the bulk of it is cache, which is released when processes need more memory.  In effect I would see less than 2-3% free RAM, but that is not the right way to read it.  So there is a silver lining to this issue - I got to learn quite a lot more about memory management on Linux.
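You can see this directly in /proc/meminfo: MemFree looks tiny, but MemAvailable (present on kernels from 3.14 onwards) counts the reclaimable cache and is the number that actually matters:

```shell
# MemFree understates usable memory; the cache counted here is
# handed back to processes whenever they ask for more.
grep -E '^(MemTotal|MemFree|Cached|MemAvailable):' /proc/meminfo
```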

Following this I started looking for a virtualisation solution that did not try to re-implement what the kernel was built to do.  Not that I have anything in particular against VMware or how it is implemented, but I was quite sure the problem was originating from it.  After a bit more educated reading on virtualisation, and a bit more courage to move out of my (then) GUI-based comfort zone (a few weeks before this I was mostly a Windows user...), I came to the conclusion that the Linux-native options were potentially much better.


The logo is cool though
Here I introduced myself to KVM and Xen.  Both are more ingrained into the system and promised better memory management.  I read up on the general performance and history of both, and KVM appeared to have the upper hand.  Being a more integral part of the Linux ecosystem (and marginally faster: https://major.io/2014/06/22/performance-benchmarks-kvm-vs-xen/), I opted to base my future VMs on KVM.  I'm happy to say I've never looked back, and the performance I enjoy on KVM is (on my system) unparalleled.

I'll let the kernel manage my memory
There is no particular conclusion here, except that maybe you should be familiar with your options before making decisions.  I've got nothing against VMware; as I said, I simply found something that works better for me.  Management tools are far better on the VMware side, but I'm satisfied with what Virtual Machine Manager offers in terms of management and monitoring.  Oh, you may also make use of the "script" above - it's convenient when you need some performance details without keying in five arguments.  I might write something on KVM next time, since it lets you define many more options than a few clicks and done.



14 December, 2011

Caching : The unsung hero of performance

It's not just abstract numbers after all :)
Many people tend to forget, or worse, ignore the fact that caching plays a vital role in many applications.  Why would they place a cache in the CPU for example?

This idea might come from the fact that in an age dominated by the web and fast internet access, some might think that cached pages are unnecessary.  True, at times you end up with a lot of space taken up by pages you visited only once.

But caching is more than just a copy of a page on your hard drive (or SD card).  In this post I shall demonstrate how a terribly simple cache in a Fibonacci algorithm has a massive impact on performance.  Obviously one might not find many uses for Fibonacci numbers themselves, but the idea applies in many areas, such as crawlers, searching, and any environment where some value that is expensive to compute but fairly constant is required frequently.

Many know what Fibonacci numbers are, so I won't go into much detail about the algorithm.  Don't worry though, it's very easy to implement in Java.  This time we'll be writing a single class with only 3 methods.


import java.math.BigInteger;
import java.util.HashMap;

public class Fibonacci 
{
    private static HashMap<Integer, BigInteger> cache = new HashMap<Integer, BigInteger>();

    public static void main(String[] args) 
    {
        int v = 35;
        if (args.length > 0)
            v = Integer.valueOf(args[0]).intValue();

        System.out.println("Fibonacci efficiency tester for Fib(" + v + ").\n\n");

        long n = System.currentTimeMillis();
        System.out.println("Cached: " + fib(v, true));
        long mst = System.currentTimeMillis() - n;
        System.out.println("in " + getTimeFromMs(mst));

        n = System.currentTimeMillis();
        System.out.println("Non-Cached: " + fib(v, false));
        mst = System.currentTimeMillis() - n;
        System.out.println("in " + getTimeFromMs(mst));
    }

Not much here.  Just declaring some variables and the main method.  The main method starts off by setting the Fibonacci index we plan to compute; variable v takes care of that.  Don't set it too high, otherwise you might end up waiting 5 minutes for a result!  Next we check if we have any arguments; if so, we assume the first one is the index to compute and set v to that value.

Then we just start displaying the result messages.  As you can see, we are measuring the time taken by the cached and non-cached calculations.  I know I have created a benchmarking tool, but it's OK to use the normal system time here.

Now it's time to code the real Fibonacci method.


private static BigInteger fib(int f, boolean cached)
{
    if (f < 2) return BigInteger.valueOf(f); // Fib(0)=0, Fib(1)=1

    if (cached)
    {
        Integer key = Integer.valueOf(f);
        if (cache.containsKey(key))       // cache hit: no recursion needed
            return cache.get(key);

        // cache miss: compute once, remember it, and return
        BigInteger v = fib(f - 2, cached).add(fib(f - 1, cached));
        cache.put(key, v);
        return v;
    }

    // non-cached path: recompute everything every time
    BigInteger n1 = fib(f - 2, cached);
    BigInteger n2 = fib(f - 1, cached);
    return n1.add(n2);
}


What we are doing here is calling the same method recursively.  It is much cleaner than a loop, and a loop does not always suffice anyway, such as when crawling.  Before doing anything complicated, we check whether the number is small enough to be returned directly: for 1 or 0 there is nothing to compute, so we just return it.  Otherwise, we perform the normal calculation.

We then check whether caching is enabled - this is all about caching, after all.  If it is, we first check whether the cache already contains the Fibonacci of the number we are currently analysing; if it does, we are done and return it.  Otherwise we calculate it, cache it, and return it.  If caching is not enabled, the value is calculated every time.

We then write the usual method which cleanly shows the elapsed time in a more humane way :)


public static String getTimeFromMs(long ms)
{
    if (ms < 1000)
        return ms + "ms";
    else if (ms < 60000)
        return (ms / 1000) + "s " + getTimeFromMs(ms % 1000);
    else if (ms < 3600000)
        return (ms / 60000) + "m " + getTimeFromMs(ms % 60000);
    else
        return (ms / 3600000) + "h " + getTimeFromMs(ms % 3600000);
}


That is all basically.  You can now run this program and see for yourself the improvement which is gained through caching.  I have provided my results below;  naturally yours will have some variations, but there definitely will be a huge difference between the cached and non-cached runs.



Fibonacci efficiency tester for Fib(35).


Cached: 9227465
in 2ms
Non-Cached: 9227465
in 3s


Make no mistake: that is an 'ms' after the cached time and a plain 's' after the non-cached one - a difference of a whole three seconds.  Now, hopefully, you shall be writing more sensible code and freeing the CPU from excessive and unnecessary work :)

The full code is available here.

13 December, 2011

Building Brainf**k Interpreter in Java

Excuse the title, but really, there is an (esoteric) programming language named just like that: "Brainf**k" (minus the censorship).

So what is this BF language?  From a coding point of view, it is a simple language made up of 8 commands.  That's right, eight.  Since technically all a CPU does is manipulate values in memory locations and do some IO with them, the language can, to a certain extent, do everything - a.k.a. it is Turing complete.  OK, theoretically I could fly if I attached a pair of wings to my arms, but it's not practical, and neither is BF.  So I am in no way saying it will practically do these tasks; just that, theoretically, it can.


That really is the code.  Now you
know why it has got that name.
This language was designed by Urban Müller in 1993, just for fun.  It was not meant to be utilised and there is no real specification, so numerous compilers and/or interpreters were built with various additional features.  Anyway, if you are interested in the language itself, rather than the interpreter, head to Wikipedia, you know it helps :P


Just some more details about the structure, and we can start.  As I said, the CPU basically works by manipulating data in memory and performing some sort of IO, and BF does just that.  We have an array of memory cells and a pointer.  Two commands move the pointer, another two increase or decrease the value of the current cell (the one we are pointing to), the next two print out or take in a value, and the last pair is used for looping.


(btw, got this table from Wikipedia)
  • > : increment the data pointer (to point to the next cell to the right).
  • < : decrement the data pointer (to point to the next cell to the left).
  • + : increment (increase by one) the byte at the data pointer.
  • - : decrement (decrease by one) the byte at the data pointer.
  • . : output a character, the ASCII value of which is the byte at the data pointer.
  • , : accept one byte of input, storing its value in the byte at the data pointer.
  • [ : if the byte at the data pointer is zero, then instead of moving the instruction pointer forward to the next command, jump it forward to the command after the matching ] command.
  • ] : if the byte at the data pointer is nonzero, then instead of moving the instruction pointer forward to the next command, jump it back to the command after the matching [ command.

That sums up the features we need to support.  Next, the basics.  We will need about 4 classes, not much.  First we shall create the building blocks, and finally join them using a main class.

So, the "building blocks" consist of a Cell, an Interpreter, a Utilities class, and the main class.
  • Cell : The cell is the core of the system.  We'll have a lot of these, possibly as many as the RAM can hold.  This class will be the simplest thing you'll ever code though; it only holds its value, plus some quick methods such as incrementValue and decrementValue.
  • Interpreter : The language consists of a simple stream of commands, so all we need is to move one character at a time and interpret its meaning.  We'll build a class which preferably takes in a stream of characters (not an InputStream) and just works its way to the end, with a switch and some method calls that do the work.
  • Utilities : You might need a utilities class if, like me, you prefer to have static methods in one place.
  • Main : The main class is the, well, main entry point.  It is called first and displays some nice messages insulting your brain.
Now, to the code.

Cell


package com.jf.bfc.objects;

public class Cell
{
    private int value;

    public Cell()
    {
        value = 0;
    }

    public void inc()
    {
        value ++;
    }

    public void dec()
    {
        value --;
    }

    public int val()
    {
        return value;
    }

    public void set(char c)
    {
        value = c;
    }
}


I hope no one got confused.  You know, this is some über complex code we're dealing with here.  Really, it's self-explanatory, but if any newbies are around, help is no problem; just leave a comment or something.  One thing to note is that we are using int.  This is preferable as it is easier to convert to ASCII and back.  You might want to use long and UTF instead, but that's not the point of this post.

Interpreter



package com.jf.bfc;

import java.util.ArrayList;
import java.util.Scanner;

import com.jf.bfc.objects.Cell;

public class Interpreter
{
    private String program;
    private ArrayList<Cell> memory;
    private int memoryPointer;
    private int progPointer;

    public Interpreter(String program)
    {
        this.program = program;
        this.memoryPointer = 0;
        this.memory = new ArrayList<Cell>();
        this.memory.add(new Cell()); // create an initial cell
        this.progPointer = 0;
    }

    public void interpret()
    {
        try
        {
            char[] progArr = program.toCharArray();
            for (progPointer = 0; progPointer < progArr.length; progPointer ++)
            {
                switch (progArr[progPointer])
                {
                    case '>' : pointerUp(); break;
                    case '<' : pointerDown(); break;
                    case '+' : cellValueUp(); break;
                    case '-' : cellValueDown(); break;
                    case '.' : out(); break;
                    case ',' : in(); break;
                    case '[' : subroutine(); break;
                    default : break;
                }
            }
        }
        catch (Exception e)
        {
            System.err.println("Interpretation failed at command " + progPointer);
            System.err.println(e);
        }
    }


Now this is a bit more important.  Here we are setting up some variables and declaring the Interpreter class, along with the main "interpret" method.
  • program is the String which contains any number of the 8 commands, i.e. the program stream.
  • memory is an ArrayList of Cell objects which technically makes up our memory.
  • memoryPointer points to the cell we are currently working with.
  • progPointer points to the command currently being interpreted; it also lets us jump back after a loop.  This will be explained in more detail later on.
The constructor simply initialises the variables.  I prefer this to initialising them at the point of declaration, because the code looks cleaner with declarations in one place and initialisation in a method made specifically for that.  (Performance-wise it makes no real difference: the compiler moves field initialisers into the constructor anyway.)

Let's move on.  The interpret method, as described earlier, simply moves one character at a time, reading, parsing and executing the required command.  As you can see, it is quite straightforward; just treat the String as a character array (which it actually is), loop through it, check each character in a switch statement, and call the corresponding method.  If the code has problems, we simply tell the user that he or she has got an f'd up program.

pointerUp


The pointer up, as it is called, moves the pointer up (that is, to the right, if you imagine the memory as a row of cells).


private void pointerUp()
{
    if (memory.size() - 2 < memoryPointer)
        memory.add(new Cell());

    memoryPointer ++;
}


All we do is make sure there is a cell at the next position - if the pointer is about to run past the end of our memory, we append a new empty cell - and then we increase the pointer.  A new cell is only added if no cell has yet been initialised in that area.

pointerDown


What goes up must come down, and that is what we do to the pointer here.  The same idea as pointerUp, but this time we are guarding the bottom (left side) of the memory: the pointer is simply not allowed below cell zero.


private void pointerDown()
{
    if (memoryPointer == 0)
        return;
    memoryPointer --;
}


cellValueUp & cellValueDown


Wandering around the memory space won't get us anywhere.  So our next step is to do something with the memory we have - actually, with the cell we have, since we can only touch one cell at a time.  Changing a cell's value essentially changes the meaning of the memory as a whole: we could replace a single letter and turn 'hello' into 'hellp', or change the value of an operator code and turn 1+1 into 1-1.  So anyway, this is the code we should have.



private void cellValueUp()
{
    memory.get(memoryPointer).inc();
}


There you go.  We simply ask the cell at the location referred to by the pointer to increase its value.  Reducing works the same way; this time we ask it to decrease its value.


private void cellValueDown()
{
    memory.get(memoryPointer).dec();
}


In & Out


The next two methods are quite simple too.  They either output the value of the current cell, or take in a value and set it as the current cell's value.

The simpler function is the output.


private void out()
{
    System.out.print((char) memory.get(memoryPointer).val());
}


Again, nothing much;  simply printing out the character whose code is the numerical value in the current cell.


private void in()
{
    Scanner sc = new Scanner(System.in);
    memory.get(memoryPointer).set(sc.next().charAt(0));
}

The in method is just slightly more complicated.  We initialise a Scanner, a simple class for reading user input, and put the value in the current cell.  Keep in mind, though, that we are dealing with a single cell, so we cannot store a string; we take the first character and save only its value in the cell.

Subroutines


We have now covered the essential functions of BF; you can already build something out of those commands.  Loops, however, are essential if you plan to write a real program and don't intend to live up to the language's name.  So here we come to possibly the most difficult part of the interpreter.  Subroutines provide a practical way to code repetitive tasks, but they also present a challenge when you try to build some sort of loop-handling system, which is exactly what we shall do next.

Let's start off with the code.  It might look intimidating, but I'll go through it, so hold on tight.


private void subroutine() throws Exception
{
    int stopPoint = memoryPointer;
    StringBuffer subroutine = new StringBuffer();
    int bracketsTillEnd = 0;
    progPointer ++; // skip the opening bracket

    for (int x = progPointer; x < program.length(); x++)
    {
        char chr = program.charAt(x);
        subroutine.append(chr);
        if (chr == '[')   // count nested openings so we stop at the matching ']'
            bracketsTillEnd ++;
        if (chr == ']')
            bracketsTillEnd --;

        if (bracketsTillEnd == -1)
            break;
        progPointer ++;
    }

    if (bracketsTillEnd > -1)
        throw new Exception("Unclosed bracket!");

    String originalCode = program;
    program = subroutine.toString();
    int stopCommandPoint = progPointer;
    while (memory.get(memoryPointer).val() != 0)
    {
        interpret();
        memoryPointer = stopPoint;
    }
    progPointer = stopCommandPoint;
    program = originalCode;
}


Hmm, there you go - one chunk of seemingly kernel-level code.
We start off by keeping track of where we are.  stopPoint holds the position of the memory pointer when we hit the loop (the '[' command).  We then move the program pointer forward by one so that we skip the opening bracket, and record the subroutine code into a single StringBuffer, so that we effectively have a new program.  While recording we keep count of any opening brackets we meet, so that we only stop at the closing bracket that matches the one we started with.  Finally, we make sure the bracket was actually closed, by detecting whether enough closing brackets were encountered before the end of the whole code.

An important thing to do is store the position in the main code at which the subroutine ends.  We will need it to continue execution of the program after the subroutine has done its job.

After that, we back up the original code and set the program code as the one in the subroutine.  Basically what we have done is switch the code we are executing with the one in the subroutine while keeping the same memory and pointer.

If you read the documentation of BF, you will know that looping stops when the value of the current cell is zero at the end of the subroutine.  What we do here is check the current cell, and while it is not zero, call interpret, which interprets and executes the subroutine code - remember, we have replaced the main code with the subroutine's.  We also reset the memory pointer to where it was before entering the subroutine on every iteration.  Once execution of the subroutine is done, we simply restore the original code and send the program pointer to the end of the subroutine.

The main interpreter can simply continue working, calling functions as they come up.  This is practically the whole BF interpreter.  All we need now is the Utilities and Main classes, which do little work directly related to BF but will nonetheless be listed here.

Utils



package com.jf.bfc.objects;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Scanner;

public class Utils
{
    /**
     * Fetch the entire contents of a text file and return it as a String.
     * @param str_file_name a file which already exists and can be read.
     * @throws IOException
     */
    public static String readStringFromDisk(String str_file_name) throws IOException
    {
        StringBuilder ds_text = new StringBuilder();
        String str_sep = System.getProperty("line.separator");
        Scanner ds_scan = new Scanner(new FileInputStream(str_file_name), "UTF-8");
        try
        {
            while (ds_scan.hasNextLine())
                ds_text.append(ds_scan.nextLine() + str_sep);
        }
        finally
        {
            ds_scan.close();
        }

        return ds_text.toString();
    }
}


As you can see, all we do is read a simple file from disk and get the String value stored in it.  We use this only to read a saved BF program.

Main


The main class is just one main method which displays a simple message and asks the user whether they prefer to open a pre-written BF program or write one in the console and execute it immediately.  There are no features such as saving or debugging.  I might one day do something like that, but I'd rather have my own, or at least a more humane, language hehe.

So here comes Mr.Main.


package com.jf.bfc;

import java.io.IOException;
import java.util.Scanner;

import com.jf.bfc.objects.Utils;

public class BFC
{
    public static void main(String[] args) throws IOException
    {
        System.out.println("Brainf**k Interpreter v0.1\n");

        Scanner s = new Scanner(System.in);
        String path = "about";

        while (path.toLowerCase().equals("about"))
        {
            s = new Scanner(System.in);
            s.useDelimiter("\n");
            System.out.println("Created by James Farrugia, within an hour, while wondering how to solve other problems.");
            System.out.println("Ended up thinking about Brainf*ck Plus, which might have some sort of String and function support;");
            System.out.println("but I don't know about that.  You can contact me on jamsinux _at_ gmail.com, if you like.\n");
            System.out.println("If you need help about Brainf*ck as a language, I know a guy, we call him Google, very helpful guy...\n\n");
            System.out.println("Enter path to your program or type 'new' to write one now.  Type 'About' for some unnecessary info...");
            path = s.next().trim();
        }
        String bfProgram = "";

        if ("new".equals(path.toLowerCase()))
        {
            System.out.println("The console is your playing field, your imagination is the limit.");
            System.out.println("You can start f**king your brain:\n");
            s = new Scanner(System.in);
            s.useDelimiter("\n");
            bfProgram = s.next();
        }
        else
        {
            try
            {
                bfProgram = Utils.readStringFromDisk(path);
            }
            catch (IOException e)
            {
                System.err.println("That file does not exist, or it might have some kind of problem...");
            }
        }

        System.out.println(bfProgram);

        Interpreter intr = new Interpreter(bfProgram);

        System.out.println("Interpreting...\n-------");

        try
        {
            intr.interpret();
        }
        catch (Exception e)
        {
            System.err.println("You seem to have blown up the VM...");
        }

        System.out.println("\n-------\nInterpreting complete.");
    }
}


Obviously you can have your own main method; as you can see, this one simply presents the interpreter to the user, and the interpreter is only initialised once and asked to interpret some BF code.

Conclusion

There you go then.  A nice little post about writing an interpreter for a useless language - but, just like the language itself, it's a way to do some interesting stuff, and surprisingly challenging.  You should try coding in BF yourself to see what I'm talking about.  Building on this concept, one could also move on and develop some sort of basic scripting language.  That would be a good idea.

As usual the full code is available here.