Friday, July 6, 2012

How I Would Reverse Engineer an ET Craft

If I were to reverse engineer an ET craft, I would start by modelling it in a computer. I would not waste time trying to understand it. I would push buttons, note results, make models of the structure, until my model matched what I observed. I would use scanning microscopes and put those results into my model. I would fly it and make careful observations and measurements.
I would then optimize that model. I would take this optimized result and stick it into its own library. Would I bother philosophizing about it? No, why would I? Would I question it, or declare that something it did was impossible? No, why should I? Just put it into the computer.
Now, let's say my boss came to me and said, "We want to use a part of this technology in another application." What would I do? I would create another computer program that would tell me how to do that. I would not hesitate, hire thousands of experts, or waste billions of dollars. I would get a couple of computer programmers and a supercomputer.
End of story. Reality is reality. We don't know all there is to know and why should we care to build some abstraction of something anyway? Results are all we were ever after, not these mind games.
Now we know why the black projects move so slowly and are so expensive. The abstraction-junkies are on a rampage...

Thursday, July 5, 2012

All Physics Theories Are Illusions

Honestly, I don't like the name string theory at all. Why? Because it implies a basic starting point, a string. From the very start this limits the possibilities of what this theory can give us.
I would rather it be called The Unified Physics Model. I would rather that people concentrate on the idea of unification than on any particular limited human concept. Why? Because any theory or model is really just an abstraction and will always be an abstraction, so let's not limit ourselves, nor beat around the bush and confuse the layman.
For example, H is the symbol for hydrogen, but hydrogen is really an electron orbiting a nucleus. But an electron is really a charged particle with negative charge and spin. And negative charge means it has a negative electric field and is attracted to positive fields and repelled by other negative fields.
But the promise of a unified theory is not bunk at all; in fact, it is required. It is logical. The naming we put on things, the labelling, that is the problem. But we don't need to know what the theory even is, nor how it really works.
"Oh come on, are you nuts?" you might ask. "What are you really trying to say?"
I am saying point blank that our very outdated methods of science itself are preventing us from making further progress. They are getting us lost in names, conventions and abstractions, all in a vain attempt to put a perfect set of equations down on the blackboard. How many years must someone study before they get a working model of all these abstractions? Honestly, there are not enough years in a human life to understand every area of physics and chemistry.
This raises the question: why, then, are we trying to master these names, abstractions and math? Why are we continuing to do something that a computer could do better?
And this is the crux of my argument. The computer only sees bits and bytes. Likewise, why should we see any more than this? Why do we need to know anything more than this: the computer models the reality; what more do I need to know?
I propose that science is quickly becoming outdated. We focus on the mechanism, and on finding a better way to express it abstractly, and in this process lose sight of what our true goal all along should be: RESULTS.
I propose that a unified theory is, in reality, not necessary, and in fact a total distraction. We should immediately stop looking for it! Likewise, we should drop all labels for anything. I will call it "reality." I apply no theories, just pure math, and feed it into my computer. And use as many dimensions as you deem useful!
"But you need some model, some abstraction from which to base your math" you may further argue.
True, you need some starting point, but I argue it is not important what you pick as long as it works; as long as it predicts a correct result. And you should change it in a moment if some other model proves more efficient.
For example, I wrote an ephemeris years ago which was accurate to a minute of arc. My program applied standard Kepler dynamics to approximate the location of the planet. It then used a Fourier series to simulate the perturbations and get a more accurate result.
But let's be honest, why did I even bother modelling the Kepler dynamics? I could have jumped straight to the Fourier series and been done with it. OK, sometimes it was useful to start with a simplification, but I know a competitor who had an accuracy of one second of arc and only used Fourier series based on a much more accurate ephemeris.
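Just to illustrate the shape of that approach, here is a toy reconstruction (not my original program; the orbital elements and correction coefficients below are placeholders):

#include <cmath>

// Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
// by simple fixed-point iteration (fine for small eccentricities).
double EccentricAnomaly(double M, double e)
{
    double E = M;
    for (int i = 0; i < 20; ++i)
        E = M + e * sin(E);
    return E;
}

// Toy ephemeris: a Kepler two-body solution plus a short series of
// periodic correction terms standing in for the perturbations.
double ApproxTrueAnomaly(double daysSinceEpoch)
{
    const double meanMotion = 0.0172;  // placeholder, radians per day
    const double ecc = 0.0167;         // placeholder eccentricity

    double M = meanMotion * daysSinceEpoch;  // mean anomaly
    double E = EccentricAnomaly(M, ecc);

    // True anomaly from the eccentric anomaly.
    double v = 2.0 * atan2(sqrt(1.0 + ecc) * sin(E / 2.0),
                           sqrt(1.0 - ecc) * cos(E / 2.0));

    // The "Fourier" correction: a few periodic terms fitted against a
    // more accurate ephemeris. These coefficients are made up.
    return v + 1.0e-5 * sin(2.0 * M) + 5.0e-6 * cos(3.0 * M);
}
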
So, if I want to build a rocket, why do I even need to know or care what the name of the materials are or even start to wonder what shape I should start with? Is it really important? Why should I guess, when a computer can do it all more accurately?
We are getting lost in the abstractions of reality and losing sight of optimal solutions! Our drive for perfection should be in the results, not in the abstraction.
Personally, I would rather tell the computer, "I need to go to the Moon and return in safety, now tell me what I need to do." The computer would find the most optimal way to get to the Moon and print out a list of optimal instructions.
This is the future. It is not a dumbing down of science. It is finally facing reality and getting our egos out of the way.

Saturday, May 5, 2012

Mice suck

Everyone is completely out of their minds. Why? The mouse is why. Mice are a fad which has long outworn its welcome. In the future, we won't point and click; we will speak, type, gesture and think our way to our goal. Our computers will attempt to guess more of what we want. This is what we should be focusing our efforts on, not yet another GUI. I remember how the Mac made the mouse popular back in the 80's. We were so wowed by the amazing graphics of that monochrome 10-inch CRT.

I was so much faster without a mouse, in the text-only days.  I remember working with CAD without a mouse and it was so much easier.  Everything was faster and better without a mouse. 

The world is crazy about how much time it spends on the GUI, the visual appearance of things.

I am not against graphics when graphics are required. But until that point, I much prefer a minimalistic approach. Everything should be commands, with command completion, shortcuts, aliases and so on. I should be able, for example, to speak to the program I am using via speech recognition.

When WYSIWYG is required, such as in a spreadsheet, there should be a standard way of navigating from cell to cell, etc. Granted, most Windows programs have this, but it's not always clear what it is, and it's not always enforced.

I sometimes think, for example, how much easier it would be if I could edit my video using text, instead of clicking around.  I am slowly migrating my entire OS environment BACK to text.

Wednesday, May 2, 2012

Why Java?

Is Java really relevant? After perusing the Java class library and comparing the language to C++, I came to ask myself: why Java? On the bad side, it runs slower than C++. It does, however, tout a more thorough framework than the STL, but then again, there are plenty of frameworks for C++ which include smart pointers and garbage collectors, as well as every imaginable class and function. In fact, for every argument in favor of Java, I can make the same or a better argument for C++. The only place where Java shines is mobile phones and devices with small footprints, which use many different tiny applications. In this case, portability and size win out.

Where both languages seem to fail is with functional programming and terseness. Neither handles implicit typing as well as the "Caml-like" languages do. Both make for a lot of unnecessary keystrokes. Java, however, is even less terse than C++. In fact, it is the least terse language out there.

The truth is that C++ could be used in almost every place Java is used. Performance and portability are usually much better with C++.

What I don't understand is why Java was invented at all. At the time, compiling to bytecode probably seemed like a good idea. You could squeeze a few more bytes out of your project. It was argued that it could run everywhere; just write the VM...

The C languages in general are nowadays considered quite verbose. OCaml would be a wonderful replacement for C++ if, however, the garbage collector were optional. C# makes use of value types, which is not possible in Java. This allows for direct passing of objects, bypassing the heap.

In summation, my reasons for avoiding Java are threefold:
1. I haven't found a situation where Java does something better than I could do it in C++.
2. Java is too verbose.
3. Java interfaces terribly with the operating system.

In general, I prefer to develop applications in C++ on top of a portable framework, like Milan. Any OS-specific calls I can then wrap with #ifdefs. This way I am guaranteed that my application will perform exactly as expected, without any performance surprises that prove impossible to tweak.
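
For instance, here is the kind of #ifdef wrapping I mean (a sketch; Milan itself isn't shown, and the wrapper function is made up):

#include <string>

#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#include <limits.h>
#ifndef HOST_NAME_MAX
#define HOST_NAME_MAX 255
#endif
#endif

// One portable call; the OS-specific code is isolated behind #ifdefs.
std::string GetMachineName()
{
#ifdef _WIN32
    char name[MAX_COMPUTERNAME_LENGTH + 1];
    DWORD size = sizeof(name);
    if (!GetComputerNameA(name, &size))
        return std::string();
    return std::string(name, size);
#else
    char name[HOST_NAME_MAX + 1];
    if (gethostname(name, sizeof(name)) != 0)
        return std::string();
    name[HOST_NAME_MAX] = '\0';
    return std::string(name);
#endif
}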

If everyone developed with Milan and C++, we would never have the case of the buggy, slow application which runs well on one machine and horribly on another.

Monday, April 9, 2012

Function size counting

Someone asked me recently how to count the size of a function in C++.

I use function size counting all the time and it has lots and lots of uses. Is it reliable? No way. Is it standard C++? No way. But that's why you need to check it in the disassembler to make sure it worked, every time you release a new version. Compiler flags can mess up the ordering.

#include <windows.h> // for UINT32

// The function whose size we want to measure.
static void funcIwantToCount()
{
    // do stuff
}

// Delimiter function, placed immediately after the one above. It emits
// four 0xCC (int 3) bytes as a marker; this relies on the compiler and
// linker keeping the two functions adjacent and in order.
static void funcToDelimitMyOtherFunc()
{
    __asm _emit 0xCC
    __asm _emit 0xCC
    __asm _emit 0xCC
    __asm _emit 0xCC
}

// Walk forward from the function's start until the 0xCCCCCCCC marker
// is found; the distance walked is the function's size in bytes.
int getlength(void *funcaddress)
{
    int length = 0;
    while (*(UINT32 *)((unsigned char *)funcaddress + length) != 0xCCCCCCCC)
        ++length;
    return length;
}

It seems to work better with static functions. Global optimizations can kill it.
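
A minimal usage sketch (assuming the linker really did keep the two functions adjacent, which you verified in the disassembler):

#include <cstdio>

int main()
{
    // The cast is as non-standard as the rest of this trick.
    printf("funcIwantToCount is %d bytes\n",
           getlength((void *)funcIwantToCount));
    return 0;
}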

P.S. I hate people asking why you want to do this, telling you it's impossible, etc. Stop asking these questions, please. It makes you sound stupid. Programmers are often asked to do non-standard things, because new products almost always push the limits of what's available. If they don't, your product is probably a rehash of what's already been done. Boring!!!

Saturday, March 24, 2012

The overhead of std::string

Recently, I have been interested in the overhead of std::string and decided to do an investigation using Visual Studio 2010. I optimized my code for size and wrote a few different scenario functions. First, I noted that the size of the particular std::string I am using is 20 bytes. That seems quite high. So, I turned off run-time type checking and noted the size was still 20 bytes.

OK. What next? I decided to write several functions which return a string in various ways. I knew that, even though this was a release build with all size optimizations turned to maximum, function alignment might be off, so I also took a look at the code in the debugger. I noticed that all the functions were packed close together, without any NOPs in between.

First, I will show you just the sizes of the various functions:
  1. const char *ReturnAConstCharString() { return "test"; } = 6 bytes
  2. String ReturnAString() { return String("test"); } = 22 bytes
  3. void FillingAStringReference(String &reference) { reference = "test"; } = 30 bytes
  4. auto_ptr<String> ReturnAutoPtr() { return auto_ptr<String>(new String("test")); } = 43 bytes

Wow, not what I expected at all. The one that most people think is the most optimized solution (using a reference) apparently takes more space than just returning a string. Naturally, I would have expected the const char * version to be the lightest, and 6 bytes is extremely light. However, one can't do much with such a function, which has the same effect as referencing a static variable.

Looking at the underlying code, only (4) had a loop. So the auto_ptr, performance-wise, would probably be the poorest.

For most practical situations, the best solution is (2): simply return the String. I wonder if this is always true for all classes? Probably not for the bigger classes, however.
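
For reference, here is roughly what the four variants look like as compilable code (a reconstruction; I am assuming String is std::string and auto_ptr is std::auto_ptr):

#include <string>
#include <memory>

typedef std::string String;
using std::auto_ptr;

const char *ReturnAConstCharString() { return "test"; }

String ReturnAString() { return String("test"); }

void FillingAStringReference(String &reference) { reference = "test"; }

auto_ptr<String> ReturnAutoPtr() { return auto_ptr<String>(new String("test")); }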

The next thing I wanted to investigate is the overhead at each call site. I mean, are they all the same weight, or do they come with an overhead? The requirement for my call-site snippets is that they all return their values into a std::string. I also decided just to count instructions in the debugger, as there was really no other good way of doing it.

  1. string value = ReturnAConstCharString(); = 17 bytes and 7 instructions (2 calls).
  2. string value = ReturnAString(); = 43 bytes and 14 instructions (3 calls).
  3. FillingAStringReference(value); = 9 bytes and 4 instructions (1 call).
  4. auto_ptr<String> value = ReturnAutoPtr(); = 20 bytes and 8 instructions (2 calls).

Now, this starts to paint a clearer picture of the overhead of each method. The truth is that passing a string by reference has the least overhead, at least at the call site. So, when a function is called frequently, filling a reference can save a lot of space.

  1. Const char returning: 6 + 17 = 23 bytes.
  2. Returning a string: 22 + 43 = 65 bytes.
  3. Fill a reference: 30 + 9 = 39 bytes.
  4. auto_ptr: 43 + 20 = 63 bytes.

OK, so our original assumptions are starting to prove correct. References seem to be outpacing returning a string. This is probably true in terms of performance as well. But what about practically, in a program which calls each function 5 times? Five seems like a good number for a small program.

  1. Const char returning: 6 + 5*17 = 91 bytes. 10 calls.
  2. Returning a string: 22 + 5*43 = 237 bytes. 15 calls.
  3. Fill a reference: 30 + 5*9 = 75 bytes. 5 calls.
  4. auto_ptr: 43 + 5*20 = 143 bytes. 10 calls.

Slightly larger programs probably end up calling functions which return strings at least 100 times, and probably contain up to 30 different such functions. What would that look like?

  1. Const char returning: 30*6 + 100*17 = 180 + 1700 = 1880 bytes. 230 calls.
  2. Returning a string: 30*22 + 100*43 = 660 + 4300 = 4960 bytes. 330 calls.
  3. Fill a reference: 30*30 + 100*9 = 900 + 900 = 1800 bytes. 130 calls.
  4. auto_ptr: 30*43 + 100*20 = 1290 + 2000 = 3290 bytes. 230 calls.

Still, I find the idea of returning a String so much easier than a reference, and frankly, adding an extra 3K for this convenience is, in my mind, acceptable. However, many programmers may feel that 3K is too much, or that the overhead is too great.

The main reason why is that returning a string actually requires far less typing than all the above options, and for that reason alone, I usually pick it. When speed becomes an issue, I slip back to using references. I have used auto_ptrs in the past, but my feeling is that auto_ptrs are more appropriate for larger classes which exceed 30 bytes. Also not mentioned here is the overhead of allocating memory on the heap. Each call to new is much more costly than using a stack variable.

Just remember the old adage, "premature optimization is the root of all evil", and you will be just fine.


Wednesday, March 21, 2012

Fast Unique File Name Generation

A colleague and I recently got into a discussion about generating unique file names for temporary files. We discussed the different ways of doing this, using GUIDs or the Windows GetTempFileName() function, among other options.

I started writing file system drivers over 20 years ago, so I have watched a lot of stack traces go down the file system stack. Generating a GUID is much, much faster, since it requires far less overhead than searching for a unique file name. GetTempFileName actually creates a file, which means it has to call through the entire file system driver stack (who knows how many calls that would be, plus a switch into kernel mode). GetRandomFileName sounds like it is faster; however, I would trust the GUID call to be faster still. What people don't realize is that even testing for the existence of a file requires a complete call through the driver stack. It actually results in an open, a get-attributes and a close (which is at least 3 calls, depending on the level). In reality it is a minimum of 20 function calls and a transition to kernel mode. The GUID guarantee of uniqueness is good enough for most purposes.

My recommendation was to generate the name and create the file only if it doesn't already exist. If it does exist, throw an exception and catch it, then generate a new GUID and try again. That way, you have zero chance of errors and can sleep easy at night.
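
A minimal sketch of what I mean (Windows-specific; the function name, naming scheme and error handling are just illustrative):

#include <windows.h>
#include <objbase.h>
#include <string>
#include <stdexcept>

// Create a uniquely named temp file in the given directory and return
// its path. CREATE_NEW makes the existence check and the creation a
// single trip through the driver stack.
std::wstring CreateGuidTempFile(const std::wstring &dir)
{
    for (;;)
    {
        GUID guid;
        if (FAILED(CoCreateGuid(&guid)))
            throw std::runtime_error("CoCreateGuid failed");

        wchar_t guidStr[40];
        StringFromGUID2(guid, guidStr, 40);
        std::wstring path = dir + L"\\" + guidStr + L".tmp";

        HANDLE h = CreateFileW(path.c_str(), GENERIC_WRITE, 0, NULL,
                               CREATE_NEW, FILE_ATTRIBUTE_TEMPORARY, NULL);
        if (h != INVALID_HANDLE_VALUE)
        {
            CloseHandle(h);
            return path; // the file now exists with a unique name
        }
        if (GetLastError() != ERROR_FILE_EXISTS)
            throw std::runtime_error("CreateFileW failed");
        // Astronomically unlikely GUID collision: try again.
    }
}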

On a side note, checking for errors is so overdone. Code should be designed to crash if assumptions are wrong, or to catch exceptions and deal with problems then. It's much faster to push and pop an address on the exception stack than to check for an error on every function call.


Saturday, March 10, 2012

The Theory of Everything, Anti-Gravity and Free Energy

Someone asked me if "free energy" is possible. I suppose "free" should really mean "fuelless energy," because no device is free to build, and it will require maintenance. I don't really know how to build a practical device which does this, but I can explain the theory of how it might be done. For that, we need to explain how everything really works, at a microscopic and macroscopic level.

So, what is happening is that energy is constantly radiating from all mass as it loses energy (in the form of scalar waves, like rippling waves in a pond). Everything in the universe is "losing energy" at a constant rate, due to its rotation. The energy trapped in the atom is radiating out and becoming space-time, which is why the universe is always expanding. It's not really expanding; it's just that matter is shrinking. The big bang is really a misnomer. There was no bang, just this evolution.

Particles are points of singularity, where the field goes outside the space-time continuum. EM and flavor are sub-oscillations in the extra dimensions of the sub-atomic particles. Space is where there are no singularities, only warps in the different dimensions (11 of them), and these warps are the field strengths as well as the energy contained therein. So, space holds energy, and when it has enough, it sometimes flashes over into a singularity (after the minimum threshold is reached), and this is where anti-matter comes from inside of a particle accelerator. The amount of energy in a region of space is the amount of distance the space contains, so most things keep the same relative distance to each other. If the energy within a region of space goes to zero, so does the distance (and this is called a wormhole).

Things fall to the ground because there is a very slight shielding of energy by the Earth. This is overcome several ways: the simple way is to use a rocket engine, which applies a push, or to use a wing, which causes lift. But these ways cause acceleration to be felt by the occupants, and thus the time field is altered, which makes for relativistic effects, such as the ever-annoying time-dilation effect.

So, to make anti-gravity, you have to balance the energy coming from the Earth against that coming from the cosmos. Since these are scalar waves at high frequencies, they are difficult to shield (or even detect). The idea is to utilize the non-linearities of the particle fields themselves, i.e. the nucleus, and perform a reverse phase-conjugation, i.e. pumping energy from one source (such as a spinning disk or fluid) into the nucleus. The military found a way to use a rotating super-fluid, then stuck a triangle on it (with silent rockets), and then they flew it around and people saw it, including my cousin (40 feet over his head in Tehachapi).

OK, so free energy is therefore taking the non-homogeneity of this ZPE field (the radiating, rippling, spherical lake-like waves) and transferring it into a new force. How this would be done, I am not really sure. Harold Puthoff did it with rotating electron bursts, but his device resulted in very little excess energy. Now people claim this guy, Howard Johnson, seems to have done it with magnets. It might be possible; I just really don't know exactly how. All the devices I saw were deceptions, sometimes self-deceptions by their inventors. Perhaps Howard Johnson actually figured it out.

What I need is a model first, on my computer. I have made an open-source project called video-physics on Google, and with that I hope to finish the modelling of the ZPE field. Please feel free to help.

Wednesday, March 7, 2012

Get SQLite to work with .NET 4.0 and VS 2010

SQLite is extremely useful under .NET 4.0 and Visual Studio 2010. I am writing this short article to help those in need get up and running quickly. This example is made with a 64-bit version of Windows 7. I made sure to activate all of the features, so that LINQ can be used within Visual Studio. Here is the process from beginning to end:
  1. First step is to download ADO.NET 2.0 Provider for SQLite here.
  2. Run the file SQLite-1.0.66.0-setup.exe.
  3. Let it install into the default location.
  4. Make sure to check the box allowing the installer to change Visual Studio 2010.
  5. Check to make sure it installed correctly, by starting Visual Studio and opening the Server Explorer pane, then connecting to a Data Source.
  6. Close all instances of Visual Studio.
  7. Download x86 and x64 versions of the system.data.sqlite dlls. For our purposes, sqlite-netFx40-binary-Win32-2010-1.0.79.0.zip and sqlite-netFx40-binary-x64-2010-1.0.79.0.zip.
  8. Expand each zip into a corresponding temporary directory and label them x86 and x64 to differentiate between them.
  9. Do NOT execute the installer, but rather copy the x86 files to C:\program files (x86)\SQLite.NET\bin and the x64 files to C:\program files (x86)\SQLite.NET\bin\x64.
  10. The program SQLite-1.0.66.0-setup.exe, when installed, registered the incorrect files into the GAC. You must point instead to the files you just copied. Open the Visual Studio command prompt in admin mode.
  11. Navigate to C:\Program Files (x86)\SQLite.NET\bin and enter the following commands:
    gacutil /if SQLite.Designer.dll
    gacutil /if System.Data.SQLite.dll
    gacutil /if System.Data.SQLite.Linq.dll
  12. Proceed until you have registered every DLL with the GAC. After you are done there, go to the bin\x64 directory and repeat the same commands:
    gacutil /if SQLite.Designer.dll
    gacutil /if System.Data.SQLite.dll
    gacutil /if System.Data.SQLite.Linq.dll
  13. Download an SQLite administration program, such as http://sqliteadmin.orbmu2k.de/, and create a database.
  14. Start Visual Studio and use Server Explorer to view this database.
  15. Create a .NET 4.0 application project.
  16. Add an App.config file to your project, by right-clicking over your project and inserting a new item: Application Configuration File.
  17. To App.config, add the following element, directly under the configuration element:
    <startup useLegacyV2RuntimeActivationPolicy="true">
      <supportedRuntime version="v4.0"/>
    </startup>
  18. Add a new component: ADO.NET Entity Data Model. Give it an appropriate name.
  19. Choose to generate the model from the database you previously created.
  20. Give the name something useful. You will be typing this namespace often.
  21. Then select the tables you want and choose plural names if you want (recommended).
  22. Your model will then appear as an opened edmx file.
  23. Include a reference to System.Data.Linq.
  24. Go ahead and create a function and start typing your linq code.
  25. e.g. var values = from d in new MyEntities().myTable select d.myValue;

Have fun!



Monday, March 5, 2012

Die Singletons, Die!

Why, oh why? I love them. What could be easier to understand than this?
class Singleton
{
public:
    static Singleton& Instance();
    void DoSomething();
private:
    static Singleton *_instance;
};

void func()
{
    Singleton::Instance().DoSomething();
}
For one thing, what about threading? What if two threads instantiate the Singleton at the same time? Well, you could use a critical section, mutex or semaphore to serialize access. Problem solved? Yes, but the object is still created, and when does its destructor get called? If I look above, the object is instantiated but not freed until the application terminates. It also allows any class to access this singleton without restriction; its methods are public. The truth is that the life-cycle, as well as access, is not clear, not tight, not proper OOP.
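
For illustration, a typical serialized Instance() might look something like this (a sketch; std::mutex is just one way to do it, a Win32 critical section works the same way):

#include <mutex>

Singleton *Singleton::_instance = 0;
static std::mutex g_instanceMutex;

Singleton& Singleton::Instance()
{
    // Serialize access so two threads can't both create the instance.
    std::lock_guard<std::mutex> lock(g_instanceMutex);
    if (_instance == 0)
        _instance = new Singleton();
    return *_instance; // note: never deleted before process exit
}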

OOP methodology calls for something better. Objects must maintain tight scope, meaning they should instantiate and destruct in a predictable order. Singletons violate this principle by letting any class instantiate and access the public methods on demand. This can lead to difficult-to-find bugs.

There is another, more OOP-acceptable solution: the collaborator pattern. Here, the order of access is tightly controlled. The collaborator pattern gives a hierarchy of control to each class. For example: instead of a singleton, we simply make the singleton a friend of a parent class, which we will call Application. Application then contains a function to return the current instance of the singleton. This works on down the line, from parent to child. Each class is then assigned a responsible parent class.

class Application;

class FakeSingleton
{
    friend class Application;
public:
    void DoSomething();
private:
    FakeSingleton(); // only Application can construct one
};

class Application
{
public:
    Application() : _fakeSingleton(nullptr) {}
    FakeSingleton &fakeResource()
    {
        // Created lazily, on first use.
        if (_fakeSingleton == nullptr)
            _fakeSingleton = new FakeSingleton();
        return *_fakeSingleton;
    }
    ~Application()
    {
        if (_fakeSingleton != nullptr)
            delete _fakeSingleton;
    }
private:
    FakeSingleton *_fakeSingleton;
};

Another variation is to not return an instance, but control access through wrapper functions.

This pattern therefore allows us to determine the sequence of initialization, as well as the maximum lifetime. But what if we want to free the resource upon last use? A factory class can be useful in this case. A factory class wraps the collaborator object; when the wrapper destructs and the usage count goes to zero, it frees the collaborator object.
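
Here is a minimal sketch of such a factory wrapper (the static usage count and the operator-> are my assumptions about the implementation; FakeSingleton would also need to befriend FakeSingletonFactory for this to compile):

class FakeSingletonFactory
{
public:
    FakeSingletonFactory()
    {
        // First user creates the shared instance.
        if (++_count == 1)
            _instance = new FakeSingleton();
    }
    ~FakeSingletonFactory()
    {
        // Last user frees it.
        if (--_count == 0)
        {
            delete _instance;
            _instance = nullptr;
        }
    }
    FakeSingleton *operator->() { return _instance; }
private:
    static int _count;
    static FakeSingleton *_instance;
};

int FakeSingletonFactory::_count = 0;
FakeSingleton *FakeSingletonFactory::_instance = nullptr;

Using it then looks like this: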

void Func()
{
    // _fakeSingleton count incremented on construction.
    FakeSingletonFactory fakeSingleton;
    fakeSingleton->DoSomething();
    // _fakeSingleton count decremented on destruction, possibly freed.
}

Another nifty trick is the idea of the stack-only class:
#include <cstddef>

class A
{
private:
    // Declared private and left undefined, so any 'new A' fails to compile.
    void *operator new(std::size_t size);
};

int main()
{
    A a;             // OK: lives on the stack
    A *b = &a;       // OK: pointers are fine
    // A *c = new A; // error: operator new is private
    return 0;
}

With this class, the object can only be created as a stack variable; attempts to allocate it on the heap will not compile.

I will post a full example on CodeProject, showing different ways the collaborator pattern can be used in C++, C# and PHP.

Thursday, February 23, 2012

PHP vs. ASP.NET - An Expert's Opinion

I've seen many comparisons on the Internet between PHP and ASP.NET, but none coming from the perspective of an expert in both. I will attempt in this article to explain the pros and cons of each and then come to a conclusion about which is better.

Firstly, I started off as an assembly language programmer way back in 1983. I then moved to C in 1985 and to C++ starting in 1992, though I did not become a serious C++ programmer until about five years ago. I have, since 2001, programmed equally between C++, C# and PHP. I have taken on some serious projects with Silverlight, WPF, VB, VB.NET and MS Access as well. My SQL skills are fairly good too, and I can pretty much hold my own among the experts. I briefly became a full-time PHP programmer back in 2002, migrating from pure HTML with some JavaScript. PHP was the logical choice, as it was superior to ASP and Perl (which were the only other practical choices at the time). I wrote many websites during this time.

Recently, I became a part-time PHP programmer on a much more complicated project, while also programming in C++ and C# (WinForms, WPF, ASP.NET). I have had to spend a tremendous amount of time on JavaScript as well. I could write many articles on how horrible JavaScript is, but that is not the point of this article.

Switching between the two languages, PHP and ASP.NET, is very interesting.

PHP has the following obvious advantages:
1. It runs on any platform without the need for compiling.
2. It has a large library of relatively easy-to-access functions.
3. It is C-like, so it's not hard to use.

ASP.NET has the following obvious advantages:
1. Once compiled, it runs very quickly, normally several times faster than PHP.
2. No significant performance hit for things like comments and abstractions.
3. The editor and debugger are very advanced.
4. C# and VB.NET are both extremely powerful object oriented languages with such features as Linq.
5. Code can be shared with other .NET projects, such as WinForms, Windows Services, Silverlight and WPF applications.

PHP has the following disadvantages:
1. It runs extremely slowly compared with ASP.NET. Heavy use of caching is made to speed PHP up; however, not everything can be cached, such as searches.
2. It cannot be easily extended; one must compile a module into an extension.
3. Editors are not as good as those for ASP.NET editing and debugging.
4. There is a performance hit for abstraction and documentation.

ASP.NET has the following disadvantages:
1. It runs on Windows only (Mono can work on Linux, but it is too limited to be practical).
2. It must be JIT compiled (though it only takes a hit the first time it runs).

So, the question still remains: which one is better? Which is the clear winner? In a perfect world, I would choose WPF and C#, dabbling in C++ as necessary. However, we are not living in a perfect world, and the reality is that many requirements are for cross-platform, browser-based applications. And this is the reason that PHP is still more popular than ASP.NET.

But is it better? In every other sense... no. PHP may work very well for small projects and simple code, but once the requirements start to become complex, PHP becomes extremely problematic. Dynamic types, despite what people say, are sometimes extremely annoying, and sometimes you want a compiler to find errors before you get into testing.

With ASP.NET, I can abstract functionality and separate it into well-defined modules without taking a performance hit for doing so. With PHP, I must break classes into separate files and load an entire file even if I only want one function from the class.

I also take a small performance hit for every comment I make in PHP. In ASP.NET there is no such problem. However, the pressure to optimize PHP makes it relatively unreadable, and with dynamic typing it becomes very hard to enforce rules in code.

But really, to measure the performance of a language, I look at how much longer something takes to do in one language than another, and then how long it takes to maintain. ASP.NET, in general, develops faster than PHP, and I can share code with my desktop application or service. I like that I can organize ASP.NET better. Although it takes more time to organize, it pays off with a shorter maintenance time. People understand my code better when it is written in ASP.NET. The editor is also superior to any PHP editor on the market (free or commercial).

The one big annoyance with ASP.NET is HTML. The interface to HTML is quite cumbersome. Although there is nothing preventing an ASP.NET programmer from programming like a PHP programmer (using MVC), the majority of projects make heavy use of well-developed non-MVC controls. There is generally a longer learning curve with this technology, although for the most part, it pays off.

Again, if your project is complicated, I would stick to ASP.NET even if your budget is limited; it's worth the extra cost to use a Windows server. If your project is simple, it really doesn't matter which you use. PHP will run a bit slower, but it will be more portable and can run on just about anything. If you need performance and portability, Mono may be an interesting choice, as long as you are willing to live with the limitations.