Article: Important update about Gallium3D
Recently AROS, a mostly AmigaOS 3.1 compatible operating system, gained hardware-accelerated 3D on Nvidia hardware ranging from the GeForce 2 up to the 7000 series. This is practically unheard of among hobby and amateur operating systems!
Gallium3D currently has a few bugs on AROS, which can be seen by comparing videos of a demo running in software mode and again with the hardware driver. From what I can tell, the hardware driver seems to have trouble rendering some textured 3D objects.
Gallium3D is the next-generation core of the Mesa 3D Graphics Library, which runs on most operating systems, allowing cross-platform 3D development, whether it be games, 3D CAD tools, virtual reality, or technical demos. The big advantage of Gallium3D is that it is much more modular than previous versions of Mesa: to port Mesa to a new OS, all that is needed is to write the operating-system-specific components and add support for the hardware drivers. This was possible with older versions of Mesa, but now with Gallium3D the same drivers can accelerate multiple APIs, such as OpenCL, OpenVG, Clutter, OpenGL ES, and of course OpenGL, whereas in the past it would have taken much more code to enable all those APIs, even with mere software rendering.
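To make the modular idea concrete, here is a toy sketch of the layering: several API "state trackers" share a single hardware "pipe driver", so writing one new driver accelerates every API at once. The class and method names below are made up for illustration; they are not the real Gallium3D interfaces.

```python
# Toy model of Gallium3D's layering. One driver, many APIs:
# that is the point of the modular design.

class PipeDriver:
    """Stands in for one hardware driver (e.g. an Nvidia driver)."""
    def __init__(self, gpu):
        self.gpu = gpu

    def draw(self, primitive):
        return f"{self.gpu} rendered {primitive}"

class StateTracker:
    """Stands in for an API frontend (OpenGL, OpenVG, ...)."""
    def __init__(self, api, driver):
        self.api = api
        self.driver = driver

    def render(self, primitive):
        # The state tracker translates API calls into driver commands.
        return f"[{self.api}] " + self.driver.draw(primitive)

nv = PipeDriver("GeForce 6800")
for api in ("OpenGL", "OpenGL ES", "OpenVG"):
    print(StateTracker(api, nv).render("textured triangle"))
```

Porting to a new OS or GPU only means replacing the bottom layer; every state tracker on top keeps working unchanged.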
While most Gallium3D/Mesa development goes on for the Linux platform, another OS besides AROS that is already getting a port is the Haiku operating system. Haiku is a BeOS-like operating system that aims to please desktop and workstation users. It has good compatibility with most older BeOS software and has some tricks of its own now as well; I may blog more specifically about it sometime in the future. I have been building test images of Haiku for quite some time since I found out about it last year, and the developers are progressing quite quickly even though it is a small project. A screenshot of the Gallium3D software renderer on Haiku can be found here.
One of the coolest things about AROS is how fast it is; for instance, it takes mere seconds to do a warm boot! AROS also has a WebKit-based browser, using the same engine as Apple's Safari, that supports most websites, although Adobe Flash only works on Windows, Linux, and Solaris, or an operating system that emulates those, such as FreeBSD.
If you would like to try out AROS, you can even run it inside Windows as an application, with Windows as the host! Or you can test out the more complete AROS-derived distro called Icaros.
Saturday, October 31, 2009
Sunday, October 25, 2009
Journal 9, October 25
Article: LLVM 2.6 Released, Clang Is Now Production Ready
While it won't directly impact most people, LLVM's latest release is a significant accomplishment. I think the feature that stands out the most is the vastly improved error messages, which will help developers write better code faster. I have found from my own experience that a subtle error in a program can take far more time to figure out than writing the bulk of the program itself. I think that is often due to poor wording of error messages, or to no error message being given at all.
In its latest iteration, LLVM 2.6 offers production-quality C and Objective-C support, with compilation speeds up to 3 times faster than GCC 4, which is the current standard compiler for many projects across a variety of operating systems. So developers can not only find bugs faster but also rebuild their projects with fixes faster.
Although LLVM, which stands for Low Level Virtual Machine, only fully supports C and Objective-C at the moment, other projects are also making progress, such as C++ support and even more unusual efforts such as compiling PHP code to native binaries with Roadsend PHP for increased speed, or in other words, lower CPU requirements for heavily used websites.
What makes LLVM so desirable for many projects is the way it breaks components down into modules: to add support for a new language, all that is needed is to write a frontend for that language instead of a complete compiler. And of course, when your frontend is finished, you also get LLVM's optimizations for free. The same goes for the backend: if you want to add support for a new type of processor, once the backend is complete you can compile code written in any language LLVM supports.
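The frontend/backend split can be sketched in a few lines. This is a toy model, not real LLVM IR: the only point is that N frontends and M backends share one intermediate form (and one optimizer), instead of needing N times M monolithic compilers.

```python
# Toy illustration of the frontend / IR / backend split.

def c_frontend(source):
    # Pretend parse: turn "add 2 3" style input into a tiny IR.
    op, a, b = source.split()
    return [("load", int(a)), ("load", int(b)), (op, None)]

def optimizer(ir):
    # A "free" optimization every frontend benefits from:
    # fold two constant loads followed by an add.
    if len(ir) == 3 and ir[2][0] == "add":
        return [("load", ir[0][1] + ir[1][1])]
    return ir

def x86_backend(ir):
    return [f"mov eax, {val}" if op == "load" else op for op, val in ir]

def arm_backend(ir):
    return [f"mov r0, #{val}" if op == "load" else op for op, val in ir]

ir = optimizer(c_frontend("add 2 3"))
print(x86_backend(ir))  # one frontend, two targets
print(arm_backend(ir))
```

A new language only needs a new `c_frontend`-style function; a new processor only needs a new backend function, and the constant-folding pass works for both automatically.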
The push to use LLVM is huge, with Apple already using it for optimization of their OpenGL graphics stack. FreeBSD and DragonFlyBSD are actively working with the LLVM project to get their entire OS compiled with LLVM, mostly due to better features than GCC and also more compatible licensing.
LLVM is for most people a behind-the-scenes change, but those changes affect everyone as well. With its BSD-like open source license, which allows both open source contribution and closed source modification, it may even be adopted into commercial compiler suites as it becomes more stable. So if you need a headache-free C/Objective-C compiler, or want to modify one for your own in-house use, check out LLVM!
Read all about it at llvm.org
Sunday, October 18, 2009
Journal 8, October 18
Article: Line sharing best solution for slow, expensive US broadband
The Internet has become quite pervasive in our way of life in the US and even in other countries. But the fact of the matter is that Internet service in the US isn't what it should be. For instance, up until a month or two ago, the fastest Internet connection I could get for under $50 was dialup, which has been obsolete for years. Even now, with 1 Mbps cable Internet available for about $25, there isn't any competition going on, since there aren't any other providers.
Policy and legislation aside, I think infrastructure challenges are what have kept faster Internet from coming my way for a good while, since I am beyond the maximum range for DSL from the central office. I am not alone here either, since nearly half the population in my area lives out of town and must either rely on cable Internet, usually with only one choice of provider, or, in the absence of cable, use dialup or prohibitively expensive satellite Internet.
The reason I think 1 Mbps Internet is still outrageously priced is that for that price people living in the UK can get TV, Internet, and phone service for less than $30 total from Sky. In the US, the prices on comparable services are at least 3 times higher! I don't think the prices are high due to a lack of competing technologies but rather due to a lack of competing service providers. The technology used really matters very little once you think about it.
Another misconception cleared up by the report is that higher population density is the primary contributor to faster Internet. According to the article, reports show that some countries, such as Japan, Korea, and the Netherlands, are far outperforming what mere population density advantages would predict.
I think that if the US were to adopt more open, competition-inducing policies, we would see faster Internet service and better broadband availability. Companies would be forced into competition that currently aren't really competing, since, as in my case, there is often no choice for broadband except one company. While the Internet is not the answer for everything and certainly has its rough spots, it's a shame that many areas in the US are getting left behind technologically. The Internet was designed to be, and still is, an excellent educational tool, without which many homes will likely be poorly equipped to complement learning done at school.
Original Berkman Center Research Paper
Friday, October 9, 2009
Journal 7, October 11
Article: Harvard's Robotic Bees Generate High-Tech Buzz
At Harvard University, an ambitious project has been started to create tiny robotic bees designed to operate as a colony. The RoboBee Project, as they are calling it, has been granted 10 million dollars from the National Science Foundation toward its goals.
The project seems to have goals similar to the DelFly's, only on a far smaller scale. The small size would make them less noticeable, and their low visual profile could make them useful in covert military operations.
I imagine that for systems like this to see any widespread use other than as children's toys, the amount of time that can be spent in the air must be vastly improved. For instance, the DelFly II can only hover for 8 minutes, or manage 15 minutes of horizontal flight. A recent leap in battery technology, which I bet has left many chemical engineers slapping their heads that they hadn't thought of it sooner, may allow for this: lithium-air battery technology, in which the air around the battery is used as part of the cathode, for a theoretical gain of up to 10x in capacity over standard high-capacity lithium-ion batteries. The major gain for small flying robots is that the cathode is air and doesn't weigh down the robot.
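As a back-of-the-envelope check on what that 10x could mean, here is the naive arithmetic: if capacity scales 10x at roughly the same weight, endurance scales about 10x too. This deliberately ignores real aerodynamics and the fact that a lithium-air cell's mass changes as it consumes oxygen; the numbers are just the DelFly II figures from above.

```python
# Naive endurance scaling: flight time is roughly proportional to
# battery capacity when the airframe weight stays the same.

delfly2_hover_min = 8        # reported hover endurance (minutes)
delfly2_forward_min = 15     # reported horizontal-flight endurance
capacity_gain = 10           # theoretical lithium-air improvement

print(f"hover:   {delfly2_hover_min * capacity_gain} min")
print(f"forward: {delfly2_forward_min * capacity_gain} min")
```

Even with only a fraction of the theoretical gain realized in practice, an hour-plus of flight would move these machines from toy demos into genuinely useful territory.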
There could be some dangerous implications if the technology got into the wrong hands, such as the unnoticed remote spreading of infectious disease. Of course, that doesn't mean we should live in fear; quite the contrary, life and progress must go on. I mean, even today a flyby of a model aircraft spreading its payload over a small area might not be noticed at all.
Other than terrorism, I think the obvious danger of such small remote devices is to privacy. Imagine if such devices were deployed everywhere, similar to how CCTV is in Britain. I think security cameras are fine things to have in stores; they are often used by police to help track down thieves. But when the camera starts following you around, you could hardly be called paranoid if it makes you feel uneasy.
On the other hand, if these RoboBees were unleashed into a field and equipped with cameras and lasers, I think it would make for a great online flying laser tag experience. It would probably have to be hosted locally, however, due to latency issues over the Internet. Even so, I think it would be more feasible than OnLive, which would have ridiculous hardware costs for rendering and streaming game content, requiring hosting in areas local to the players to keep the enormous amounts of data off the Internet backbones and maintain low latencies.
While I don't think they will be of much practical use other than for surveillance, I think they would probably sell like hotcakes for a couple of years, assuming they cost under $100, and raise the technological bar set in people's minds yet again.
So have fun making RoboBees, Harvard, but please don't give them stingers!
Saturday, October 3, 2009
Journal 6, October 3
Article: Red Hat addresses Supreme Court on software patents
Red Hat, a Linux vendor based here in North Carolina, is stepping up to the plate, asking the Supreme Court to recognize that software is not patentable. Red Hat has a long history of innovation and contribution to the free software community.
Though many people may not notice, patents affect everyone. On nearly everything you buy there is printed a patent number, several patent numbers, or even "patent pending" if it went into production before the patent registration was complete. The US Patent and Trademark Office puts it this way: "Patents protect inventions, and improvements to existing inventions." Although the US patent office defines what a patent is, there is still some debate about patents and software.
Many people think that patents should only apply to machines or devices that take some input and provide some output, which is how people generally think of patents. However, in past years patents have also attempted to cover software. I personally see this as a problem, and many others do as well.
The problem with patenting software is that it limits innovation and progress, which are what patents were originally designed to promote. The reason I think software patenting inhibits growth is that even if a certain software feature is patented, it should be reimplementable in a different way, unlike today, where a patented feature simply can't be used by other software at all. A recent example would be OpenGL 3 in the popular Mesa software renderer, which has hit a snag at fully supporting OpenGL 3 because floating-point textures and a few other features are patented.
I guess this is all due to the mentality that whatever you see on the screen is the software. The problem with that is that software is much more than that: a lot goes on behind the scenes, and if a company wants to reimplement a piece of software with different internal workings, it should be able to do that.
Of course, you can look at the Wine project and see a healthy example of this very thing happening. Microsoft owns the copyrights to the Windows source code and can do whatever it likes with it. However, since it falls under copyright as software, it can be reimplemented differently by someone else. This to some degree lessens monopolization, and I think it is healthy for the software ecosystem.
A few examples of the sort of things that pop up in software patents:
Google's launch page
Apple's 3d desktop patent
Patent on drawing a cursor you can always see with the XOR function
Patent on saving the image data behind a window
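The XOR-cursor patent is a good illustration of how obvious these can be: it covers the textbook trick of drawing a cursor by XORing its pixels with the framebuffer, which both inverts whatever is underneath (so the cursor is always visible) and erases perfectly when you XOR the same pixels again. A minimal sketch of the idea, with a made-up framebuffer:

```python
# The classic XOR-cursor trick: XOR the cursor bitmap onto the
# framebuffer to draw it, and XOR it again to erase it, restoring
# the screen exactly -- no saved backing store needed.

def xor_blit(framebuffer, cursor, x, y):
    """XOR a small cursor bitmap onto the framebuffer at (x, y)."""
    for row, line in enumerate(cursor):
        for col, pixel in enumerate(line):
            framebuffer[y + row][x + col] ^= pixel

screen = [[0b1010, 0b0101], [0b1111, 0b0000]]
original = [row[:] for row in screen]
cursor = [[0b1111, 0b1111], [0b1111, 0b1111]]

xor_blit(screen, cursor, 0, 0)   # draw: every covered pixel inverted
assert screen != original
xor_blit(screen, cursor, 0, 0)   # erase: screen restored exactly
assert screen == original
```

That a two-line property of the XOR operator (x ^ k ^ k == x) could be patented at all is exactly the kind of thing Red Hat's brief is objecting to.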
As you can see, another problem with software patents is that they often cover the obvious best method of doing something. How can progress continue if we are constantly being forced to discover different, worse ways of doing things? And in the case of Google's web page, surely they should have applied for a trademark on their logo, so people could tell it is their page, rather than patenting the web page itself. As it stands, every kid on the block with a text editor and a homepage is liable to be sued should Google find their page a bit too much like Google's. I personally think it's a free country, and should I wish to make a webpage with a logo, a search box, and two buttons, I should be able to do that without hearing from the likes of Google.
I applaud Red Hat and the Open Invention Network, which is also working toward freeing software from patents. Success would allow developers to once again develop software without the worry that they are encroaching on some company's IP. Writing software would return to its rightful status as an art form, like writing a book or painting a picture, and not like designing a piece of hardware.