FusionArtWorks

What do you think of modern art?

This is a short post to introduce a promising digital art studio based in London, FusionArtWorks.

It is traditional art meeting digital art: they create a unique interpretation of any artwork you can imagine. See a sample of their work below, or browse their online Art Gallery.

Marilyn is Alive – Unique Canvas ArtWork in Klimt Style

La Dame de Vie – Unique Canvas ArtWork in Klimt Style

Kissing the Sun – Unique Canvas ArtWork in Miró Style

Applemoon – Unique Canvas ArtWork in Cézanne Style

Tired of Firefox fast releases?

Firefox has been releasing major versions every 6 weeks since April 2011, going from version 4 to version 20 (at the time of writing). I want to present some numbers on versions and browser market share since then.

What was the goal of the fast releases? To quote Wikipedia:

« The stated aim of this faster-paced process is to get new features to users faster »

Features have indeed been added, but for busy users and workers that is not the priority: we just need a browser that works, gets faster, and doesn’t get in the way. I’m speaking as a user and also as a developer who loved Firefox in the past. Honestly, from a user’s perspective the releases do not feel like a flood of new features. Back in 2011, there were lots of articles explaining the move and its possible issues, with good analysis, thoughts, and user concerns.

Let’s get into the numbers. I prepared a graph that shows the version evolution over time, using the data from the History of Firefox article on Wikipedia.


Looking at the version releases, one can clearly see that the release pace is now very fast, especially compared to earlier releases. Despite this effort, market share continued to decrease. See below the market-share graph; the data is from StatCounter, aggregated by Wikipedia (desktop and mobile combined).

Firefox and Internet Explorer lost market share to Chrome. However, IE was losing share faster until early-to-mid 2012, after which Firefox and IE declined at a similar pace. What is interesting is that the lines for Firefox and Chrome are almost mirror images since early 2011, when the fast releases started.

Maybe you’re wondering what happened before 2011. The following graph plots browser market share since 2007, using data from W3Counter.

One can clearly see that the browser that lost the most is Internet Explorer. Firefox is still decreasing, but in mid-2010 it had more than 30% market share. The roughly 10% lost from then to now looks like a steady decline, and there is no hint that the fast releases either helped or hurt Firefox.

What would it take for users to get excited about Firefox (again)? These are my thoughts:

    1. Silent upgrades. Do not ask users to press a button to upgrade every 6 weeks. There are other means to catch user attention, and hence loyalty: opening a tab after an upgrade, as Firefox does, is a good one, or even putting the version number in clear sight in the user interface.
    2. Javascript performance. This would definitely be a killer feature, and it is the reason why Chrome is currently my browser of choice. It seems to be coming with OdinMonkey (only about 2 times slower than native code) in Firefox 22, in June 2013.

What about you? Do you use Chrome, Firefox, both?

Apache Server: KeepAlive, good practice

The KeepAlive parameter is more important than you might think; we learnt that the hard way. Many good articles have discussed the KeepAlive topic (e.g. KeepAlive On/Off) with good arguments, so here I’ll present a summary instead.

I personally set up our Apache server for a development machine in EC2/VPC (Amazon Web Services). I set the KeepAlive parameter to On because we were more concerned with speed than with memory. The server ran fine, with Apache occasionally spiking in memory usage. The problems started when our blog’s traffic began to grow: it suddenly became practically impossible to connect to the server, which was running low on memory, with Apache taking up to 1GB! And, as usual, at the worst moment. The first time, a graceful restart of Apache fixed the problem, but it kept getting worse.

How much traffic did it take? Not that much: a simple Apache Benchmark test with 20 concurrent connections and 50 requests in total was enough to render the server unusable.

ab -c 20 -n 50 http://our-server/

Just imagine several (smart) spiders hitting your server; that would do it.

Then I had to set the KeepAlive value to Off. Requests take a bit longer, but our server can now take much more load. You can fine-tune the worker values too; see the sketch after the command below.

sudo vi /etc/httpd/conf/httpd.conf  # on CentOS; set: KeepAlive Off
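
For reference, here is a minimal sketch of the relevant httpd.conf directives, assuming the default prefork MPM of Apache 2.2; the numbers are illustrative starting points to adapt to your available memory, not recommendations:

# Disable persistent connections so each worker is freed immediately
KeepAlive Off
# If you keep KeepAlive On instead, at least bound its cost:
#   MaxKeepAliveRequests 100
#   KeepAliveTimeout 5

# Worker tuning for the prefork MPM (illustrative values)
<IfModule prefork.c>
    StartServers         4
    MinSpareServers      2
    MaxSpareServers      8
    MaxClients          50
    MaxRequestsPerChild 4000
</IfModule>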

In conclusion: if you’re going to expose your server to a broad audience, set KeepAlive to Off. However, if you control the amount of traffic and the authorised hosts/machines that connect to your server, then you may set KeepAlive to On.

As a final piece of advice, block common attacks on your server. I personally use the rules that phpMyAdmin also uses. See below; you can add them to your VirtualHost configuration or to an .htaccess file.

# Make sure mod_rewrite is active
RewriteEngine On
# Allow only GET and POST verbs
RewriteCond %{REQUEST_METHOD} !^(GET|POST)$ [NC,OR]
# Ban typical vulnerability scanners and other unwanted clients
# Kick out script kiddies
RewriteCond %{HTTP_USER_AGENT} ^(java|curl|wget).* [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^.*(libwww-perl|curl|wget|python|nikto|wkito|pikto|scan|acunetix).* [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^.*(winhttp|HTTrack|clshttp|archiver|loader|email|harvest|extract|grab|miner).* [NC]
# Return 403 Forbidden for any match
RewriteRule .* - [F]

Hope it was useful and do not forget to optimize your server!

Amazon EC2 Attach Elastic Network Interface (VPC)

We are running our Virtual Private Cloud (VPC) using Amazon Web Services (AWS). In this particular case we wanted to add an extra network interface to one of our servers, an Amazon AMI (CentOS). This should be an easy task taking about 10 minutes, but you need to be careful, otherwise you’ll spend much more time.

The first step is to attach an Elastic Network Interface (ENI) to your instance. This can be accomplished using the Amazon console: first create a new interface (if you don’t have one available), then attach it either when launching a new instance or while your instance is stopped.
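
If you prefer the command line, the equivalent operation with the AWS CLI looks roughly like the sketch below; the interface and instance IDs are placeholders for your own:

aws ec2 attach-network-interface --network-interface-id eni-12345678 --instance-id i-12345678 --device-index 1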

Next, attach an Elastic IP address to your interface (if you need one). At this point your instance should have two network interfaces, each with an Elastic IP.

Connect to your VPC instance as usual and run the ifconfig command in your shell to see the list of interfaces.

ifconfig

What you see are the default interface (eth0) and the loopback interface; the new interface does not appear yet. You would then need to create an interface configuration manually (e.g. eth1), which is rather straightforward.

Edit: There is no need to create an interface configuration manually; Amazon provides ec2-net-utils to do that task for you. Issue the commands below to configure and bring up your new interface.

sudo ec2ifup eth1  
ifconfig

You should see your new interface listed as eth1, as in the image below (or eth2, as the case may be), and it is now reachable from other hosts in the same subnetwork (or from the internet, if an Elastic IP is attached to it).

If you make a mistake, you can bring the interface down with the counterpart command:

sudo ec2ifdown eth1


Alternatively, if you prefer to create the interface configuration manually, run the commands below:

cd /etc/sysconfig/network-scripts
sudo cp ifcfg-eth0 ifcfg-eth1
sudo vi ifcfg-eth1

The *only* parameter you need to change is the device name:

DEVICE=eth1

Save the changes and quit the editor. Finally, it only remains to restart the network:

sudo /etc/init.d/network restart

If you run ifconfig again, you should now see your new interface in the list.
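
For reference, a minimal ifcfg-eth1 using DHCP on an Amazon Linux/CentOS instance might look like the sketch below; the values are illustrative and should match your own setup:

DEVICE=eth1      # interface name, the only value changed from ifcfg-eth0
BOOTPROTO=dhcp   # let the VPC DHCP service assign the address
ONBOOT=yes       # bring the interface up at boot
TYPE=Ethernet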

For a longer explanation on how to set up an interface in your instance, have a look at this tutorial.

That’s it, you should now have both of your interfaces working. But to be safe, check that you can connect to and reach both interfaces (e.g. eth0 and eth1 in the example). If you ever find a problem, such as only one interface being reachable or functional, go to your Amazon console, detach and reattach the new interface (ENI), run the ec2ifup command again, and check your firewall settings as well.
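
For example, from another host in the same subnet you can check each interface with a quick ping; the addresses below are illustrative private IPs, use the ones assigned to your own interfaces:

ping 10.0.0.11   # private IP attached to eth0
ping 10.0.0.12   # private IP attached to eth1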

For a longer explanation of ENIs and the ec2-net-utils you can refer to the EC2 user guide.

Thanks for reading; I hope it helps somebody.

preg_replace compilation failed (Amazon EC2, WordPress)

We have an Amazon EC2 AMI (Linux) hosting the latest WordPress installation (3.4.1 at this point in time). Everything was running fine until an upgrade introduced a PHP warning:

« Warning: preg_replace(): Compilation failed: unknown option bit(s) set at offset 0 »

The first time I noticed it, I didn’t pay much attention, since the WordPress administration was still working despite the warning; some other functionality, like shortcodes, was not.

The problem arose when we needed the GD library, used in a WordPress theme, to generate png and jpg images on the fly. A simple install of the library (php-gd.x86_64, for our system) upgraded several dependencies to newer versions (php, php-cli, php-common, php-mysql, php-pdo). The install finished normally, without any warnings to worry about.

After a while digging through Google, I found this post describing a similar problem; most importantly, it hinted that the culprit was the PCRE library (libpcre), which was being loaded at different versions in Apache and in command-line mode. Running the PHP code below on the command line and in a web page will show the difference:

phpinfo();
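
For a quick comparison, you can filter the command-line output directly, as in the sketch below; for the Apache side, put the same phpinfo() call in a .php page and open it in the browser:

php -i | grep -i pcre   # command-line PHP; compare with the web page output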

The same post mentioned that a libpcre upgrade would fix the problem. Thus, I went for the upgrade from version 7.8-3.1.8 to version 8.21-5.3 on our Amazon Linux instance, running the following commands:

sudo yum search libpcre # Find the library
sudo yum install pcre.x86_64 # Install the right library.
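
After the upgrade, restart Apache so it picks up the new library; on Amazon Linux/CentOS that would be:

sudo service httpd restart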

If you’re running a Linux system other than the Amazon Linux AMI or CentOS, refer to your distribution’s docs; for example, on Ubuntu the package manager is apt-get.

This should never have happened if the dependencies had been checked correctly during the GD library install; maybe the libpcre version was not flagged as needing an upgrade. Anyhow, I hope this helps others save some time fixing this problem.

If you have any comments or further information on this issue, I’d appreciate it if you let me know.

Why do we need Flash?

Flash is a technology that most people surfing the web are aware of and used to: at least 99% of PC users had the Flash Player runtime installed as of July 2011, according to Adobe. Although those numbers come from a single source, and that source happens to be the company behind the product, they are credible enough, and other sites provide similar statistics too (e.g. RiaStats).

The applications of Flash have changed over time: in the beginning developers used it for animations and banners, then for playing online video, and all the major online video sites still use it widely, namely YouTube, Yahoo Video, Bing Video, Daily Motion, Vimeo and others. Most developers will agree that the best use of Flash is online video, thanks to its streaming capabilities using the Real Time Messaging Protocol (RTMP). However, since the arrival of HTML5, which can embed videos using the video tag, some players (e.g. the YouTube HTML5 player) have migrated or offer both alternatives. And there is more: Flash has gone far beyond that with Rich Internet Applications (RIA), offering development environments and APIs such as Flex Builder, based on the Flex SDK.

Thanks to all the tools and resources available for building Flash content, it is possible to achieve stunning visual results. However, as most authoring people will agree, Flash content must be built carefully. That means understanding the basics of the Flash players and how the content is executed. Developers should not forget that the code runs on the client side, since the players are just plugins executed inside the web browser, and that the computer’s memory and processor are taxed during execution. There is nothing more disturbing than quietly surfing the web and landing on a site whose Flash content makes your computer over-boost, giving you the feeling that it is trying to take off. That is not entirely the fault of the technology used to build or execute Flash content, but of the way the content was produced. As usual, latent problems open the door to new opportunities and solutions, and this time has been no exception. Some developers decided to migrate to HTML(5)/CSS(3) and AJAX, despite the pain of making it cross-browser compatible, and others have gone even further, developing browser plugins to block Flash content, such as FlashBlock for Chrome or FlashBlock for Firefox, which are used not just by a few but by millions all over the internet.

So, why do we need Flash? We need it to run the existing content that has been authored for Flash, although lots of sites have migrated or created a non-Flash version of their content for devices that do not support Flash, mainly iOS devices (iPad, iPhone, iPod) and devices with small screens. But is it possible to achieve the same results as Flash using HTML5/CSS/Javascript? It depends. If you’re just after visually good-looking sites or apps with animations and effects, then yes, CSS and Javascript can do that for you. However, if you are writing RIA applications, it depends on what you want to leverage and on your skills and background; there are plenty of frameworks, libraries and components available for Flash (AS2/AS3) as well as for Javascript (jQuery-based, MooTools, Node.js). Personally, I would go with the standards (HTML5/CSS3/Javascript); after all, Adobe is one of the active supporters of HTML5.

So, what do you need Flash for?