Archive for July, 2009

Deploying WebLogic on Linux

The rising business trend toward using open source software platforms has brought an increase in the number of critical applications deployed on Linux and BEA WebLogic. For many organizations, in fact, WebLogic deployments are their first major Linux installation.

This article provides an overview of deployment considerations when using a Linux/WebLogic combination.

Linux deployments span everything from traditional Intel-based servers and grid environments to mainframe systems (IBM’s z/VM with Linux guests, for example). In this article we will cover only the Intel architecture; however, almost all of the points covered are applicable to non-Intel deployments as well.

Why Linux?
Why the increasing number of deployments? Linux provides an alternative to proprietary operating systems. It can offer lower cost of ownership for some customers and has a large following of skilled workers. The Linux operating system is highly configurable and the source is usually available, so you can change the behavior or recompile options that are specific for your site. Lastly, a number of vendors support Linux, allowing the customer to pick the application software and hardware that is right for them.

Picking Your Distribution
WebLogic currently supports the major Linux distributions (Red Hat and SuSE). Refer to the BEA site for the updated list of supported configurations. Both Red Hat and SuSE include additional features (such as cluster services) that may be useful for your installation. At the time of this writing, Red Hat had just released Enterprise Linux v3, so check the certification pages for this version of Linux, as several important enhancements have been added to the kernel, such as the Native POSIX Threading Library (NPTL).

Picking Your JVM
BEA’s JRockit JVM can be used on an Intel Linux deployment and can provide many benefits as it supports both 32- and 64-bit environments. JRockit is designed for server-side execution and has advanced features like adaptive optimizations that can improve performance of the application. If you are running on a different platform (zLinux, for example) refer to the BEA supported platform page for the supported JVM.

Installing the JVM (JRockit)
JRockit’s installation is simple: download the installer for your platform, execute the downloaded file (./jrockit-8.1sp1-j2se1.4.1-linux32.bin), and follow the on-screen prompts.

If you’re running on an Intel processor with Hyper-Threading enabled, there is one extra step once the installation is completed. The cpuid for each processor (real and virtual) must be readable by any process; this can be achieved automatically or by changing the /dev/cpu/X/cpuid (X is the CPU number) file permissions. Refer to the JRockit Release Notes for all the details on enabling this support.

Installing BEA WebLogic
Just as with JRockit, the installation of BEA WebLogic is very simple. Download the distribution for your environment and execute the download (./platform811_linux32.bin). The installer provides a GUI (the default) or console (non-GUI) installation option. If you are installing on a platform without a GUI or on a remote system, you can bypass the GUI with the “-mode=console” option when you start the installer. Either option walks you through the interactive installation process, allowing you to select installation options and the home directory.

A number of factors must be considered when deploying BEA WebLogic on Linux. For example, configuration of the J2EE application server and the surrounding ecosystem must be properly planned so that the best performance can be achieved. Start planning how the environment will be maintained before it is deployed; this preplanning will pay off once the application is in production.

Collecting performance metrics on the application and its supporting infrastructure is very important (even before production). Recording these metrics prior to production enables capacity estimates to be built and also allows a reference baseline to be created so that changes to the application or environment can be validated against the baseline prior to a production deployment.

Once in production, collecting and persisting these metrics allows a performance model to be established.

Most vendors have a service to keep you informed via e-mail about patches and updates. Be sure to sign up for these services, and make sure the e-mails go to a number of people within the responsible IT group. After all, if the notifications go to only one user, you can imagine what would happen if that user were on vacation when an emergency patch was posted.

Although some automatic update services are available, I would hesitate to use them and would opt for the notification of updates first. Then you can decide what is applicable for your installation and if any cross-vendor dependencies exist.

Although products from different vendors typically play well together, the combination of your applications and the vendor’s solution may require testing within your environment before a production deployment. Use the measurements taken to compare the performance delta before and after deploying into production.

One tool to consider for your Linux deployments is Tripwire. Both the open source and commercial variants can be very helpful in identifying the “what changed during the weekend” syndrome. Using Tripwire to create a baseline of the servers, in addition to your change management process, can help you validate software and file consistency or roll back changes.

Environment Visibility
A BEA WebLogic application often has a number of external touch points that are non-Java. Examples of these are Web servers and databases. The overall performance of the WebLogic application is influenced by how well these other components execute and the overall performance of Linux.

Examples of gathering EPA (Environment Performance Agent; see sidebar, page 10) data include the following:

  • Linux VM data
    - Is too little memory available, causing Linux to swap?
    - How many tasks are pending and what is the load average?
  • Web server data
    - How many HTTP errors occurred between measurements?
    - Are the child threads hung?
  • Database
    - How much space is remaining?
    - What is the cache hit ratio?
  • Network
    - What IP is generating the most requests?
    - Any event alerts on the network?

What Should You Monitor?
This is a loaded question and the answer really depends on the application and your own goals for monitoring and measuring success.

As a general rule of thumb, in addition to the J2EE components within the application, anything that feeds the application, or which the application server relies on to process a request, should be monitored. Review the Environment Visibility section above and consider the touch points your own application has. How do you measure availability and acceptable performance and what are you going to actually do with the data you collect (which is very valuable)?

Why aren’t there any journalistic startups?

I think it’s time we all agree that the news industry is failing. Hundreds of newspapers have declared bankruptcy and gone under in the past couple of years — and thousands of journalists are out of work. But I’m curious: what are all these journalists doing? Lying down and giving up? I’m wondering why I don’t see a flurry of journalistic startups.

The state of startups

Call it “Valley Culture” or however the hipsters want to spin it — there is a definite attitude of entrepreneurship in California. I’ve lived around it my whole life. People are itching to start companies so badly that they get VCs to give them extraordinary amounts of money for really dumb ideas. I mean, really dumb ideas. Ideas that never had a hope in the world of making money, let alone becoming popular.

My point being: if we can get VCs to put up millions of dollars for practically any idea, why don’t we see more lean journalistic startups? Nothing fancy, just some (good) reporters, editors, and a small syndication (web) team. Editors & reporters generally get paid shit, and you wouldn’t need more than 2-3 tech people to support a couple dozen reporters with today’s technology — so a few million would go a long way.

It’s not the news that’s dying, it’s the news organization

Increasingly I’ve been hearing the same mantra from smart people around the web: It’s not the news that’s the problem, it’s the newsroom. In any modern newspaper, the people producing content (editors & reporters) are a small fraction of the costs. One of my favorite quotes on the subject comes from Mr Gruber:

The question these companies should be asking is, “How do we keep reporting and publishing good content?” Instead, though, they’re asking “How do we keep making enough money to support our existing management and advertising divisions?” It’s dinosaurs and mammals.

The truth is, people are hungry for news. And there’s plenty of money to be made. I don’t see TechCrunch or Mashable hurting for money. And they’re out there just producing bottom-of-the-barrel reporting. Could you imagine how much money someone would make if they had a TechCrunch style news organization with real reporters? People might even start trusting them as a source of information.

So where are the startups?

Maybe it’s me, but the answer seems so clear in my head. We have thousands of unemployed journalists. Good journalists. We have VCs ready to hand out money for a shit sandwich. We have a proven business model. Why don’t I see a flurry of journalistic startups? Get rid of the cruft of the newsroom, give power to the reporters and content producers.

Stop trying to grasp onto idiotic ideas like “social news” or stabbing blindly at Twitter in hopes of saving an archaic organizational structure. People aren’t buying printed newspapers? Stop printing them. People only want to read their news online? Let them read it online.

America needs to stop concentrating on how to save our dying industries and start concentrating on how to create the next booming industries. Isn’t that what the American dream is all about, anyways?

Introduction to the Dojo Toolkit

Dojo is quite a lot of things. It has a staggering number of widgets, to begin with: dialogs, panes, menus, WYSIWYG editors, buttons, color pickers, clocks, layout managers and a host of other things — just in the widgets department. Then there’s the very handy encryption package, useful for hashing things going to and from the server side; the drag-and-drop package, which works with nearly any element on the page; the essential Collections API, with Java-like iterators and whatnot; and of course the powerful Ajax functionality, with several bells and whistles of its own.

Apart from the sheer amount of functionality available in dojo, there are a few architectural differences compared to most other frameworks: dojo uses namespaces. This means that dojo always includes the package name in an object reference. If I want to use the very nice for-each loop function, for instance, I have to refer to it like this: “dojo.lang.forEach(listOfThings, myFunc);” instead of just “forEach(listOfThings, myFunc);”.

It seems like a lot of trouble and a waste of space, but in reality it’s not a big change, and it increases readability when debugging or refactoring things later. Another example: when you want to refer to a DOM element the “dojo way”, you write “dojo.byId(id)” instead of Prototype’s inarguably more compact “$(id)”. Another big difference in philosophy between dojo and Prototype is that Prototype has a long and glorious history of changing basic JavaScript objects, such as adding useful new functions to the String object.

This has resulted in collisions or erratic behavior when using other JavaScript libraries that want to change, or assume a certain functionality of, the very same function names. By using namespaces, dojo ensures that no collisions occur between itself and any other libraries on the same page.
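The collision-avoidance argument is easy to demonstrate without dojo itself. Here is a minimal plain-JavaScript sketch; the libA/libB namespaces and their forEach implementations are made up for illustration, not part of dojo:

```javascript
// Two "libraries" both want a global forEach helper. Defined globally,
// whichever library loads last silently wins:
var forEach = function (list, fn) { /* library A's version */ };
var forEach = function (list, fn) { /* library B's version clobbers A's */ };

// Namespacing sidesteps the collision: each library owns a single global
// object and hangs its functions off it, dojo-style (cf. dojo.lang.forEach).
var libA = {
  lang: {
    forEach: function (list, fn) {
      // iterates front to back
      for (var i = 0; i < list.length; i++) fn(list[i]);
    }
  }
};

var libB = {
  lang: {
    forEach: function (list, fn) {
      // a different implementation, living safely under its own name
      for (var i = list.length - 1; i >= 0; i--) fn(list[i]);
    }
  }
};

// Both coexist on the same page without stepping on each other:
var order = [];
libA.lang.forEach([1, 2, 3], function (x) { order.push(x); });
libB.lang.forEach([1, 2, 3], function (x) { order.push(x); });
// order is now [1, 2, 3, 3, 2, 1]
```

The cost is a longer call site; the payoff is that a page can load dojo alongside Prototype (or anything else) without one overwriting the other's functions.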

I’m going to use the dojo API version 0.4.2 in the examples, since the upcoming 0.9 only has reached milestone 2 as of this writing.

Getting the right stuff and copying the right files to your server

You might think that using a JavaScript-based framework should be dead simple. In many cases it is, but due to de facto standards set up by many smaller frameworks (or libraries), some design choices in dojo require reading some of the fine print – or reading this article :) . The most important thing to remember is that dojo is more than just the file dojo.js. It is not uncommon for people starting to use dojo to assume that the src/ directory really isn’t needed, and is probably shipped only as a kind of open source service to the developer.

However, when you download and unzip the “standard” dojo package (dojo 0.4.2-ajax), the dojo.js file is only the starting point – the kernel, so to speak – of dojo. All real functionality exists – and exists only – in files under the src/ directory. Also, most widgets have a lot of template HTML files and images that they have to get at, so the short dance version of this point is: copy everything.

Check the test to see how things are done

The biggest problem the dojo community faces (IMHO) is the lack of thorough API documentation and walk-through examples. True, there’s a very useful API tool (at least to the intermediate-level dojo hacker), and there are several good sites that give fairly up-to-date walk-throughs and examples in specific areas. But the really good bits can be found in the tests directory, which also ships with the standard package.

If you go to the dojo download site you’ll see two interesting directories: demo and tests. The reason I refer to the live URL at the download site is that you might want to poke around in other (upcoming) versions. The demo directory contains a fair number of demos, neatly organized in the following sections: Demo Applications (mail client, text editor), Effects (fades, wipes, slides, etc.), Drag and Drop, Storage, RPC, Layout Widgets, Form Widgets and General Widgets (buttons, menus, fisheye menus, tooltips, etc.). This is a good place to let your jaw drop a bit and get some inspiration.

But the really good stuff is found under the tests directory. Here you will find unit tests for almost all widgets, layout containers, graphics, and lots of other things you’ll find truly useful. The reason the tests are more useful is that they are short, focused and sometimes even (drumroll, please) commented! My recommendation is to check out tests/widget – test_ModalFloatingPane, test_Dialog and test_Menu2_Node – for some basic examples of how to use dojo widgets. Although dojo is an Ajax framework, much of the truly sterling functionality it offers has little if anything to do with client-server communication – as you will find out.

Quick & Dirty Referral Tracking

Ever wondered where people came from to sign up for your web app? Recently I wanted to track referrals for Tender and wanted something quick and dirty. The only problem? Our setup page is on a different domain than our marketing site. This meant I couldn’t use Google Analytics since it thought every “goal” came from exactly one place: the marketing site.

So instead, what I did was hack together a quick referral script using Javascript to track where people came from and add that to a field on the Site model (each install of Tender is considered a ‘Site’). The Javascript (in MooTools):

var Tracker = new Class({
  tracker: null,

  initialize: function(){
    this.initCookie();
    this.updateCookie();
    this.markLinks();

    // If the signup form is on this page, fill in the referral field
    var field = $('site_referral');
    if (field){
      field.value = this.tracker;
    }
  },

  initCookie: function(){
    this.tracker = Cookie.read('tracker');
    if (!this.tracker || this.tracker == "") this.setTracker();
  },

  // Order of precedence:
  // 1. ?source= in the URL
  // 2. ?utm_campaign= in the URL
  // 3. Referrer / Direct
  setTracker: function(){
    var final_source = document.referrer ? document.referrer : "direct";
    var args = $get();
    if (args.utm_campaign && args.utm_campaign.trim() != '') final_source = args.utm_campaign;
    if (args.source && args.source.trim() != '') final_source = args.source;
    Cookie.write('tracker', final_source, {duration: 1});
    this.tracker = final_source;
  },

  // Updates the cookie if another ?source or ?utm_campaign is set
  updateCookie: function(){
    var final_source = null;
    var args = $get();
    if (args.utm_campaign && args.utm_campaign.trim() != '') final_source = args.utm_campaign;
    if (args.source && args.source.trim() != '') final_source = args.source;
    if (final_source){
      Cookie.write('tracker', final_source, {duration: 1});
      this.tracker = final_source;
    }
  },

  // Append ?source= to every signup link so the value survives the domain change
  markLinks: function(){
    $$('a.signup-link').each(function(el){
      el.href += "?source=" + this.tracker;
    }, this);
  }
});

// Parse query-string (and hash) parameters out of a URL.
// $get("foo") returns the value of ?foo=...; $get() returns all
// parameters as an object, with the hash (if any) under "hash".
function $get(key, url){
    if (arguments.length < 2) url = location.href;
    if (arguments.length > 0 && key != ""){
        if (key == "#"){
            var regex = new RegExp("[#]([^$]*)");
        } else if (key == "?"){
            var regex = new RegExp("[?]([^#$]*)");
        } else {
            var regex = new RegExp("[?&]" + key + "=([^&#]*)");
        }
        var results = regex.exec(url);
        return (results == null) ? "" : results[1];
    } else {
        url = url.split("?");
        var results = {};
        if (url.length > 1){
            url = url[1].split("#");
            if (url.length > 1) results["hash"] = url[1];
            var items = url[0].split("&");
            for (var i = 0; i < items.length; i++){
                var item = items[i].split("=");
                results[item[0]] = item[1];
            }
        }
        return results;
    }
}

The way this works is the following:

  1. If someone arrives with ?source=something or ?utm_campaign=something (a Google Analytics keyword), the script stores that value in a cookie called ‘tracker’.
  2. If no ?source or ?utm_campaign can be found, it stores the referrer.
  3. If no referrer can be found, it stores the value ‘direct’.
  4. Every link with the class signup-link gets ?source= plus the tracker value appended, so that the referral gets tracked over to our setup domain.
  5. If it finds a field with the id site_referral (the Rails default for the Site#referral field), it sets that field’s value to whatever is stored in the tracker cookie.
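Stripped of the MooTools and cookie plumbing, the precedence in steps 1–3 boils down to a small pure function. Here is a sketch; the function name and its arguments are mine, not from the Tender code:

```javascript
// Decide what to store in the 'tracker' cookie.
// Precedence: ?source= beats ?utm_campaign= beats the referrer,
// and an empty referrer falls back to "direct".
function pickTrackerValue(params, referrer) {
  if (params.source && params.source.trim() !== "") return params.source;
  if (params.utm_campaign && params.utm_campaign.trim() !== "") return params.utm_campaign;
  return (referrer && referrer !== "") ? referrer : "direct";
}

// Examples:
var a = pickTrackerValue({ source: "newsletter", utm_campaign: "july" }, "");
// a === "newsletter"  (explicit ?source wins over the campaign)
var b = pickTrackerValue({ utm_campaign: "july" }, "http://example.com/");
// b === "july"        (campaign wins over the referrer)
var c = pickTrackerValue({}, "http://example.com/");
// c === "http://example.com/"
var d = pickTrackerValue({}, "");
// d === "direct"
```

Keeping the decision in one function like this also makes the precedence trivially testable, independent of cookies or the DOM.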

Now when people sign up, I can see where they came from in the admin panel:

Referral Screen

Best Webhosting Services

One of the major problems when opening a website is deciding which webhost to go for. I have listed the webhosting services below by rate, Windows or Linux hosting, bandwidth, etc.

I will also mention some providers offering free service in my next post, but that service is not suitable for professional web hosting. Please find the list below.

Cheap Linux Based Hosting

Host               Price/Month  WebSpace   WebTraffic  Emails     Domains                                      Technologies         Databases
Just Host          $3.95        Unlimited  Unlimited   Unlimited  One free for life; point unlimited domains   PHP, CGI, PERL etc.  Unlimited MySQL
PowWeb             $3.88        Unlimited  Unlimited   Unlimited  One; point unlimited domains and subdomains  PHP, CGI, PERL etc.  75 MySQL DB
-                  $3.95        Unlimited  Unlimited   2500       One; point unlimited domains                 PHP, CGI, PERL       50 MySQL
FatCow ($88 plan)  $4.83        Unlimited  Unlimited   Unlimited  One; point unlimited domains and subdomains  PHP, CGI, PERL etc.  MySQL DB
-                  $4.95        Unlimited  Unlimited   Unlimited  Free domain for life                         PHP, CGI, PERL       MySQL

Although the above-mentioned hosting services are the cheapest I found, I have no experience hosting with them. The sites mentioned below are costlier than the ones above; I have experience hosting with a few of them and was happy with their services.


  • Copyright © 1996-2010 BlogmyQuery - BMQ. All rights reserved.