
Monthly Archives: December 2010

If you need to download many files from hosts like MegaUpload or RapidShare, you can use JDownloader, a free and very useful tool.

You just copy all the download links and paste them into the program; it then downloads everything automatically and even handles the captchas for you 🙂

If a file requires a password to extract, it will also ask you to enter it and then extract the archive automatically.

I think this software is useful; you can just relax and wait while JDownloader finishes everything for you. Hehe. 🙂


Automation programs are designed with ease of use in mind, so the simpler they are, and the more they rely on graphics rather than commands, the better they are for the average user. vTask Studio is such an application: it can automate most commonly performed tasks and even execute some high-end commands, bringing you automation in the real sense. What's more, the program relies on a completely graphical user interface, so no programming is needed at all.


vTask Studio can handle all common tasks such as launching applications, mouse actions, loops, etc., and can even handle advanced automation functions like checkpoints and database queries. However, what really sets this application apart from similar programs is features like image matching and an integrated EXE compiler, which allow visual detection of on-screen images and automated compilation of executable files, respectively.

[Screenshot: vTask Studio]

Some salient features of vTask Studio include:

XML File Format – the program uses text-based XML files for its operations. Not only can these be easily modified with any text editor, they also provide transparency: you can see which instructions are being executed without any specialized software. Plus, XML is easy to query (see the illustrative sketch after this feature list).

EXE Compiler – you can take any of your automation scripts and compile it into a standalone Windows executable. This makes it possible to distribute your automation scripts to people who don't use vTask Studio, with no strings attached.

Image Recognition – vTask Studio contains advanced image matching and recognition algorithms that allow visual detection of on-screen images. This makes creating automation scripts for internet-based activities even easier, since the program recognizes elements based on what is actually displayed.
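To give an idea of why a text-based format matters, here is a purely illustrative sketch of what an automation script stored as XML might look like. The element and attribute names below are made up for the example and are not vTask Studio's actual schema; the point is simply that such a file stays readable and editable in any text editor:

    <?xml version="1.0"?>
    <!-- hypothetical automation script; not the real vTask file format -->
    <script name="open-notepad">
      <launch app="notepad.exe"/>           <!-- start an application -->
      <wait seconds="2"/>                   <!-- pause before the next step -->
      <type text="Hello from automation"/>  <!-- simulated keystrokes -->
    </script>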

The program mostly relies on drag & drop actions, but plenty of keyboard shortcuts are also available, giving you the freedom to choose. The Options menu contains tweaks for run time, key mapping, the main grid view, colors and theming, logs and files, and some miscellaneous settings.


vTask Studio is available for all versions of Windows, from 95 to Windows 7. It works with both x86 and x64 architectures.

Download vTask Studio


Editing the styles of documents written in a markup language (HTML/XHTML, etc.) is easy once you get the hang of CSS syntax. Even though understanding the classes defined in a CSS (Cascading Style Sheets) file is not difficult, many users still find it challenging to edit document styles and formatting defined in a CSS file. Simple CSS is out to simplify the creation and modification of external CSS files. The application is written for both novices and advanced users: since it displays all the CSS classes in sequence, web developers will find it useful for quickly locating a class and editing its attributes, while beginners won't need to learn the language syntax at all, as it offers pull-down menus for each type of style to define the respective attribute values.


With Simple CSS, you can either start a new project to create a CSS file from scratch or import an existing CSS file to change its styles and formatting. However, the real value shows when you need to edit a previously written CSS file: just import it and start editing the CSS classes in the same order they are referenced in the corresponding HTML file.

The left sidebar lists all the classes in the CSS file, while the main window contains extensive options to customize the Text, Display, Borders, and Dimensions properties defined in the selected class. Plenty of pull-down menus let you easily change attributes and enter new values.
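For readers new to CSS, here is a minimal example of the kind of rule such an editor manipulates; the class name .article-title is just an illustrative choice:

    /* a CSS class combining Text, Borders and Dimensions attributes */
    .article-title {
      font-family: Georgia, serif;    /* Text */
      font-size: 24px;
      color: #333333;
      border-bottom: 1px solid #ccc;  /* Borders */
      width: 80%;                     /* Dimensions */
    }

An HTML document applies it with something like <h1 class="article-title">…</h1>, which is why editing the rule in one place restyles every element that uses it.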

[Screenshot: Simple CSS]

To create a new CSS file, click New Project and enter an appropriate name to start working on the new file. The application can also create a duplicate file on the fly to keep the original safe. Once you've finished editing the CSS file, just click Export to save it.

The application worked successfully during testing on our Windows 7 x86 system. It supports Windows XP/Vista/7 and Mac.

Download Simple CSS


To queue a file with Rapidshare Auto Downloader, copy the URL of the file from Rapidshare.

[Screenshot: Rapidshare file link]

Now paste this URL under Links and click Add Link. Once all URLs have been added, click Start Download to begin downloading your files.

[Screenshot: Rapidshare Auto Downloader main window]

You can opt to shut down the computer once all downloads have completed. You can also change the download destination and schedule downloads. Another great feature is the ability to scan your downloaded files with any anti-virus program.

[Screenshot: download complete message]

Features include:

  • Download a group of links from Rapidshare (one by one).
  • Retry incomplete downloads until all links are finished.
  • Auto-shutdown feature.
  • Load and save the download list.
  • Check for new versions automatically.


Hehehe, this is the first time I have applied SEO to my WordPress website, and it brought me success after 3 months.

Here is the result; it's not much, but it made me happy 🙂

If you have a website or blog, or if you work with anything related to the Internet, you’ll certainly need to know a bit about search engine optimization (SEO). A good way to get started is to familiarize yourself with the most common terms of the trade, and below you’ll find 20 of them. (For those who already know SEO, consider this post as a refresher!).

1. SEM: Stands for Search Engine Marketing, and as the name implies, it involves marketing services or products via search engines. SEM is divided into two main pillars: SEO and PPC. SEO stands for Search Engine Optimization, and it is the practice of optimizing websites to make their pages appear in the organic search results. PPC stands for Pay-Per-Click, and it is the practice of purchasing clicks from search engines. The clicks come from sponsored listings in the search results.

2. Backlink: Also called an inlink or simply a link, it is a hyperlink on another website pointing back to your own website. Backlinks are important for SEO because they directly affect the PageRank of any web page, influencing its search rankings.
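In HTML terms, a backlink on someone else's page is just an ordinary anchor element pointing at your domain (example.com below is a placeholder):

    <!-- on another site, linking back to yours -->
    <a href="http://example.com/">Crazy blog</a>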

3. PageRank: PageRank is an algorithm that Google uses to estimate the relative importance of pages around the web. The basic idea behind the algorithm is that a link from page A to page B can be seen as a vote of trust from page A to page B. The higher the number of links to a page (weighted by their value), therefore, the higher the probability that the page is important.
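For the mathematically curious, the simplified formula from Brin and Page's original paper captures this voting idea. Here d is a damping factor (commonly set to 0.85), T_1 … T_n are the pages linking to page A, and C(T) is the number of outbound links on page T:

    PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)

Each page thus splits its own PageRank among the pages it links to, which is why a link from an important page with few outbound links is worth more than one from a page that links to everything.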

4. Linkbait: A linkbait is a piece of web content published on a website or blog with the goal of attracting as many backlinks as possible (in order to improve one's search rankings). Usually it's a written piece, but it can also be a video, a picture, a quiz or anything else. A classic example of linkbait is the "Top 10" lists that tend to become popular on social bookmarking sites.

5. Link Farm: A link farm is a group of websites where every website links to every other website, with the purpose of artificially increasing the PageRank of all the sites in the farm. This practice was effective in the early days of search engines, but today it is seen as a spamming technique (and thus can get you penalized).

6. Anchor Text: The anchor text of a backlink is the clickable text on the web page. Having keyword-rich anchor text helps with SEO because Google will associate those keywords with the content of your website. If you have a weight loss blog, for instance, it would help your search rankings if some of your backlinks had "weight loss" as their anchor text.
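Concretely, the anchor text is whatever sits between the opening and closing tags of the link (the URL is a placeholder):

    <a href="http://example.com/">weight loss</a>  <!-- "weight loss" is the anchor text -->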

7. NoFollow: Nofollow is a link attribute used by website owners to signal to Google that they don't endorse the website they are linking to. This can happen either when the link is created by the users themselves (e.g., blog comments) or when the link was paid for (e.g., sponsors and advertisers). When Google sees the nofollow attribute, it basically won't count that link in its PageRank and search algorithms.
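It goes in the rel attribute of the anchor element; this is the standard syntax:

    <a href="http://example.com/" rel="nofollow">a link that passes no endorsement</a>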

8. Link Sculpting: By using the nofollow attribute strategically, webmasters were able to channel the flow of PageRank within their websites, thus increasing the search rankings of desired pages. This practice is no longer effective, as Google recently changed how it handles the nofollow attribute.

9. Title Tag: The title tag is literally the title of a web page, and it's one of the most important factors in Google's search algorithm. Ideally, your title tag should be unique and contain the main keywords of your page. You can see the title tag of any web page at the top of the browser while navigating it.
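The tag lives in the HEAD section of the page; the wording below is just a made-up example:

    <title>20 SEO Terms You Should Know | Crazy blog</title>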

10. Meta Tags: Like the title tag, meta tags are used to give search engines more information about the content of your pages. Meta tags are placed inside the HEAD section of your HTML code and thus are not visible to human visitors.
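Two common examples are the description and keywords meta tags; the values here are illustrative only:

    <head>
      <meta name="description" content="A blog about free software and SEO tips.">
      <meta name="keywords" content="seo, free software, downloads">
    </head>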

11. Search Algorithm: Google’s search algorithm is used to find the most relevant web pages for any search query. The algorithm considers over 200 factors (according to Google itself), including the PageRank value, the title tag, the meta tags, the content of the website, the age of the domain and so on.

12. SERP: Stands for Search Engine Results Page. It’s basically the page you’ll get when you search for a specific keyword on Google or on other search engines. The amount of search traffic your website will receive depends on the rankings it will have inside the SERPs.

13. Sandbox: Google basically has a separate index, the sandbox, where it places all newly discovered websites. When websites are on the sandbox, they won’t appear in the search results for normal search queries. Once Google verifies that the website is legitimate, it will move it out of the sandbox and into the main index.

14. Keyword Density: To find the keyword density of any particular page, you just divide the number of times the keyword is used by the total number of words on the page. Keyword density used to be an important SEO factor, as the early algorithms placed heavy emphasis on it. This is no longer the case.
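In formula form:

    \text{keyword density} = \frac{\text{keyword occurrences}}{\text{total words}} \times 100\%

so, as a quick worked example, a 500-word page that uses a keyword 10 times has a keyword density of 10 / 500 = 2%.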

15. Keyword Stuffing: Since keyword density was an important factor in the early search algorithms, webmasters started to game the system by artificially inflating the keyword density on their websites. This is called keyword stuffing. These days this practice won't help you, and it can also get you penalized.

16. Cloaking: This technique involves making the same web page show different content to search engines and to human visitors. The purpose is to get the page ranked for specific keywords and then use the incoming traffic to promote unrelated products or services. This practice is considered spamming and can get you penalized (if not banned) by most search engines.

17. Web Crawler: Also called a search bot or spider, it's a computer program that browses the web on behalf of search engines, trying to discover new links and new pages. This is the first step in the indexing process.

18. Duplicate Content: Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. You should avoid having duplicate content on your website because it can get you penalized.

19. Canonical URL: Canonicalization is the process of converting data that has more than one possible representation into a "standard" canonical representation. A canonical URL, therefore, is the standard URL for accessing a specific page within your website. For instance, the canonical version of your domain might be http://www.example.com instead of http://example.com.
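On the page itself, the canonical URL can be declared with a link element in the HEAD section; this rel="canonical" syntax is the standard one (the URL is a placeholder):

    <link rel="canonical" href="http://www.example.com/seo-terms/">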

20. Robots.txt: This is nothing more than a text file, placed in the root of the domain, that is used to inform search bots about the structure of the website. For instance, via the robots.txt file it's possible to block specific search robots and to restrict their access to specific folders or sections of the website.
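A minimal illustrative robots.txt might look like this (the folder name and bot name are placeholders):

    User-agent: *        # rules for every crawler
    Disallow: /private/  # keep crawlers out of this folder

    User-agent: BadBot   # block one specific bot entirely
    Disallow: /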