Frequently Asked Questions - Technical (Technical FAQ)
This FAQ answers some questions related to the technical workings of Wikipedia, including its software and hardware.
Note: If you're trying to get help for a specific technical problem that isn't answered by the FAQs, try asking in Wikipedia:Troubleshooting or at the village pump.
- When the second person (and later persons) attempts to save the page, MediaWiki will attempt to merge their changes into the current version of the text. If the merge fails, the user receives an "edit conflict" message and the opportunity to merge their changes manually. Multiple consecutive conflicts are detected and generate a slightly different message. This is similar to Concurrent Versions System (CVS), a widely used software version management system.
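MediaWiki's actual merge works on lines of text, much as diff3 does. The idea can be sketched with a minimal per-line three-way merge (an illustration only; it assumes all three versions have the same number of lines, so insertions and deletions are not handled):

```python
def three_way_merge(base, yours, theirs):
    """Minimal per-line three-way merge: succeeds when each line was
    changed by at most one editor, otherwise reports an edit conflict."""
    merged = []
    for b, y, t in zip(base, yours, theirs):
        if y == t:          # both editors made the same change (or none)
            merged.append(y)
        elif b == y:        # only the other editor changed this line
            merged.append(t)
        elif b == t:        # only you changed this line
            merged.append(y)
        else:               # both changed the same line differently
            return None     # merge fails: edit conflict
    return merged

base   = ["intro", "body", "end"]
yours  = ["intro", "better body", "end"]
theirs = ["new intro", "body", "end"]
print(three_way_merge(base, yours, theirs))  # ['new intro', 'better body', 'end']
print(three_way_merge(base, yours, ["intro", "other body", "end"]))  # None
```

When the function returns None, the wiki would show the "edit conflict" screen and let the user merge by hand.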
- If you entered your e-mail address when you signed up, you can have a new password generated. Click on the "Log in" link in the upper-right corner. Enter your user name, and click the button near the bottom of the page called "Mail me a new password". You should receive an e-mail message with a new random password; you can use it to log in, then go ahead and change your password in your preferences to something you'll remember.
- You can change your password via the 'my preferences' page (there's a link to it, normally in the top right corner of the screen); you can also find this preference page at Special:Preferences.
- The developers use MediaZilla to keep track of bugs. For more information, see Bug reports.
- To make an official feature request, use MediaZilla. For information on using MediaZilla, please see Bug reports.
- To discuss a new feature, go to MediaWiki feature request and bug report discussion. But remember, if you don't report your feature request at MediaZilla, it will probably never be implemented!
- Wikipedia originally ran UseModWiki, a general wiki script by Clifford Adams. In January 2002, we switched to a PHP script, which in turn was completely overhauled the following July to create what we now call MediaWiki.
- MySQL is used for the database backend, Apache is the web server, and PowerDNS is used for DNS.
- The Wikipedia servers' operating system is Linux. The most widely used distribution is Fedora. For details see Wikimedia servers.
- See m:Wikimedia servers.
- A brief history of Wikipedia serving:
- Phase I: January 2001 - January 2002
- One of Bomis's servers hosted all Wikipedia wikis, running the UseModWiki software
- Phase II: January 2002-July 2002
- One of Bomis's servers hosted all Wikipedia wikis; English and Meta ran on the new PHP/MySQL-based software, all other languages on UseModWiki. Both the database and the web server ran on one machine.
- Phase IIIa: July 2002-May 2003
- Wikipedia gets its own server, running the English Wikipedia and, a bit later, Meta, with the rewritten PHP software. Both the database and the web server run on one machine.
- One of Bomis's servers continues to host some of the other languages on UseModWiki, but most of the active ones are gradually moved over to the new server during this period.
- Phase IIIb: May 2003-Feb 2004
- Wikipedia's server is given the code name "pliny". It serves the database for all phase 3 wikis and the web for all but English.
- New server, code name "larousse", serves the web pages for the English Wikipedia only. Plans to move all languages' web serving to this machine are put on hold until load is brought down with more efficient software or larousse is upgraded to be faster.
- One of Bomis's servers continued to host some of the other languages on UseModWiki until it died. All are now hosted on pliny; a few more of the active ones have been gradually moved over to the new software, and an eventual complete conversion is planned.
- Phase IIIc: Feb 2004-present
- Wikipedia gets a whole new set of servers, paid for through donations to the non-profit Wikimedia Foundation.
- The new architecture has a new database server (suda), with a set of separate systems running Apache, as well as "squids" that cache results (to reduce the load). More details are at m:Wikimedia servers.
- New servers bought as needed, bringing total number to over 120 servers.
- In Tampa (pmtpa), Wikimedia has one 10 Gbit/s and one 1 Gbit/s connection to its US colocation provider PowerMedium and 1 Gbit/s to Time Warner Telecom. The Amsterdam facility (knams) has a single 700 Mbit/s connection to Kennisnet, and the South Korean facility (yaseo) has a single 1 Gbit/s connection to Yahoo. Combined output is roughly 3.0 Gbit/s at peak (as of February 2007).
- Early in Wikipedia's history the database was small: in February 2003 it was about 4 GB, and by August 2003 roughly 16 GB, with uploaded images and media files taking another gigabyte or so. By April 2004 it had grown to about 57 GB, growing at about 1 to 1.4 GB per week, and by October 2004 it had reached about 170 GB. These figures include all languages and support tables but not images and multimedia.
- As of late August 2006, database storage takes about 1.2 terabytes:
- English Wikipedia core database: 163G
- Other Florida-based core databases: 213G
- Other Korea-based core databases: 117G
- Text storage nodes: 44G, 44G, 200G, 149G, 166G, 84G, 84G
- This may include free space inside database storage files, as well as a lot of indexing.
- Uploaded files took up approximately 372 gigabytes as of June 2006, excluding thumbnails.
- Compressed database dumps can be downloaded at http://download.wikipedia.org/.
- Wikipedia uses a very simple markup based on UseModWiki. For more details, see Wikipedia:How to edit a page.
- The short answer is: for simplicity and security.
- And now the longer answer. Wikipedia, and wikis in general, are meant to be edited on the fly. HTML is not easy to use when you simply want to write an article. Creating links gives us a particularly dramatic example. To link to the HTML article using HTML, one would have to type
- <a href="/wiki/HTML">HTML</a>
- Using Wikipedia markup is much easier:
- [[HTML]]
- Then there's security. Different web browsers have bugs that can be exploited via HTML. Malicious users could also open JavaScript popup windows or redirect pages if they had full HTML ability on Wikipedia. Several "experimental" sites that allowed full-HTML editing have suffered such attacks, including a couple of other wikis that allowed arbitrary HTML.
- That's not quite true: some HTML tags do work. HTML table tags were once the only way to create tables, though this can now be done with wiki syntax too. However, there has been some rumbling among the software developers that most HTML tags are deprecated.
- For discussions on wiki syntax for tables, see m:Wiki markup tables and m:MediaWiki User's Guide: Using tables for more recent activity; m:WikiShouldOfferSimplifiedUseOfTables for an old beginning activity.
- Also see Wikipedia:How does one edit a page.
- Wikipedia uses Unicode (specifically the UTF-8 encoding of Unicode), and most browsers can handle it, but font issues mean that more obscure characters may not display for many users (particularly Internet Explorer users, as IE has very poor support for automatically substituting alternative fonts for obscure characters). See the Meta:Help:Special characters page for a detailed discussion of what is generally safe and what isn't. That page will be updated over time as more browsers come to support more features.
- See http://www.unicode.org/help/display_problems.html for instructions on how to enable Unicode support for most platforms.
- Just use TeX! See Meta:Help:Formula.
- Yes, the complete text and editing history of all Wikipedia pages can be downloaded. See Wikipedia:database download.
- Note that downloading the database dumps is much preferred over trying to spider the entire site. Spidering the site will take you much longer, and puts a lot of load on the server (especially if you ignore our robots.txt and spider over billions of combinations of diffs and whatnot). Heavy spidering can lead to your spider, or your IP, being barred with prejudice from access to the site. Legitimate spiders (for instance search engine indexers) are encouraged to wait about a minute between requests, follow the robots.txt, and if possible only work during less loaded hours (2:00-14:00 UTC is the lighter half of the day).
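The etiquette above (honour robots.txt, pace your requests) can be sketched with Python's standard robots.txt parser. The rules shown are an assumed snippet, not Wikipedia's actual robots.txt, and a real crawler would fetch the live file rather than parse a string:

```python
import urllib.robotparser

# A polite-crawler sketch (illustration only, not an official Wikipedia client).
# A real crawler would download https://en.wikipedia.org/robots.txt; here an
# assumed snippet is parsed offline.
robots_txt = """\
User-agent: *
Disallow: /w/
Allow: /wiki/
"""
rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

def allowed(url, agent="ExampleBot/1.0"):
    """True if the parsed robots.txt permits this agent to fetch the URL.
    A polite spider would also sleep about 60 seconds between requests."""
    return rp.can_fetch(agent, url)

print(allowed("https://en.wikipedia.org/wiki/Science"))        # True
print(allowed("https://en.wikipedia.org/w/index.php?diff=1"))  # False
```

Combining this check with a fixed delay between requests (and running during the 2:00-14:00 UTC lull) covers the guidance in the text.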
- The uploaded images and other media files are not currently bundled in an easily downloadable form; if you need one, please contact the developers on the wikitech-l mailing list. Please do not spider the whole site to get images.
- Ed Summers has written WWW::Wikipedia.
- If you're just after retrieving a topic page, the following Perl sample code works. In this case, it retrieves and lists the Main Page, but modifications to the $url variable for other pages should be obvious enough. Once you've got the page source, Perl regular expressions are your friend in finding wiki links.
#!/usr/bin/perl
use strict;
use warnings;
use LWP;

my $browser = LWP::UserAgent->new();
my $url     = "http://en.wikipedia.org/wiki/Wikipedia%3AMain_Page";
my $webdoc  = $browser->request( HTTP::Request->new( 'GET', $url ) );
if ( $webdoc->is_success ) {         # ...then it's loaded the page OK
    print $webdoc->title, "\n\n";    # page title
    print $webdoc->content, "\n\n";  # page text
}
- Note that all (English) Wikipedia topic entries can be accessed using the conventional prefix "http://en.wikipedia.org/wiki/", followed by the topic name (with spaces turned into underscores, and special characters encoded using the standard URL encoding system).
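That URL construction can be sketched in Python; the prefix comes from the text, and `quote()` handles the percent-encoding (encoding ":" as well, to match the form used in the Perl sample):

```python
from urllib.parse import quote

def wikipedia_url(topic, prefix="http://en.wikipedia.org/wiki/"):
    """Build an article URL: spaces become underscores, and remaining
    special characters are percent-encoded."""
    return prefix + quote(topic.replace(" ", "_"), safe="")

print(wikipedia_url("Main Page"))            # http://en.wikipedia.org/wiki/Main_Page
print(wikipedia_url("Wikipedia:Main Page"))  # http://en.wikipedia.org/wiki/Wikipedia%3AMain_Page
```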
- See also m:Machine-friendly wiki interface.
- Cookies are not required to read or edit Wikipedia, but they are required in order to log in and link your edits to a user account.
- When you log in, the wiki will set a temporary session cookie which identifies your login session; this will be expired when your browser exits (or after an inactivity timeout), and is not saved on your hard drive.
- Another cookie will be saved which lists the user name you last logged in under, to make subsequent logins just a teensy bit easier. (Actually two: one with your name, and one with your account's internal ID number; they must match up.) These cookies expire after 30 days. If this worries you, clear your cookies after completing your session.
- If you check the "remember my password" box on the login form, another cookie will be saved with a hash of your password (not the password itself). As long as this remains valid, you can bypass the login step on subsequent visits to the wiki. The cookie expires after 30 days, or is removed if you log out. If this worries you, don't use the option. (You should probably not use it on a public terminal!)
- This could be a result of your cookie, browser cache, or firewall/Internet security settings. Or, to quote Tim Starling (referring to a question about "remembering password across sessions"):
- "The kind of session isn't a network session strictly speaking, it's an HTTP session, managed by PHP's session handling functions. This kind of session works by setting a cookie, just like the "remember password" feature. The difference is that the session cookie has the "discard" attribute set, which means that it is discarded when you close your browser. This is done to prevent others from using your account after you have left the computer.
- The other difference is that PHP sessions store the user ID and other such information on the server side. Only a "session key" is sent to the user. The remember password feature stores all required authentication information in the cookie itself. On our servers, the session information is stored in the notoriously unreliable memcached system. Session information may occasionally be lost or go missing temporarily, causing users to be logged out. The simplest workaround for this is to use the remember password feature, as long as you are not worried about other people using the same computer." from the Wikipedia:Village pump (technical) on May 4, 2005. (italics added).
- In other words: click the "remember me" box when logging in.
- You can, but depending on your needs you might be better served by something else; MediaWiki is big and complex. First see Wiki software for a list of wiki scripts.
- If, after scanning that, you're still sure you want to use MediaWiki, see the MediaWiki web site for details on downloading, installing, and configuring the software.
- Page hit counting is a feature of the MediaWiki software, but it is disabled on the Wikipedia site for performance reasons. Wikipedia is one of the most popular web sites in the world and uses a complex of more than 200 servers to handle the load. Nearly 80% of the load is handled by fewer than 20 front-end cache servers: specialized web servers that are very fast but have very few features. Page access logs are also not maintained at this time, due to the sheer number of hits.
- See Troubleshooting - if it's not on there try the village pump. For help with a particular software task see Wikipedia:Computer help desk.
- To view a low-bandwidth Main Page suitable for wireless users, select the Wikipedia:Main Page alternative (simple layout) link. That main page has a link to the text-only version of the main page. For now, typing the URL directly into your wireless device's browser is the most convenient way to reach articles. If you know a one-word article title, such as Science, you can use that article as an entry point to your favorite topics.
- Also, if you log in, try selecting the Chick skin in your preferences to remove the material at the edges of the screen and give you more space for the articles themselves.
- To make an article title display with a lowercase first letter (as in iPod or de Havilland), add the text {{lowercase}} to the top of the page.
- No, although it's random enough to provide a small sample of articles reliably.
- We have an index on the page table called page_random, which is a random floating point number uniformly distributed on [0, 1). Special:Random chooses a random double-precision floating-point number, and returns the next article with a page_random value higher than the selected random number. Some articles will have a larger gap before them, in the page_random index space, and so will be more likely to be selected. So the actual probability of any given article being selected is in fact itself random.
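A small simulation shows the bias described above. This is a sketch: a sorted Python list and bisect stand in for MySQL's page_random index, and wrapping around to the first article when the draw falls past the last value is an assumption of the illustration:

```python
import bisect
import random

random.seed(1)
# Five articles, each assigned a page_random value at creation time.
page_random = sorted(random.random() for _ in range(5))

def special_random():
    """Pick r uniformly on [0, 1); return the index of the first article
    whose page_random exceeds r (wrapping past the end, an assumption here)."""
    r = random.random()
    return bisect.bisect_right(page_random, r) % len(page_random)

counts = [0] * len(page_random)
for _ in range(100_000):
    counts[special_random()] += 1
# Articles preceded by a larger gap in page_random space win more often:
print(counts)
```

Over 100,000 draws the hit counts are visibly unequal, matching the observation that the selection probability of each article depends on the random gap before it.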
- The page_random value for new articles, and the random value used by Special:Random, is selected by reading two 31-bit words from a Mersenne Twister, which is seeded at each request by PHP's initialisation code using a high-resolution timer and the PID. The words are combined using:
- (mt_rand() * $max + mt_rand()) / $max / $max
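The expression combines two 31-bit words for roughly 62 bits of precision. A Python rendering, assuming $max is mt_getrandmax() + 1 = 2^31 (an assumption; with $max equal to mt_getrandmax() itself, the result could reach slightly above 1):

```python
import random

MAX = 2**31  # assumed $max = mt_getrandmax() + 1, keeping the result below 1

def mt_rand():
    """Stand-in for PHP's mt_rand(): a uniform 31-bit integer."""
    return random.getrandbits(31)

def page_random_value():
    # (mt_rand() * $max + mt_rand()) / $max / $max
    return (mt_rand() * MAX + mt_rand()) / MAX / MAX

values = [page_random_value() for _ in range(1000)]
print(all(0.0 <= v < 1.0 for v in values))  # True
```

The high word contributes the coarse position in [0, 1) and the low word fills in the remaining precision, which is why a single 31-bit draw would not be enough for a dense page_random index.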
- Some old articles had their page_random value reset using MySQL's RAND():
rand_st->seed1 = (rand_st->seed1*3 + rand_st->seed2) % rand_st->max_value;
rand_st->seed2 = (rand_st->seed1 + rand_st->seed2 + 33) % rand_st->max_value;
return (((double) rand_st->seed1) / rand_st->max_value_dbl);