The MetaCentrum project, an activity of the CESNET association, operates and manages a distributed computing infrastructure consisting of computing and storage resources owned by CESNET as well as those of cooperating academic centers within the Czech Republic. MetaCentrum is responsible for building the National Grid and for its integration with related international activities, especially in the European Union. It is actively involved in many international Grid projects, such as EGI-Engage, EGI Federated Cloud, ELIXIR-Excelerate, INDIGO DataCloud, and AARC.
The long-term goal of the MetaCentrum project is the operation and coordination of a distributed computing and data storage infrastructure, accompanied by an appropriate support environment and the continual expansion of the available computational capacity. The main aim of the project is to constitute a virtual computer that allows effective utilization of the installed facilities within the supercomputing project and the solution of tasks whose memory and/or CPU requirements exceed the capabilities of individual supercomputing centers. The MetaCentrum structure is flexible enough that any academic institution within the Czech Republic can fully integrate its computing capacities into the current MetaCentrum infrastructure, gaining significantly higher computing power for its research. Despite differences in hardware vendors, operating systems, and the physical location of the computers, MetaCentrum's users share a common authentication trust domain: one login and one password give access to all machines involved in MetaCentrum.
Establish a fully-fledged National Grid Initiative (NGI) in the Czech Republic, connected to the international environment. MetaCentrum officially represents the interests of the national Grid community towards other national and international bodies. Ultimately, MetaCentrum aims to provide computational power that would not be attainable without a Grid infrastructure.
Cooperate closely with international projects, in particular in the field of Grid infrastructure, for example EGEE III, EuAsiaGrid, or EGI_DS, in which the CESNET association is the coordinating partner.
Support the research projects of the European Research Area (ERA) and other research projects in many disciplines, enabling them to easily share a range of national resources (compute, storage, data, instruments) and easing their efforts to attain a global dimension. Provide coherent electronic access for researchers to all computational and data-based resources and facilities required to carry out their research, independent of resource or researcher location.
At the same time, the MetaCentrum activity carries out the research and development necessary to ensure the optimal functionality, security, and performance of the infrastructure. Concrete activities for the current year include: virtualization of the physical infrastructure (computing resources, storage capacity, and computer networks); expansion of the computational and disk capacity of the distributed PC cluster; development of the MetaCentrum infrastructure; continued integration into international Grid projects; provision of extensive information services, including dynamic ones, through the MetaCentrum portal; and further development of the authorization infrastructure, middleware, etc.
For more information about MetaCentrum activities, please see the annual reports of the research project.
- MetaCentrum VO (MetaCentrum virtual organisation) operates and manages the computing capacities in the current MetaCentrum infrastructure (AV, JČU, MU, MZLU, UK, VUT, ZČU).
- Other resources (FZÚ AV, CESNET) are connected to the EGI grid through the AUGER, VOCE, and other virtual organisations.
- The Czech NGI coordinates access to and the interconnection of newly formed computing infrastructure projects (e.g. IT4Innovations, CERIT-SC, ...).
MetaCentrum was established in 1996; since 1999 it has been one of the strategic projects of CESNET.
MetaCentrum was founded in response to a particular situation in the academic community of the Czech Republic at the beginning of 1996. When the first high-performance computing centers were to be established in 1994, under the auspices of the Universities' Development Fund of the Ministry of Education of the Czech Republic within its pilot project, the solution called fragmentation by many of its opponents won: it was decided to support three, and later five, high-performance centers at different universities of the Czech Republic, instead of buying and installing just one large(r) high-performance supercomputer.
Many good reasons supported this decision; the most important are listed below:
- While the raw computing power of individual computers can easily be shared through the computer network, the same is not true of the knowledge of how to use them efficiently. A single center would without doubt concentrate that knowledge in one place (which is not bad in itself), but at the same time it would reduce the chances of individual end users to understand the installed hardware and software, in direct proportion to their physical distance from the center.
- One center means no competition. End users are not able to compare services, and the center has no partner with whom to discuss new development plans. Together this leads to diminishing interest in external users (the center gradually becomes able to keep itself busy), gradual stagnation, and loss of motivation: a sole center will either get all the funding, or supercomputing facilities will cease to exist within the Czech Republic.
- Supercomputing covers very broad and distinct scientific areas, and nowhere in the Czech Republic are scientists from even the majority of these fields concentrated in one place (within a single institution). Computers installed at different places open a path to specialization, in hardware (different computer architectures), in development environments, and especially in application software (computational chemistry and physics; technical areas such as mechanics and fluid dynamics; symbolic and general numerical computational systems; and many others).
- "Natural" place for installation of the one supercomputer would be Prague, but this was strongly opposed by representatives from other country regions. High concentration of universities and other scientific and research facilities in Prague unfortunately leads to some belittling of out of Prague institutions and their requirements (even worse, it is easy to rationalize this belittling, at least to some extent); also, the support for remote (far away) access is reduced -- everything is near in Prague when compared to the distance to other university and academic cities within Czech Republic.
- A budget for running costs was not available within the original pilot project, and no single university within the Czech Republic was able to finance a really large supercomputing center on its own. Last but not least, the large universities themselves took the project as a long-awaited opportunity to obtain funding for medium- and high-performance computing facilities.
All this led to the decision to buy three POWER Challenge computers from SGI and to install them at the Institute of Computer Science of Charles University in Prague (ICS CU), at the Center of Computational and Information Services of the Technical University in Brno (CCIS TU), and at the Institute of Computer Science of Masaryk University in Brno (ICS MU). At the same time, two additional sites were selected to have new hardware installed in the following year (1995): the University of West Bohemia in Pilsen (WBU) and the Computer Center of the Czech Technical University in Prague (CC CTU).
At the beginning of 1996, five high-performance computers from SGI, Digital (Pilsen), and IBM (Czech Technical University) were available at the five Czech universities mentioned above. All of these computers used powerful scalar processors (none used a vector processor), and they covered both main architectural designs: the SGI and Digital systems were representatives of SMP (Symmetric Multiprocessing), meaning that a single copy of the operating system runs on all processors and the memory is shared among them in hardware, while the IBM SP2 computer was a typical representative of a DM (Distributed Memory) system, i.e. a system where each processor runs its own copy of the operating system and has its own memory (the total memory of the computer is thus distributed among the individual processors). The major application software packages differed, too: computational chemistry and physics had high priority at ICS MU and ICS CU, with some presence at CC CTU, while technically oriented packages were available at WBU, CC CTU, and TU, and the Fluent program was also available at ICS CU. Access to the Matlab system was available primarily at CTU, WBU, and MU, and other differences existed as well. On the other hand, all these computers were made accessible to all users from the academic community of the Czech Republic, i.e. to all professors and scientists at the universities and at the Academy of Sciences, and to a large extent to all university students (not only those of the five universities mentioned).
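The programming consequence of the SMP versus DM distinction can be sketched in a few lines. This is only an illustrative model (a parallel sum in Python, using threads for the shared-memory style and processes with explicit message passing for the distributed-memory style); it is not code that ran on the machines described here.

```python
import threading
import multiprocessing

# SMP style: all workers see the same memory, so they can update a
# shared structure directly (a lock prevents races on the update).
def smp_sum(values, n_workers=4):
    total = [0]
    lock = threading.Lock()

    def worker(chunk):
        s = sum(chunk)
        with lock:
            total[0] += s          # direct write into shared memory

    chunks = [values[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

# DM style: each worker has private memory; partial results must be
# sent back over an explicit communication channel (here, a Pipe).
def dm_worker(chunk, conn):
    conn.send(sum(chunk))          # explicit message, no shared state
    conn.close()

def dm_sum(values, n_workers=4):
    chunks = [values[i::n_workers] for i in range(n_workers)]
    pipes = [multiprocessing.Pipe() for _ in chunks]
    procs = [multiprocessing.Process(target=dm_worker, args=(c, child))
             for c, (_, child) in zip(chunks, pipes)]
    for p in procs:
        p.start()
    result = sum(parent.recv() for parent, _ in pipes)
    for p in procs:
        p.join()
    return result

if __name__ == "__main__":
    data = list(range(1000))
    assert smp_sum(data) == dm_sum(data) == sum(data)
```

The same computation is expressed both ways; the difference is where the data lives and whether combining partial results is a memory write or a message.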
In 1996 the Ministry of Education launched a new program called TEN-34 CZ to support high-performance networks and their applications. This program opened the opportunity for further qualitative development of the high-performance centers founded and developed in the previous years. A project named MetaCenter was approved and financed, with the goal of creating a computational Grid whose nodes would be the computers mentioned above.
In 1999 the MetaCentrum project became one of the strategic projects of the CESNET association; this incorporation anticipated the large expansion of Grids and their convergence with the activities of organizations operating high-speed network infrastructure, since Grids are a primary source of applications requiring extremely fast, low-latency networks spanning whole continents. The basic aims of MetaCentrum remained the same; moreover, the integration and close collaboration of the individual nodes was deepened. Two of the original five "founding nodes" of MetaCentrum withdrew from the project: CC CTU Prague, whose IBM SP system was technologically and conceptually different from the rest of the MetaCentrum systems (clusters were not yet used as high-performance computational systems at that time) and did not need integration with other systems so much; and CCIS TU Brno, where the management stopped supporting similar activities. On the other hand, VŠB-Technical University of Ostrava became a new member of MetaCentrum. This allowed the integration of a new architecture, the IBM SP2 computer, making it possible to test new approaches and methods for integrating a new node into the MetaCentrum environment. Moreover, the node at VŠB-Technical University of Ostrava is the only MetaCentrum node whose capacities are "hidden" behind a local firewall, which allows us to verify the operability of the integrated MetaCentrum environment even under these difficult conditions. In 1999 we also finished one phase of the construction of the MetaCentrum security infrastructure by switching to Kerberos 5 in the Heimdal implementation (in whose development we actively participate). At the same time, the question of data backup in MetaCentrum was solved by purchasing and subsequently operating a tape robot.
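The common authentication trust domain mentioned above corresponds to a single Kerberos realm shared by all participating machines, typically configured on each node through a file like `/etc/krb5.conf`. The sketch below shows the general shape of such a configuration; the realm and host names are invented for illustration and are not MetaCentrum's actual settings.

```ini
[libdefaults]
    ; every node points at the same realm, so one login and one
    ; password work on all machines (illustrative realm name)
    default_realm = EXAMPLE-GRID.CZ

[realms]
    EXAMPLE-GRID.CZ = {
        kdc = kdc1.example-grid.cz
        kdc = kdc2.example-grid.cz
        admin_server = kdc1.example-grid.cz
    }

[domain_realm]
    .example-grid.cz = EXAMPLE-GRID.CZ
```

With such a configuration in place, a user obtains a ticket once (e.g. with `kinit`) and is then accepted by every service in the realm without typing the password again.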
During 2000 and 2001 there was a significant shift in MetaCentrum's technical facilities. It resulted from restricted financial resources (apart from ICS CU, there were no major investments in computational resources) combined with the continuing trend towards so-called "commodity" solutions even in the supercomputing area, here manifested as an orientation towards clusters of computers, especially with the IA32 CPU architecture and the Linux operating system. The main investment was therefore a cluster with 128 Pentium III CPUs (one half running at 700 MHz, the other at 1 GHz) and 64 GB of internal memory. In accordance with the basic idea of MetaCentrum, not to build a single node but to utilize the possibilities of high-speed networks to the maximum, the cluster is distributed among Pilsen (32 CPUs), Prague (32 CPUs), and Brno (64 CPUs). Moreover, the National Centre for Biomolecular Research acquired (based on highly positive experience with the MetaCentrum cluster) its own 32-CPU cluster, fully compatible with the MetaCentrum systems, and there is a substantial effort in Pilsen to obtain a similar system that would be partially integrated into MetaCentrum. Apart from the increase in computational power, the work in 2000 and 2001 focused especially on the security area, the full integration of the clusters (including the development of the necessary software), the establishment of the MetaCentrum portal as the unified information gateway for all users and administrators of MetaCentrum, the development of the Perun system for the administration of user accounts, and the progressive change to the new batch queueing system PBS.
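In a PBS batch system, a job is an ordinary shell script whose scheduler directives sit in `#PBS` comment lines at the top. The following is a minimal illustrative template; the job name and resource values are made up, not MetaCentrum defaults.

```shell
#!/bin/bash
# Illustrative PBS job script; resource values are examples only.
#PBS -N example_job
#PBS -l nodes=1:ppn=4
#PBS -l mem=2gb
#PBS -l walltime=01:00:00
#PBS -j oe

# PBS starts the job in the home directory; change to the directory
# the job was submitted from (falling back to "." so the script also
# runs outside PBS for testing).
cd "${PBS_O_WORKDIR:-.}"

echo "Job running on $(hostname)"
```

Such a script is submitted with `qsub`, and `qstat` shows its state in the queue; the `-j oe` directive merges standard output and standard error into one file.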
The research activities of MetaCentrum also started to be significant at the international level. MetaCentrum employees actively participated in the DataGrid project of the 5th EU Framework Programme, and this experience subsequently enabled our participation in further international projects.
The CESNET association of legal entities was founded in 1996 by all the universities of the Czech Republic and the Czech Academy of Sciences. Its primary goal is to operate and develop the academic backbone network of the Czech Republic; the current generation of this network is called CESNET2 and offers bandwidths of tens of gigabits per second. The main objectives of the CESNET association are the following: the operation and development of a high-speed national computer network for science, research, and educational purposes; research and development of advanced network technologies and their applications; public dissemination of information in the area of modern network technologies; and the operation of the national grid infrastructure. CESNET is the long-term Czech academic network provider and a participant in relevant international projects. Its most important international activities include participation in the DANTE project, membership in the TERENA association, international partnership in the Internet2 consortium, participation in the European projects GN2 (and the previous GEANT-related projects QUANTUM/TEN-155 and TEN-34) and EGEE III (and the previous projects DataGrid, EGEE, and EGEE II), EGI_DS, and participation in many other international projects.