<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>HPSC Smart Lab</title>
    <link>/</link>
      <atom:link href="/index.xml" rel="self" type="application/rss+xml" />
    <description>HPSC Smart Lab</description>
    <generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language>
    <image>
      <url>/img/logohpsclab.png</url>
      <title>HPSC Smart Lab</title>
      <link>/</link>
    </image>
    
    <item>
      <title>Cluster HPC Bluejeans</title>
      <link>/hardware/bluejeans/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/hardware/bluejeans/</guid>
      <description>&lt;p&gt;Bluejeans Hardware Features&lt;br /&gt;
Beowulf is a multi-computer architecture that can be used for parallel and distributed computations.&lt;br /&gt;
Bluejeans (Bj) is the Beowulf HPC cluster of DSA-LabMNCP, composed of 36 working nodes and 4 service nodes connected via a dedicated 1000 Mb/s Ethernet switch.&lt;br /&gt;
Working node features:&lt;br /&gt;
1 Intel Dual Core CPU, 2.6 GHz;&lt;br /&gt;
1 GB RAM;&lt;br /&gt;
1 Ethernet connection, 1000 Mb/s.&lt;br /&gt;
Service node features:&lt;br /&gt;
1 Intel Dual Core CPU, 2.6 GHz;&lt;br /&gt;
2 x 1 GB RAM;&lt;br /&gt;
2 Ethernet connections, 1000 Mb/s.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Bj: front view&lt;/th&gt;
&lt;th&gt;Bj: side view&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src=&#34;/img/bjfront.jpg&#34; alt=&#34;bjfront&#34; /&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src=&#34;/img/hpc.jpg&#34; alt=&#34;bjside&#34; /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The service nodes have been configured to export the following services:&lt;br /&gt;
User Authentication;&lt;br /&gt;
NFS Server;&lt;br /&gt;
Data Storage and Data Backup;&lt;br /&gt;
SSH login server.&lt;br /&gt;
Below you can see the hardware schema of the DSA-LabMNCP/sHPC-Bluejeans:&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center;&#34;&gt;&lt;img src=&#34;/img/Bj-Arch.jpg&#34; alt=&#34;bjArch&#34; /&gt;&lt;/div&gt;
In total, Bj comprises 40 machines with 80 cores available for parallel and distributed computation.&lt;br /&gt;
The data storage server exports to the service nodes 8 hard disks of 2 TB (mirrored), dedicated to simulation output storage and data backup.&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center;&#34;&gt;&lt;img src=&#34;/img/bj-2.jpg&#34; alt=&#34;bj&#34; /&gt;&lt;/div&gt;
The goal of the DSA Bluejeans cluster is to provide computational resources and a distributed environment for DSA research activities; the DSA-LabMNCP team can run batch jobs and distributed computations under the Torque (PBS) resource manager. The Torque scheduler provides the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fault tolerance: additional failure conditions checked and handled; node health check script support;&lt;/li&gt;
&lt;li&gt;Scheduling Interface;&lt;/li&gt;
&lt;li&gt;Scalability;&lt;/li&gt;
&lt;li&gt;Usability;&lt;/li&gt;
&lt;/ul&gt;
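To illustrate how jobs typically reach Torque, the sketch below generates a minimal PBS job script; the queue name, resource request and the my_sim binary are hypothetical, not taken from the Bj configuration:

```python
# Sketch: build a minimal Torque (PBS) job script for a cluster like Bj.
# Queue, resources and command are illustrative assumptions only.
def make_pbs_script(name, nodes, ppn, command):
    lines = [
        "#!/bin/bash",
        "#PBS -N " + name,
        "#PBS -l nodes=%d:ppn=%d" % (nodes, ppn),
        "#PBS -q batch",          # hypothetical queue name
        "cd $PBS_O_WORKDIR",
        command,
    ]
    return "\n".join(lines)

script = make_pbs_script("mpi_test", 2, 2, "mpirun -np 4 ./my_sim")
print(script)
```

The resulting file would then be submitted with qsub and scheduled by Torque onto the working nodes.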

&lt;p&gt;You can also see the current runtime performance of Bj at this link.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Genesis GE-i940 Tesla</title>
      <link>/hardware/genesis/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/hardware/genesis/</guid>
      <description>&lt;p&gt;On September 28, 2009, the workstation Genesis GE-i940 Tesla, based on both GPGPU* and nVidia/CUDA** technologies, was installed at DSA/LabMNCP.&lt;/p&gt;

&lt;p&gt;It is a testbed for developing advanced simulation in the following research field:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Stochastic simulation;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Molecular Dynamics;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Atmospheric and climate modeling;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Weather forecast investigation;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&#34;color: blue;&#34;&gt;Grid/Cloud Hybrid Virtualization;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;*&lt;/span&gt;&lt;br /&gt;
“GPGPU stands for General-Purpose computation on Graphics Processing Units, also known as GPU Computing. Graphics Processing Units (GPUs) are high-performance many-core processors capable of very high computation and data throughput.” See more &lt;a href=&#34;https://www.ibiblio.org/&#34; target=&#34;_blank&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;**&lt;/span&gt;&lt;br /&gt;
“NVIDIA® CUDA™ is a general purpose parallel computing architecture that leverages the parallel compute engine in NVIDIA graphics processing units (GPUs) to solve many complex computational problems in a fraction of the time required on a CPU.” See more &lt;a href=&#34;https://developer.nvidia.com/about-cuda&#34; target=&#34;_blank&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;Hardware&lt;/span&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src=&#34;/img/ge.jpg&#34; alt=&#34;ge-image&#34; /&gt;&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Mainboard&lt;/td&gt;
&lt;td&gt;Asus X58/ICH10R, 3 PCI-Express x16, 6 SATA, 2 SAS, 3+6 USB&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;i7-940, 2.93 GHz (133 MHz FSB), Quad Core, 8 MB cache&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;6 x 2 GB DDR3 1333 DIMM&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Hard Disk&lt;/td&gt;
&lt;td&gt;2 x 500 GB SATA, 16 MB cache, 7200 RPM&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;GPU&lt;/td&gt;
&lt;td&gt;1 x Quadro FX 5800, 4 GB RAM&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;2 x Tesla C1060, 4 GB RAM each&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;Software&lt;/span&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OS:&lt;/td&gt;
&lt;td&gt;&lt;a href=&#34;https://www.centos.org/&#34; target=&#34;_blank&#34;&gt;GNU/Linux CentOs 5.3 64 Bit&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;Driver:&lt;/td&gt;
&lt;td&gt;&lt;a href=&#34;https://www.nvidia.com/object/thankyou_linux.html?url=/compute/cuda/2_1/drivers/NVIDIA-Linux-x86_64-180.22-pkg2.run&#34; target=&#34;_blank&#34;&gt;nVidia Cuda 180.22 Linux 64bit&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;VMware:&lt;/td&gt;
&lt;td&gt;&lt;a href=&#34;https://www.vmware.com/&#34; target=&#34;_blank&#34;&gt;VMware-server-2.0.2&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;OUTPUT of First Test:&lt;/span&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;

&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Serial simulation (ms)&lt;/td&gt;
&lt;td&gt;GPU (ms)&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for malloc&lt;/td&gt;
&lt;td&gt;0.02&lt;/td&gt;
&lt;td&gt;175.21&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for RndGnr&lt;/td&gt;
&lt;td&gt;51430.92&lt;/td&gt;
&lt;td&gt;2283.19&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for init&lt;/td&gt;
&lt;td&gt;275.48&lt;/td&gt;
&lt;td&gt;0.31&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for computing&lt;/td&gt;
&lt;td&gt;391391.12&lt;/td&gt;
&lt;td&gt;329.19&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for I/O&lt;/td&gt;
&lt;td&gt;56822.77&lt;/td&gt;
&lt;td&gt;64740.54&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;execution time for GPU/CPU&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;198.43&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
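From the timing table above, the per-step speedups can be computed directly (all values in ms; note that I/O is the one step where the GPU path is actually slower than the serial run):

```python
# Speedup of the GPU run over the serial run, per step, from the table above.
serial = {"RndGnr": 51430.92, "init": 275.48, "computing": 391391.12, "I/O": 56822.77}
gpu    = {"RndGnr": 2283.19,  "init": 0.31,   "computing": 329.19,    "I/O": 64740.54}

for step in serial:
    # A ratio above 1 means the GPU path is faster for that step.
    print("%-9s speedup: %.1fx" % (step, serial[step] / gpu[step]))
```

The random-number generation and computing steps dominate the serial run, which is where the GPU gains come from; the extra host/device transfers show up as the slower I/O step.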

&lt;p&gt;&lt;span style=&#34;font-size: 18px; color: blue;&#34;&gt;Output using GPU:&lt;/span&gt;&lt;/p&gt;

&lt;pre&gt;
device 0           : Quadro FX 5800
device 1           : Tesla C1060
device 2           : Tesla C1060

Selected device: 2

device 2           : Tesla C1060
major/minor        : 1.3 compute capability
Total global mem   : -262144 bytes
Shared block mem   : 16384 bytes
RegsPerBlock       : 16384
WarpSize           : 32
MaxThreadsPerBlock : 512
TotalConstMem      : 65536 bytes
ClockRate          : 1296000 (kHz)
deviceOverlap      : 1
MultiProcessorCount: 30

Using 1048576 particles
100 time steps
&lt;/pre&gt;
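Output in this "name : value" form is easy to post-process. As an illustrative sketch (not part of the test program above), the reported properties can be read into a dictionary; the report string here is an abridged copy of the listing:

```python
# Sketch: parse deviceQuery-style "name : value" lines into a dictionary.
report = """WarpSize           : 32
MaxThreadsPerBlock : 512
TotalConstMem      : 65536 bytes
MultiProcessorCount: 30"""

props = {}
for line in report.splitlines():
    # Split on the first colon only; values may contain further text.
    name, _, value = line.partition(":")
    props[name.strip()] = value.strip()

print(props["MaxThreadsPerBlock"])
```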
</description>
    </item>
    
    <item>
      <title>GreenJeans</title>
      <link>/hardware/greenjeans/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/hardware/greenjeans/</guid>
      <description>&lt;p&gt;GreenJeans is the new experimental HPC Beowulf cluster of DSA, built with the aim of creating an economically and environmentally sustainable solution for the scientific HPC field.&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center;&#34;&gt;&lt;img src=&#34;/img/logogreen.png&#34; alt=&#34;logogreen&#34; /&gt;&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;The making of GreenJeans&amp;hellip;&lt;/span&gt;&lt;br /&gt;
&lt;img src=&#34;/img/gj.gif&#34; alt=&#34;gj&#34; /&gt;&lt;br /&gt;
On GreenJeans we have installed the following software:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://developer.nvidia.com/cuda-downloads&#34; target=&#34;_blank&#34;&gt;CUDA&lt;/a&gt;(Driver / Toolkit / SDK)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://www.oracle.com/technetwork/java/javase/downloads/index.html&#34; target=&#34;_blank&#34;&gt;SDK Java Sun&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MPICH4 V1&lt;/li&gt;
&lt;li&gt;MPICH4 V2&lt;/li&gt;
&lt;li&gt;MPI2-VMI&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://developer.nvidia.com/cuda-downloads&#34; target=&#34;_blank&#34;&gt;Eucalyptus&lt;/a&gt;(KVM/QEMU Hypervisor)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://greenjeans.uniparthenope.it/ganglia&#34; target=&#34;_blank&#34;&gt;Ganglia&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.clusterresources.com/products/torque-resource-manager.php&#34; target=&#34;_blank&#34;&gt;Torque&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every work node of GreenJeans has an nVidia GeForce GTX 560 Ti installed:&lt;br /&gt;
&lt;pre&gt;
Device 0:                                      “GeForce GTX 560 Ti”
CUDA Driver Version:                           4.0
CUDA Runtime Version:                          4.0
CUDA Capability Major/Minor version number:    2.1
Total amount of global memory:                 1072889856 bytes
Multiprocessors x Cores/MP = Cores:            8 (MP) x 48 (Cores/MP) = 384 (Cores)
Total amount of constant memory:               65536 bytes
Total amount of shared memory per block:       49152 bytes
Total number of registers available per block: 32768
Warp size:                                     32
Maximum number of threads per block:           1024
Maximum sizes of each dimension of a block:    1024 x 1024 x 64
Maximum sizes of each dimension of a grid:     65535 x 65535 x 65535
Maximum memory pitch:                          2147483647 bytes
Texture alignment:                             512 bytes
Clock rate:                                    1.64 GHz
Concurrent copy and execution:                 Yes
Run time limit on kernels:                     No
Integrated:                                    No
Support host page-locked memory mapping:       Yes
Compute mode:                                  Default (multiple host threads can use this device simultaneously)
Concurrent kernel execution:                   Yes
Device has ECC support enabled:                No
Device is using TCC driver mode:               No
&lt;/pre&gt;
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce GTX 560 Ti&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>jGrADS: a Java Wrapper for the Grid Analysis and Display System (GrADS)</title>
      <link>/download/jgrads/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/download/jgrads/</guid>
      <description>&lt;p&gt;The Grid Analysis and Display System (GrADS) is an interactive desktop tool that is used for easy access, manipulation, and visualization of earth science data. The format of the data may be either binary, GRIB, NetCDF, or HDF-SDS (Scientific Data Sets). GrADS has been implemented worldwide on a variety of commonly used operating systems and is freely distributed over the Internet. For more information, follow the link to the GrADS website: &lt;a href=&#34;http://www.iges.org/grads/&#34; target=&#34;_blank&#34;&gt;http://www.iges.org/grads/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;GrADS is widely used by the computational environmental scientist community thanks to the scripting language support (&lt;a href=&#34;http://www.iges.org/grads/gadoc/users.html&#34; target=&#34;_blank&#34;&gt;GrADS script&lt;/a&gt;) and to the external plugin feature (&lt;a href=&#34;http://opengrads.org/&#34; target=&#34;_blank&#34;&gt;Open GrADS&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The DSA-LMNCP contribution to the GrADS world is a Java wrapper enabling Java applications to use GrADS as a back end for data analysis and rendering.&lt;/p&gt;
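The wrapping idea can be sketched in a few lines (jGrADS itself is Java): generate a small GrADS script and hand it to the grads binary in batch mode. The sketch below only builds the command script; the data file and variable names are illustrative, not part of jGrADS:

```python
# Sketch: build a GrADS batch script, the kind of text a wrapper feeds
# to "grads -blc script.gs". File and variable names are hypothetical.
gs_lines = [
    "'open model.ctl'",      # open a GrADS data descriptor file
    "'set gxout shaded'",    # shaded-contour rendering
    "'d t2m'",               # display a (hypothetical) variable
    "'printim t2m.png'",     # write the plot to an image file
    "'quit'",
]
script = "\n".join(gs_lines)
print(script)
```

A wrapper process would write this text to a .gs file, spawn GrADS in batch mode, and collect the rendered image, which is essentially what a Java wrap does through the process API.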

&lt;p&gt;The project is really a “work in progress” and open to any kind of external contribution.&lt;/p&gt;

&lt;p&gt;The snapshot archive is downloadable here.&lt;/p&gt;

&lt;p&gt;&lt;a href=&#34;/zip/035_jgrads.zip&#34;&gt;Download jGrADS&lt;/a&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Marzie Raei</title>
      <link>/staff/marzie-raei/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/staff/marzie-raei/</guid>
      <description>&lt;p&gt;Ph.D. Student in Applied Mathematics, Malek Ashtar University of Technology, Isfahan, Iran.&lt;/p&gt;

&lt;p&gt;The main topic of my research is meshless methods based on radial basis functions. My goal is to modify, develop, and accelerate meshless methods using parallel procedures and fast algorithms.&lt;/p&gt;

&lt;p&gt;&lt;span style=&#34;font-size: 22px; color: blue;&#34;&gt; &lt;strong&gt;Research interests:&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Numerical Analysis&lt;/li&gt;
&lt;li&gt;Meshless Methods (Local and strong forms)&lt;/li&gt;
&lt;li&gt;Kernel Based Approximation techniques&lt;/li&gt;
&lt;li&gt;Fractional Differential Equations&lt;/li&gt;
&lt;li&gt;Adaptive Computational Techniques&lt;/li&gt;
&lt;li&gt;Fast Numerical Methods&lt;/li&gt;
&lt;li&gt;High-performance scientific computing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;span style=&#34;font-size: 22px; color: blue;&#34;&gt; &lt;strong&gt;Papers&lt;/strong&gt;&lt;/span&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An adaptive sparse meshless technique in greedy algorithm framework to simulate an anomalous mobile-immobile transport model (submitted)&lt;/li&gt;
&lt;li&gt;H. R. Ghehsareh, M. Raei, A. Zaghian, Application of meshless local Petrov-Galerkin technique to simulate two-dimensional time fractional Tricomi-type problem (under review)&lt;/li&gt;
&lt;li&gt;H. R. Ghehsareh, M. Raei, A. Zaghian, Numerical simulation of a modified anomalous diffusion process with nonlinear source term by a local weak form meshless method, Engineering Analysis with Boundary Elements, 98 (2019) 64-76.&lt;/li&gt;
&lt;li&gt;H. R. Ghehsareh, A. Zaghian, M. Raei, A local weak form meshless method to simulate a variable order time-fractional mobile–immobile transport model, Engineering Analysis with Boundary Elements, 90 (2018) 63-75.&lt;/li&gt;
&lt;/ul&gt;
</description>
    </item>
    
    <item>
      <title>Instrument Service and Abstract Instrument Framework</title>
      <link>/download/quadro/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/download/quadro/</guid>
      <description>&lt;div style=&#34;text-align: center; color: blue;&#34;&gt;Software developed:&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;InstrumentService and AVL&lt;/li&gt;
&lt;li&gt;AbstractInstrumentFramework and AVL**&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;** AVL requires the installation of Ascom Platforms for the use of the telescope.&lt;/p&gt;

&lt;p&gt;The goal of this thesis project is the creation of a software system for the secure sharing and aggregation of geographically distributed data acquisition instruments for engineering and scientific applications.
To this end, web-service-based computational grid technology was chosen, using the Globus Toolkit 4 software.&lt;/p&gt;

&lt;div style=&#34;text-align: center; color: blue;&#34;&gt;ABSTRACT INSTRUMENT&lt;/div&gt;

&lt;p&gt;Using grid technology to control acquisition instruments and retrieve their data requires a standard interface methodology across the different types of hardware.
During the thesis project, the AbstractInstrument framework (AIF) was implemented: it virtualizes instruments through standard interfaces that provide a high level of interaction, common to all instruments.
Thanks to this approach, any instrument can be handled through a high-level device driver.&lt;/p&gt;

&lt;div style=&#34;text-align: center; color: blue;&#34;&gt;INSTRUMENT SERVICE&lt;/div&gt;

&lt;p&gt;To manage a virtualized instrument through a computational grid, the secure grid web service Instrument Service (IS) was developed using the Globus Toolkit version 4 (GT4), produced by the Mathematics and Computer Science Division of Argonne National Laboratory (MCS/ANL) and the Computation Institute of the University of Chicago (UOC-CI), scientific institutions of global relevance with which collaboration is ongoing. Through the functionality offered by the AIF, the IS allows access to, control of, and sharing of instruments across the virtualized Grid.
The IS can interface any instrument to the grid by automatically publishing on the Index Service, a standard component of GT4, the metadata relating to each instrument and, eventually, the current measurement values acquired by its sensors.
This feature, fully configurable in terms of the information published, allows the Resource Broker Service (RBS), a component developed at the Department of Applied Sciences, to search for instruments, as well as other grid resources, through a query written in the ClassAd resource description language, used by Condor and gLite and considered the de facto standard for this type of application.&lt;/p&gt;
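As a rough illustration of the matchmaking such a query performs, the sketch below publishes a small attribute set for an instrument and matches it against a requirement, ClassAd-style; the attribute names are hypothetical, not the actual schema used by the RBS:

```python
# Sketch: instrument metadata (as an Index Service might publish it) and a
# ClassAd-style equality match; attribute names are illustrative assumptions.
ad = {"InstrumentType": "telescope", "Site": "DSA", "Online": True}

def matches(ad, requirements):
    # True when every required attribute is present with the required value.
    return all(ad.get(key) == value for key, value in requirements.items())

print(matches(ad, {"InstrumentType": "telescope", "Online": True}))
```

Real ClassAds support full boolean expressions and ranks, not just equality, but the idea of matching a published ad against a requirement is the same.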

&lt;div style=&#34;text-align: center; color: blue;&#34;&gt;AVL&lt;/div&gt;

&lt;p&gt;To show what is actually possible using the components developed, a virtual laboratory dedicated to astronomical applications (AVL) was realized. AVL currently supports robotic telescopes and weather stations, which can be used in computational grid applications, also integrating other components such as services for the distribution of multidimensional environmental data.&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center;&#34;&gt;
&lt;a href=&#34;/zip/InstrumentService.zip&#34;&gt;InstrumentService.zip&lt;/a&gt;&lt;br /&gt;
&lt;a href=&#34;/zip/AbstractInstrument.zip&#34;&gt;AbstractInstrument.zip&lt;/a&gt;&lt;/div&gt;&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>SQL Handler</title>
      <link>/download/sqlhandler/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/download/sqlhandler/</guid>
      <description>&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;SQLH and Hyrax&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Prerequisites: libpq.&lt;/p&gt;

&lt;p&gt;The software developed is a Hyrax plugin, useful for adding SQL query capabilities to the BES server. It consists of a fully functional SQL handler that you can customize and extend.
In this ALPHA release you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the OLFS to set constraints&lt;/li&gt;
&lt;li&gt;Set complex SQL queries (joins, unions) in the dataset file&lt;/li&gt;
&lt;li&gt;Set constraints in the dataset&lt;/li&gt;
&lt;li&gt;Set database password access in the dataset file OR via constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;BES Software&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;BES is a high-performance back-end server software framework that allows data providers more flexibility in providing end users views of their data. The current OPeNDAP data objects (DAS, DDS, and DataDDS) are still supported, but now data providers can add new data views, provide new functionality, and new features to their end users through the BES modular design. Providers can add new data handlers, new data objects/views, the ability to define views with constraints and aggregation, the ability to add reporting mechanisms, initialization hooks, and more.&lt;/p&gt;

&lt;p&gt;OPeNDAP provides the tools to build these new modules that can then be dynamically loaded into the BES.&lt;/p&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;Hyrax&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Hyrax is the next generation server from OPeNDAP. It utilizes a modular design that employs a lightweight Java servlet (aka the OLFS) to provide the public-accessible client interface, and a back-end daemon, the BES, to handle the heavy lifting. The BES uses the same handlers that are used with Server3 (also known as the CGI Server) but loads them at run time.&lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The servlet architecture is faster, more robust, and more secure than CGI-invoked Perl scripts.&lt;/li&gt;
&lt;li&gt;A single installation can handle multiple data representations (HDF4, HDF5, NetCDF, etc.)&lt;/li&gt;
&lt;li&gt;THREDDS catalog functionality.&lt;/li&gt;
&lt;li&gt;A prototype SOAP interface for OPeNDAP data services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;OLFS: The Hyrax Front End&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;The OPeNDAP Lightweight Frontend Servlet (OLFS) provides the public-accessible client interface for Hyrax. The OLFS communicates with the Back End Server (BES) to provide data and catalog services to clients. The OLFS implements the DAP2 protocol and supports some of the new DAP4 features. We hope that other groups will develop new front end modules that will implement other protocols.&lt;/p&gt;

&lt;p&gt;New Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides THREDDS Catalogs responses&lt;/li&gt;
&lt;li&gt;Prototype SOAP interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;SQLH: SQL Handler&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;It’s an SQL handler used to connect databases to an OPeNDAP Hyrax (BES) server. Written in C++, it uses libpq to query the database. It implements many interfaces that make it easy to modify and to use with other ODBC libraries.
It is composed of three basic components:&lt;/p&gt;

&lt;p&gt;The &lt;span style=&#34;color: blue;&#34;&gt;SQLTable&lt;/span&gt; class, used to load the requested file as one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A DAS object&lt;/li&gt;
&lt;li&gt;A DDS object&lt;/li&gt;
&lt;li&gt;A DataDDS object (a flat SQLSequence of strings)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;The SQLFilterC(onstraint)E(xpression)&lt;/span&gt;&lt;br /&gt;
Used to parse the selected dataset and/or the constraint expressions specified by the user (via the OLFS).
It removes ALL the constraint expressions from the URL and uses them to build a filtered SQL query,
so the filter operation is done by the SQL server and no constraints are passed to the BES.
You can easily change this behaviour.&lt;/p&gt;
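The effect of folding user constraints into the query can be sketched as a small query builder: given a table, the requested columns, and the user constraints, the filtering clause is pushed into the SQL sent to the database. Names and the constraint syntax below are illustrative, not the handler's real API:

```python
# Sketch: push user constraints into the SQL sent to the database,
# so filtering happens server-side (as SQLFilterCE arranges for Hyrax).
def build_query(table, columns, constraints):
    sql = "SELECT %s FROM %s" % (", ".join(columns), table)
    if constraints:
        # Each constraint is a ready-made SQL predicate in this sketch.
        sql += " WHERE " + " AND ".join(constraints)
    return sql

print(build_query("obs", ["time", "temp"], ["temp = 20"]))
```

With the filter applied at the SQL server, only the matching rows travel back through the BES, which is the point of the design described above.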

&lt;p&gt;&lt;span style=&#34;color: blue;&#34;&gt;The SQLConnector&lt;/span&gt;&lt;br /&gt;
It’s a component used to manage data transfer from the database (read-only). It is composed of the following two components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The SQLResultSet
Specifies a common interface used to get values from the accessed database.
Its methods are used in the SQLTable and SQLFilterCE.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The SQLConnection
Specifies a common interface used to open and close a connection to the accessed database.
Its methods are used in the SQLTable and SQLFilterCE using SQLConnector.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;

&lt;p&gt;&lt;div style=&#34;text-align: center&#34;&gt;&lt;a href=&#34;/zip/SQLHandler.tar.gz&#34;&gt;SQLHandler.tar.gz&lt;/a&gt;&lt;br /&gt;
&lt;a href=&#34;/zip/SQLH-UMLs.zip&#34;&gt;SQLH-UMLs.zip&lt;/a&gt;&lt;/div&gt;&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>
