Conversion from Magnetic North to True North
Here's the web site for calculating True North from a given coordinate and date:
http://www.ga.gov.au/oracle/geomag/agrfform.jsp
Quick recap of single-line commands to replace strings in heaps of files
I was looking for a fast way to replace strings across all related Python script files, recursively, from the terminal.
Here's the standard one:
$ find . -type f -name "*.py" -print | xargs sed -i 's/foo/bar/g'
xargs combines the one-file-per-line output of find and runs the given command with multiple arguments, multiple times if necessary to stay under the maximum command-line length. In this case we combine xargs with sed.
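One caveat: piping filenames this way breaks on paths containing spaces or newlines. A safer variant, assuming GNU find and xargs with their null-delimiter options:
$ find . -type f -name "*.py" -print0 | xargs -0 sed -i 's/foo/bar/g'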
Here's a variation:
$ find . -type f -name "*.py" -exec sed -i "s/foo/bar/g" {} \;
This one is a bit different but may be easier to remember. It uses find to build the list of files and, for each one, substitutes the filename for the {} placeholder, then runs sed on it to replace 'foo' with 'bar'. The escaped semicolon (\;) terminates the -exec command.
Effectively, this command runs something like:
$ sed -i "s/foo/bar/g" script1_found.py;
$ sed -i "s/foo/bar/g" script2_found.py;
$ sed -i "s/foo/bar/g" script3_found.py;
$ sed -i "s/foo/bar/g" script4_found.py;
$ sed -i "s/foo/bar/g" script5_found.py;
$ sed -i "s/foo/bar/g" script6_found.py;
...
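Since sed -i edits files in place with no undo, it can be worth previewing the affected files first, or keeping backups. A sketch, assuming GNU grep and GNU sed (the .bak suffix is an arbitrary choice):
$ grep -rl --include="*.py" "foo" .
$ find . -type f -name "*.py" -exec sed -i.bak "s/foo/bar/g" {} \;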
WRF & ARW
What is WRF?
WRF is short for the Weather Research and Forecasting Model, a numerical weather prediction system. WRF is a state-of-the-art atmospheric modeling system designed for both meteorological research and numerical weather prediction. It offers a host of options for atmospheric processes and can run on a variety of computing platforms.
- Used for both research and operational forecasting
- It is a supported "community model", i.e. a free and shared resource with distributed development and centralized support
- Its development is led by NCAR, NOAA/ESRL and NOAA/NCEP/EMC with partnerships at AFWA, FAA, DOE/PNNL and collaborations with universities and other government agencies in the US and overseas
WRF Community Model
- Version 1.0: December 2000 (initial release)
- Version 2.0: May 2004 (added nesting)
- Version 3.0: April 2008 (added global ARW version)
- ... (major releases in April, minor releases in summer)
- Version 3.8: April 2016
- Version 3.8.1: August 2016
- Version 3.9: April 2017
- Version 3.9.1(.1): August 2017
What is ARW?
WRF has two dynamical cores: the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM).
A dynamical core covers mostly advection, pressure gradients, Coriolis, buoyancy, filters, diffusion, and time-stepping.
Both are Eulerian mass dynamical cores with terrain-following vertical coordinates.
ARW support and development are centered at NCAR/MMM.
NMM development is centered at NCEP/EMC, with support provided by NCAR/DTC (operationally it is now only used for HWRF).
Usage of WRF
ARW and NMM
- Atmospheric physics/parameterization research
- Case-study research
- Real-time NWP and forecast system research
- Data assimilation research
- Teaching dynamics and NWP
ARW only
- Regional climate and seasonal time-scale research
- Coupled-chemistry applications
- Global simulations
- Idealized simulations at many scales (e.g. convection, baroclinic waves, large eddy simulations)
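For idealized cases like these, WPS preprocessing isn't needed: initial conditions come from the bundled ideal.exe program instead. Here's a rough sketch of running one of the standard idealized ARW cases, assuming an already-compiled WRF tree (the case directory is just an example):
$ cd WRF/test/em_b_wave   # bundled idealized baroclinic-wave case
$ ./ideal.exe             # generate idealized initial conditions
$ ./wrf.exe               # integrate the model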
Examples of WRF Forecast
- Hurricane Katrina (August 2005): Moving 4 km nest in a 12 km outer domain
- US Convective System (June 2005): Single 4 km central US domain
Real-Data Applications
- Numerical weather prediction
- Meteorological case studies
- Regional climate
- Applications: air quality, wind energy, hydrology, etc.
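For real-data applications like these, a typical ARW run chains the WPS preprocessors into the model: geogrid interpolates static terrain/land-use data to the model grid, ungrib decodes the input GRIB files (e.g. GFS), metgrid merges the two, real.exe builds initial and boundary conditions, and wrf.exe integrates. A rough sketch, assuming compiled WPS and WRF trees with namelist.wps and namelist.input already edited (the GRIB path is an example):
$ cd WPS
$ ./geogrid.exe                     # static fields -> geo_em files
$ ./link_grib.csh /data/gfs/gfs.*   # link the input GRIB files
$ ./ungrib.exe                      # decode GRIB to intermediate format
$ ./metgrid.exe                     # interpolate onto the model domain
$ cd ../WRF/run
$ ./real.exe                        # make wrfinput/wrfbdy files
$ ./wrf.exe                         # run the forecast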
Ref: https://www.climatescience.org.au/sites/default/files/WRF_Overview_Dudhia_3.9.pdf
Ref: http://www2.mmm.ucar.edu/wrf/users/
CALPUFF
CALPUFF is an advanced, integrated Lagrangian puff modeling system for the simulation of atmospheric pollution dispersion distributed by the Atmospheric Studies Group at TRC Solutions.
It is maintained by the model developers and distributed by TRC. The model has been adopted by the United States Environmental Protection Agency (EPA) in its Guideline on Air Quality Models as a preferred model for assessing long range transport of pollutants and their impacts on Federal Class I areas and on a case-by-case basis for certain near-field applications involving complex meteorological conditions.
The integrated modeling system consists of three main components and a set of preprocessing and postprocessing programs. The main components of the modeling system are CALMET (a diagnostic 3-dimensional meteorological model), CALPUFF (an air quality dispersion model), and CALPOST (a postprocessing package). Each of these programs has a graphical user interface (GUI). In addition to these components, there are numerous other processors that may be used to prepare geophysical (land use and terrain) data in many standard formats, meteorological data (surface, upper air, precipitation, and buoy data), and interfaces to other models such as the Penn State/NCAR Mesoscale Model (MM5), the National Centers for Environmental Prediction (NCEP) Eta model and the RAMS meteorological model.
The CALPUFF model is designed to simulate the dispersion of buoyant, puff or continuous point and area pollution sources as well as the dispersion of buoyant, continuous line sources. The model also includes algorithms for handling the effect of downwash by nearby buildings in the path of the pollution plumes.
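In practice the three main components run as a pipeline: CALMET builds the 3-dimensional meteorological fields, CALPUFF runs the dispersion calculation on top of them, and CALPOST summarizes the concentration output. A rough sketch, assuming command-line (non-GUI) builds of the executables and the conventional control-file names:
$ ./calmet.exe calmet.inp     # diagnostic 3-D meteorological fields
$ ./calpuff.exe calpuff.inp   # puff dispersion driven by the CALMET output
$ ./calpost.exe calpost.inp   # post-process concentrations/deposition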
NetCDF
NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The project homepage is hosted by the Unidata program at the University Corporation for Atmospheric Research (UCAR). They are also the chief source of netCDF software, standards development, updates, etc. The format is an open standard. NetCDF Classic and 64-bit Offset Format are an international standard of the Open Geospatial Consortium.
The project started in 1989 and is still actively supported by UCAR. Version 3.x (released in 1997) is still widely used across the world and maintained by UCAR (most recent update 2017). Version 4.0 (released in 2008) allows the use of the HDF5 data file format. Version 4.1 (2010) adds support for C and Fortran client access to specified subsets of remote data via OPeNDAP. Further releases have improved performance, added features, and fixed bugs.
The format was originally based on the conceptual model of the Common Data Format developed by NASA, but has since diverged and is not compatible with it.
Format
The netCDF libraries support several binary formats for netCDF files:
- The classic format was used in the first netCDF release, and is still the default format for file creation.
- The 64-bit offset format was introduced in version 3.6.0, and it supports larger variable and file sizes.
- The netCDF-4/HDF5 format was introduced in version 4.0; it is the HDF5 data format, with some restrictions.
- The HDF4 SD format is supported for read-only access.
- The CDF5 format is supported, in coordination with the parallel-netcdf project.
Starting with version 4.0, the netCDF API allows the use of the HDF5 data format. NetCDF users can create HDF5 files with benefits not available with the netCDF format, such as much larger files and multiple unlimited dimensions.
Full backward compatibility in accessing old netCDF files and using previous versions of the C and Fortran APIs is supported.
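The format (kind) of a given file can be checked, and files converted between kinds, with the ncdump and nccopy utilities that ship with the C library. A quick sketch (the file names are examples):
$ ncdump -k data.nc                        # print the kind, e.g. "classic" or "netCDF-4"
$ nccopy -k netCDF-4 data.nc data_nc4.nc   # rewrite a classic file as netCDF-4/HDF5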
Access libraries
The software libraries supplied by UCAR provide read-write access to netCDF files, encoding and decoding the necessary arrays and metadata. The core library is written in C, and provides an API for C, C++ and two APIs for Fortran applications, one for Fortran 77, and one for Fortran 90. An independent implementation, also developed and maintained by Unidata, is written in 100% Java; it extends the core data model and adds additional functionality. Interfaces to netCDF based on the C library are also available in other languages, including R (ncdf, ncvar and RNetCDF packages), Perl, Python, Ruby, Haskell, Mathematica, MATLAB, IDL, and Octave. The specification of the API calls is very similar across the different languages, apart from inevitable differences of syntax. The API calls for version 2 were rather different from those in version 3, but are also supported by versions 3 and 4 for backward compatibility. Application programmers using supported languages need not normally be concerned with the file structure itself, even though the formats are open.
Common uses
It is commonly used in climatology, meteorology and oceanography applications (e.g., weather forecasting, climate change). It is an input/output format for many GIS applications, and for general scientific data exchange.
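Because the format is self-describing, a file's structure can be inspected, and its contents exchanged as text, without any prior knowledge of how it was written. A sketch using the bundled ncdump/ncgen pair (file names are examples):
$ ncdump -h data.nc           # header only: dimensions, variables, attributes
$ ncdump data.nc > data.cdl   # full dump to the textual CDL form
$ ncgen -o copy.nc data.cdl   # regenerate a binary netCDF file from the CDL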
Parallel-NetCDF
An extension of netCDF for parallel computing called Parallel-NetCDF (or PnetCDF) has been developed by Argonne National Laboratory and Northwestern University. This is built upon MPI-IO, the I/O extension to MPI communications. Using the high-level netCDF data structures, the Parallel-NetCDF libraries can make use of optimizations to efficiently distribute the file read and write applications between multiple processors. The Parallel-NetCDF package can read/write only classic and 64-bit offset formats. Parallel-NetCDF cannot read or write the HDF5-based format available with netCDF-4.0. The Parallel-NetCDF package uses different, but similar APIs in Fortran and C.
Parallel I/O in the Unidata netCDF library has been supported since release 4.0, for HDF5 data files. Since version 4.1.1 the Unidata NetCDF C library supports parallel I/O to classic and 64-bit offset files using the Parallel-NetCDF library, but with the NetCDF API.
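Which of these capabilities a particular netCDF installation was built with can be checked via the nc-config utility that ships with the C library:
$ nc-config --all   # prints the build's feature flags (netCDF-4, DAP, parallel I/O, ...)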
GRIB format
GRIB (GRIdded Binary or General Regularly-distributed Information in Binary form) is a concise data format commonly used in meteorology to store historical and forecast weather data. It is standardized by the World Meteorological Organization's Commission for Basic Systems, known under number GRIB FM 92-IX, and described in WMO Manual on Codes No. 306.
Currently there are three versions of GRIB.
- Version 0 was used to a limited extent by projects such as TOGA, and is no longer in operational use.
- The first edition (current sub-version is 2) is used operationally worldwide by most meteorological centers for Numerical Weather Prediction (NWP) output.
- A newer generation has been introduced, known as GRIB second edition, and data are slowly migrating to this format. Some second-edition GRIB data are used for derived products distributed via EUMETCast of Meteosat Second Generation. Another example is the NAM (North American Mesoscale) model.
File Format
GRIB files are a collection of self-contained records of 2D data, and the individual records stand alone as meaningful data, with no references to other records or to an overall schema. So collections of GRIB records can be appended to each other or the records separated.
Each GRIB record has two components - the part that describes the record (the header), and the actual binary data itself. The data in GRIB-1 are typically converted to integers using scale and offset, and then bit-packed. GRIB-2 also has the possibility of compression.
GRIB superseded the Aeronautical Data Format (ADF).
The World Meteorological Organization (WMO) Commission for Basic Systems (CBS) met in 1985 to create the GRIB (GRIdded Binary) format. The WGDM in February 1994, after major changes, approved revision 1 of the GRIB format. GRIB Edition 2 format was approved in 2003 at Geneva.
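Because each record is self-contained, a GRIB file can be inventoried record by record with standard tools such as NOAA's wgrib/wgrib2 or ECMWF's ecCodes. A sketch (file names are examples):
$ wgrib gfs_sample.grb      # one inventory line per record in a GRIB-1 file
$ wgrib2 gfs_sample.grb2    # the same for GRIB-2
$ grib_ls gfs_sample.grb2   # ecCodes listing of centre, parameter, level, date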
Problems with GRIB
There is no way in GRIB to describe a collection of GRIB records:
- Each record is independent, with no way to reference the GRIB writer's intended schema.
- No foolproof way to combine records into the multidimensional arrays from which they were derived.
- No authoritative place for centers to publish their local tables.
- Inconsistent and incorrect methods of versioning local tables.
- No machine-readable versions of the WMO tables (now available for GRIB-2, but not GRIB-1)
GRIB 1 Header
There are two parts of the GRIB 1 header - one mandatory (Product Definition Section - PDS) and one optional (Grid Description Section - GDS). The PDS describes who created the data (the research/operation center), the numerical model/process involved (which can be NWP or GCM), the data that is actually stored (such as wind, temperature, ozone concentration etc.), units of the data (meters, pressure etc.), vertical system of the data (constant height, constant pressure, constant potential temperature), and the time stamp.
If a description of the spatial organization of the data is needed, the GDS must be included as well. This information includes spectral (harmonics of divergence and vorticity) vs gridded data (Gaussian, X-Y grid), horizontal resolution, and the location of the origin.
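The decoded PDS and GDS contents of individual records can be inspected with wgrib's verbose mode (the file name is again an example):
$ wgrib -V gfs_sample.grb   # verbose dump: center, parameter, level, time stamp, grid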