Quickstart
Installation
Install MagentroPy with pip:
pip install magentropy
Or, with conda:
conda install -c conda-forge magentropy
Logging
The logger can be accessed as follows:
import logging
logger = logging.getLogger('magentropy')
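MagentroPy reports progress through this logger, so the standard logging machinery applies. For example, to suppress the informational messages printed during processing (the WARNING threshold here is just an illustration; pick whatever level suits your workflow):

```python
import logging

# Raise the threshold on MagentroPy's logger so that only warnings
# and errors are shown; INFO-level progress messages are suppressed.
logger = logging.getLogger('magentropy')
logger.setLevel(logging.WARNING)
```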
The MagentroData class
MagentroData serves as a representation of magnetoentropic data and provides methods for reading, processing, and plotting the data. Because MagentroPy depends on other packages such as numpy, scipy, and matplotlib, importing can take several seconds.
from magentropy import MagentroData
Reading data
The easiest and most common way to read in data is from a Quantum Design .dat output file, such as one produced by an MPMS3 SQUID Magnetometer. With the default settings, no additional arguments need to be given to the constructor in this case. See Reading Data for additional information.
magdata = MagentroData('magdata.dat')
"[Data]" tag detected, assuming QD .dat file.
The sample mass was determined from the QD .dat file: 0.1
The sample mass of 0.1 is parsed from the .dat file. The default mass units are “mg”. We can easily view a summary of the object when using a notebook:
magdata
Sample mass: 0.1 mg
Raw data:
| | T | H | M | M_err | M_per_mass | M_per_mass_err | dM_dT | Delta_SM |
|---|---|---|---|---|---|---|---|---|
| unit | K | Oe | emu | emu | emu/g | emu/g | cal/K/Oe/g | cal/K/g |
| 0 | 1.000000 | 20.000001 | 0.002023 | 0.00005 | 20.232376 | 0.5 | NaN | NaN |
| 1 | 2.000000 | 20.000000 | 0.001977 | 0.00005 | 19.770351 | 0.5 | NaN | NaN |
| 2 | 3.000001 | 19.999998 | 0.001969 | 0.00005 | 19.691176 | 0.5 | NaN | NaN |
| 3 | 4.000000 | 19.999999 | 0.001970 | 0.00005 | 19.703463 | 0.5 | NaN | NaN |
| 4 | 4.999999 | 20.000001 | 0.001886 | 0.00005 | 18.861315 | 0.5 | NaN | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 495 | 95.999998 | 100.000002 | 0.001403 | 0.00005 | 14.030179 | 0.5 | NaN | NaN |
| 496 | 97.000000 | 99.999999 | 0.001446 | 0.00005 | 14.458498 | 0.5 | NaN | NaN |
| 497 | 98.000001 | 100.000001 | 0.001324 | 0.00005 | 13.244919 | 0.5 | NaN | NaN |
| 498 | 98.999999 | 99.999999 | 0.001184 | 0.00005 | 11.835327 | 0.5 | NaN | NaN |
| 499 | 100.000000 | 100.000000 | 0.001172 | 0.00005 | 11.722362 | 0.5 | NaN | NaN |
500 rows × 8 columns
Converted data:
| | T | H | M | M_err | M_per_mass | M_per_mass_err | dM_dT | Delta_SM |
|---|---|---|---|---|---|---|---|---|
| unit | K | T | A·m² | A·m² | A·m²/kg | A·m²/kg | J/K/T/kg | J/K/kg |
| 0 | 1.000000 | 0.002 | 0.000002 | 5.000000e-08 | 20.232376 | 0.5 | NaN | NaN |
| 1 | 2.000000 | 0.002 | 0.000002 | 5.000000e-08 | 19.770351 | 0.5 | NaN | NaN |
| 2 | 3.000001 | 0.002 | 0.000002 | 5.000000e-08 | 19.691176 | 0.5 | NaN | NaN |
| 3 | 4.000000 | 0.002 | 0.000002 | 5.000000e-08 | 19.703463 | 0.5 | NaN | NaN |
| 4 | 4.999999 | 0.002 | 0.000002 | 5.000000e-08 | 18.861315 | 0.5 | NaN | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 495 | 95.999998 | 0.010 | 0.000001 | 5.000000e-08 | 14.030179 | 0.5 | NaN | NaN |
| 496 | 97.000000 | 0.010 | 0.000001 | 5.000000e-08 | 14.458498 | 0.5 | NaN | NaN |
| 497 | 98.000001 | 0.010 | 0.000001 | 5.000000e-08 | 13.244919 | 0.5 | NaN | NaN |
| 498 | 98.999999 | 0.010 | 0.000001 | 5.000000e-08 | 11.835327 | 0.5 | NaN | NaN |
| 499 | 100.000000 | 0.010 | 0.000001 | 5.000000e-08 | 11.722362 | 0.5 | NaN | NaN |
500 rows × 8 columns
Processed data:
| | T | H | M | M_err | M_per_mass | M_per_mass_err | dM_dT | Delta_SM |
|---|---|---|---|---|---|---|---|---|
| unit | K | T | A·m² | A·m² | A·m²/kg | A·m²/kg | J/K/T/kg | J/K/kg |
Presets:
{
npoints: 1000,
temp_range: [-inf inf],
fields: [],
decimals: 5,
max_diff: inf,
min_sweep_len: 10,
d_order: 2,
lmbds: [nan],
lmbd_guess: 0.0001,
weight_err: True,
match_err: False,
min_kwargs: {'method': 'Nelder-Mead', 'bounds': ((-inf, inf),), 'options': {'maxfev': 50, 'xatol': 0.01, 'fatol': 1e-06}},
add_zeros: False
}
Above, we see the sample mass, raw data, converted data (SI units), processed data (currently empty), and presets, which are the default data processing settings. These are available individually as the attributes sample_mass, raw_df, converted_df, processed_df, and presets. For example:
magdata.sample_mass
0.1
Raw units can also be obtained using the get_raw_data_units() method:
magdata.get_raw_data_units()
{'T': 'K', 'H': 'Oe', 'M': 'emu', 'sample_mass': 'mg'}
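These CGS units map onto the SI units of the converted table through fixed factors: 1 emu = 10⁻³ A·m², 1 Oe = 10⁻⁴ T, and 1 mg = 10⁻⁶ kg. As a quick sanity check (plain Python, independent of MagentroPy), the first row of the raw table reproduces the first row of the converted table:

```python
# CGS -> SI conversion factors underlying the converted data.
EMU_TO_AM2 = 1e-3   # 1 emu = 1e-3 A*m^2
OE_TO_T = 1e-4      # 1 Oe  = 1e-4 T
MG_TO_KG = 1e-6     # 1 mg  = 1e-6 kg

m_raw = 0.002023    # emu, first row of the raw table
h_raw = 20.0        # Oe
mass_raw = 0.1      # mg

m_si = m_raw * EMU_TO_AM2                    # ~2.0e-6 A*m^2
h_si = h_raw * OE_TO_T                       # 0.002 T
m_per_mass = m_si / (mass_raw * MG_TO_KG)    # A*m^2/kg

print(round(h_si, 3), round(m_per_mass, 2))  # 0.002 20.23
```

Note that emu/g and A·m²/kg differ by the same factor of 10⁻³ in numerator and denominator, which is why the M_per_mass columns are numerically identical in the raw and converted tables.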
Additional information about reading data and changing units can be found in Reading Data and Units and Conversions, respectively.
Processing data
The various settings in presets indicate the default arguments for the process_data() method. These are explored further in Processing Data. Here, we smooth the magnetic moment, differentiate with respect to temperature, and integrate with respect to the magnetic field to fill the missing columns in each data attribute.
magdata.process_data()
The data contains the following 5 magnetic field strengths and observations per field:
20.0 100
40.0 100
60.0 100
80.0 100
100.0 100
Name: T, dtype: int64
Processing data using the following settings:
{
npoints: 1000,
temp_range: [-inf inf],
fields: [],
decimals: 5,
max_diff: inf,
min_sweep_len: 10,
d_order: 2,
lmbds: [nan],
lmbd_guess: 0.0001,
weight_err: True,
match_err: False,
min_kwargs: {'method': 'Nelder-Mead', 'bounds': ((-inf, inf),), 'options': {'maxfev': 50, 'xatol': 0.01, 'fatol': 1e-06}},
add_zeros: False
}
scipy.optimize.minimize: Optimization terminated successfully.
Processed M(T) at field: 20.0
scipy.optimize.minimize: Optimization terminated successfully.
Processed M(T) at field: 40.0
scipy.optimize.minimize: Optimization terminated successfully.
Processed M(T) at field: 60.0
scipy.optimize.minimize: Optimization terminated successfully.
Processed M(T) at field: 80.0
scipy.optimize.minimize: Optimization terminated successfully.
Processed M(T) at field: 100.0
Calculated raw derivative and entropy.
last_presets set to:
{
npoints: 1000,
temp_range: [ 0.99999934 100.00000083],
fields: [ 20. 40. 60. 80. 100.],
decimals: 5,
max_diff: inf,
min_sweep_len: 10,
d_order: 2,
lmbds: [0.00091728 0.00054639 0.00072862 0.00091728 0.00095775],
lmbd_guess: 0.0001,
weight_err: True,
match_err: False,
min_kwargs: {'method': 'Nelder-Mead', 'bounds': ((-inf, inf),), 'options': {'maxfev': 50, 'xatol': 0.01, 'fatol': 1e-06}},
add_zeros: False
}
Finished.
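The entropy column is obtained from the Maxwell relation ΔS_M(T, H) = ∫₀ᴴ (∂M/∂T) dH′, applied after the moment has been smoothed and differentiated. A minimal sketch of that final integration step, using a cumulative trapezoidal rule on synthetic dM/dT values (not this dataset, and not MagentroPy's internal code):

```python
def delta_sm(fields, dm_dt):
    """Cumulative trapezoidal integral of dM/dT over field,
    giving Delta_SM at each field (assumes fields start at 0)."""
    out = [0.0]
    for i in range(1, len(fields)):
        step = (fields[i] - fields[i - 1]) * (dm_dt[i] + dm_dt[i - 1]) / 2
        out.append(out[-1] + step)
    return out

# Synthetic example: dM/dT linear in H, so Delta_SM is quadratic in H.
fields = [0.0, 0.002, 0.004, 0.006, 0.008, 0.010]   # field values, T
dm_dt = [-100.0 * h for h in fields]                 # J/K/T/kg (made up)

print(round(delta_sm(fields, dm_dt)[-1], 6))  # -0.005 = -100 * 0.01**2 / 2
```

The trapezoidal rule is exact here because the synthetic dM/dT is linear in the field.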
The field groups and regularization (smoothing) parameters are determined automatically by default.
We can see that there are five different magnetic field strengths with 100 temperature points each.
The processed_df attribute now contains the results:
magdata.processed_df
| | T | H | M | M_err | M_per_mass | M_per_mass_err | dM_dT | Delta_SM |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.999999 | 0.002 | 0.000002 | NaN | 19.847003 | NaN | -0.098265 | -0.000098 |
| 1 | 1.099098 | 0.002 | 0.000002 | NaN | 19.837267 | NaN | -0.098240 | -0.000098 |
| 2 | 1.198198 | 0.002 | 0.000002 | NaN | 19.827533 | NaN | -0.098202 | -0.000098 |
| 3 | 1.297297 | 0.002 | 0.000002 | NaN | 19.817803 | NaN | -0.098138 | -0.000098 |
| 4 | 1.396396 | 0.002 | 0.000002 | NaN | 19.808082 | NaN | -0.098050 | -0.000098 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 4995 | 99.603604 | 0.010 | 0.000001 | NaN | 11.673323 | NaN | -0.829550 | -0.004619 |
| 4996 | 99.702704 | 0.010 | 0.000001 | NaN | 11.591120 | NaN | -0.829466 | -0.004620 |
| 4997 | 99.801803 | 0.010 | 0.000001 | NaN | 11.508924 | NaN | -0.829407 | -0.004620 |
| 4998 | 99.900902 | 0.010 | 0.000001 | NaN | 11.426733 | NaN | -0.829371 | -0.004620 |
| 4999 | 100.000001 | 0.010 | 0.000001 | NaN | 11.344545 | NaN | -0.829347 | -0.004620 |
5000 rows × 8 columns
Similarly, one could check that the derivative and entropy have been calculated for the raw data.
Notice that the error columns in the processed data are empty. An experimental bootstrap() method is implemented to estimate the error in the smoothed moment. See Bootstrap Estimates.
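The bootstrap idea itself is generic: resample the data with replacement, recompute the quantity of interest on each resample, and take the spread across resamples as the error estimate. A stdlib illustration of the principle (here estimating the standard error of a mean; this is not MagentroPy's bootstrap() implementation):

```python
import random
import statistics

def bootstrap_std(values, n_resamples=1000, seed=0):
    """Bootstrap estimate of the standard error of the mean:
    resample with replacement, then take the spread of the means."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.choices(values, k=len(values)))
        for _ in range(n_resamples)
    ]
    return statistics.stdev(means)

data = [19.8, 20.1, 20.3, 19.9, 20.2, 20.0]  # made-up measurements
print(round(bootstrap_std(data), 3))
```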
Plotting Data
import matplotlib.pyplot as plt
Line plots and heat maps can be easily created with plot_lines() and plot_map().
fig, ax = plt.subplots(figsize=(6, 4))
magdata.plot_lines(data_prop='M_per_mass', data_version='compare', ax=ax);
fig, ax = plt.subplots(1, 2, figsize=(9, 3.75), sharey=True)
magdata.plot_map(data_prop='M_per_mass', data_version='converted', ax=ax[0], colorbar=False)
magdata.plot_map(
data_prop='M_per_mass', data_version='processed', ax=ax[1], colorbar=True,
colorbar_kwargs={'ax': ax, 'fraction': 0.10, 'pad': 0.05}
)
ax[1].set_ylabel('');
The second figure demonstrates the natural use of matplotlib with the plotting methods.
Many additional options are available; see Plotting Data for more information.
Writing output
Because data is represented as DataFrames, one can use methods such as DataFrame.to_csv() to write output to files.
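For instance, magdata.processed_df.to_csv('processed.csv') would save the processed results. A self-contained sketch with a small stand-in frame (pandas is available wherever MagentroPy is installed, since the data attributes are DataFrames):

```python
import io
import pandas as pd

# Stand-in for one of the data attributes; each is an ordinary
# pandas DataFrame, so the usual DataFrame I/O methods apply.
df = pd.DataFrame({'T': [1.0, 2.0], 'H': [0.002, 0.002]})

buf = io.StringIO()               # or a filename such as 'processed.csv'
df.to_csv(buf, index=False)
print(buf.getvalue().splitlines()[0])   # T,H
```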
See Reading Data for reading in data from delimited files and
Plotting Data for plotting previously-processed data.