How to reconstruct data in pygplates using the model in Li2023

Hi all,

I am trying to use the Li2023 model in GPlates to reconstruct my zircon dataset. The Li2023 model covers 540–2000 Ma, which means its youngest reconstruction time is not the present day but 540 Ma. The issue I am facing is that the latitude and longitude coordinates of my zircon dataset are present-day coordinates, which I think is causing my reconstruction to fail. If someone could tell me how to use this model to reconstruct my data, or has a better idea, I would be very grateful!

Can you be more specific about how the reconstruction fails? The present-day coordinates should not be a problem. Can you show me your code and data?

Thank you mchin!

I am trying to reconstruct latitudes and longitudes for 1000–2000 Ma, but pygplates returns the present-day latitude and longitude unchanged. You may be right, though; maybe I used the wrong rotation file or static polygon file?

Sorry, I don’t know how to upload data files here, but the zircon data file can be found in this article, and the file from Li2023 can be found here, dataset 4.

Here is the code:

# %% import lib
import pandas as pd
import numpy as np
import pygplates

# %%
def reconstruct_coordinates(pygplates_recon_geom):
    rlats = []
    rlons = []
    for i in pygplates_recon_geom:
        # reconstructed coordinates
        recon_point = i.get_reconstructed_geometry()
        rlat, rlon = recon_point.to_lat_lon()
        rlats.append(rlat)
        rlons.append(rlon)
    return rlats, rlons

# %% Reconstruct zircon data with PyGPlates
# Let's import some plate reconstruction files, since we'll need this very soon. 
rotation_filename = './plate-model-repo/li2023/EDRG_90W_2000-540Ma.rot'
# static_polygon_file = './plate-model-repo/li2023/EDRG_topology_90W_2000-540Ma.gpml'
static_polygon_file = './plate-model-repo/li2023/Continental_crust.shp'

# read into pygplates
rotation_model = pygplates.RotationModel(rotation_filename)
static_polygons = pygplates.FeatureCollection(static_polygon_file)
# %%
hf = pd.read_excel('data/41597_2023_2902_MOESM4_ESM.xlsx', sheet_name='Lu-Hf_Data')
# hf.columns.tolist()
# cols = ['Ref-Sample Key', 'Sample-ID', 'U-Pb    Age  (Ma)', 'εHf(t) calc', 'εHf(t) 2σ calc  ', 'Latitude', 'Longitude']
# hf = hf[cols]
# hf.dropna(axis=0, how='any', inplace=True)
age_min = 1000
age_max = 2000
age_range = age_max - age_min
hf = hf[(hf['U-Pb    Age  (Ma)'] >= age_min) & (hf['U-Pb    Age  (Ma)'] <= age_max)]
# split 1000 million years into 50 bins
nbins = 50
age_itv = age_range / nbins
binedges = np.linspace(age_min, age_max, nbins + 1)
hf['age_bin'] = pd.cut(hf['U-Pb    Age  (Ma)'], binedges, include_lowest=True, labels=np.arange(0, nbins))
hf.reset_index(drop=True, inplace=True)
# %% convert zircon data to gplates feature
zir_point_features = []

for index, row in hf.iterrows():
    point = pygplates.PointOnSphere(float(row.Latitude), float(row.Longitude))  # use the lat and lon columns as our coordinates
    zir_point_feature = pygplates.Feature()
    zir_point_feature.set_geometry(point)  # attach the point geometry to the feature
    # set time the point is valid for
    zir_point_feature.set_valid_time(row['U-Pb    Age  (Ma)'] + 0.5 * age_itv, row['U-Pb    Age  (Ma)'] - 0.5 * age_itv)  # assign ages for each point
    zir_point_features.append(zir_point_feature)   # save to zir_point_features
# %% Assign plate IDs to the zircons data so that each point now has a plate ID.
# This can take a while for really large datasets

zir_cc = pygplates.partition_into_plates(static_polygons, rotation_model, zir_point_features,
                                         properties_to_copy=[pygplates.PartitionProperty.reconstruction_plate_id])
# output_points_filename = 'hf_zir_with_plate_ids.gpml'
# # Write the assigned point features to the output GPML file (ready for use in GPlates).
# assigned_point_feature_collection = pygplates.FeatureCollection(zir_cc)
# assigned_point_feature_collection.write(output_points_filename)

# %% reconstruct zircon hf
hf['rlat'] = np.nan
hf['rlon'] = np.nan
for i in range(nbins):
    reconstructed_zir_hf = []
    reconstruction_time = age_min + (i + 0.5) * age_itv  # midpoint of the age bin
    hf_bin_idx = hf.loc[hf['age_bin'] == i].index.tolist()
    zir_cc_bin = [zir_cc[j] for j in hf_bin_idx]  # get the zircon features for this age bin
    pygplates.reconstruct(zir_cc_bin, rotation_model, reconstructed_zir_hf, reconstruction_time)
    rlats, rlons = reconstruct_coordinates(reconstructed_zir_hf)
    # print("length of rlats:", len(rlats), "length of rlons:", len(rlons), "length of hf_bin_idx:", len(hf_bin_idx))
    hf.iloc[hf_bin_idx, -2] = rlats
    hf.iloc[hf_bin_idx, -1] = rlons
# %%
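As an aside, the age-binning step can be sanity-checked on its own, without pygplates or the real dataset. A minimal sketch of the same pd.cut logic, using a few made-up ages (not values from the actual Excel file):

```python
import numpy as np
import pandas as pd

# Reproduce the binning scheme from the code above: 1000-2000 Ma split
# into 50 bins of 20 Myr each.
age_min, age_max, nbins = 1000, 2000, 50
age_itv = (age_max - age_min) / nbins
binedges = np.linspace(age_min, age_max, nbins + 1)

# Hypothetical zircon ages in Ma.
ages = pd.Series([1000.0, 1010.0, 1500.0, 1999.9])
bins = pd.cut(ages, binedges, include_lowest=True, labels=np.arange(nbins))

# Bin i spans (1000 + i*20, 1000 + (i+1)*20]; its midpoint,
# age_min + (i + 0.5) * age_itv, is the reconstruction time used later.
print(bins.tolist())  # [0, 0, 24, 49]
```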

I’d appreciate it if you could tell me what happened!

The problem is that pygplates.partition_into_plates() failed to assign plate IDs to your points.

You used li2023/Continental_crust.shp to partition the points. However, those polygons do not exist at present day (0 Ma), so you cannot use them to partition the present-day coordinates of your points. Even if the polygons did exist at 0 Ma, they would not cover all your points. See the screenshots below.

You need proper static polygons to assign plate IDs for your points.

See the valid times of the polygons.

See that there are no polygons at 0 Ma.
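To make the valid-time issue concrete, here is a pure-Python sketch of the check that partitioning effectively performs. In pygplates you would read the actual (begin, end) valid times with feature.get_valid_time() on each feature in the FeatureCollection; the tuples below are hypothetical values mimicking polygons that only exist between 2000 and 540 Ma:

```python
def exists_at(valid_time, time):
    """valid_time is (begin, end) in Ma, with begin >= end (GPlates convention)."""
    begin, end = valid_time
    return end <= time <= begin

# Hypothetical valid times, mimicking the li2023 Continental_crust polygons.
polygon_valid_times = [(2000.0, 540.0), (1800.0, 540.0)]

# No polygon exists at present day (0 Ma), so partitioning present-day
# points leaves every point unassigned.
print(any(exists_at(vt, 0.0) for vt in polygon_valid_times))     # False
print(all(exists_at(vt, 1000.0) for vt in polygon_valid_times))  # True
```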


As Dietmar has pointed out:

The li2023 model does not have static polygons because it does not have rotations for 0–540 Ma. In other words, it does not come with any data that exist at present day, so you cannot use that model to partition or rotate any present-day data.
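One quick way to confirm this yourself is to look at the time range covered by the rotation file. A sketch that parses the standard GPlates .rot line format (moving plate ID, time, lat, lon, angle, fixed plate ID, optional "!" comment); the sample lines are made up for illustration, not real entries from the EDRG file:

```python
def rot_time_range(lines):
    """Return (min_time, max_time) in Ma over all rotation poles, or None."""
    times = []
    for line in lines:
        line = line.split('!', 1)[0].strip()  # drop trailing "!" comments
        if not line:
            continue
        fields = line.split()
        times.append(float(fields[1]))  # second column is the time in Ma
    return (min(times), max(times)) if times else None

# Hypothetical lines mimicking a rotation file that starts at 540 Ma
# rather than 0 Ma:
sample = [
    "101  540.0  90.0   0.0   0.0  000 ! sample pole",
    "101 2000.0  45.0 -30.0  25.0  000 ! sample pole",
]
print(rot_time_range(sample))  # (540.0, 2000.0) -- nothing younger than 540 Ma
```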

Thank you for your detailed explanation!

I am wondering if we could first use another model (such as muller2022) to reconstruct today's coordinates back to 540 Ma, and then use Li2023 to reconstruct the ancient positions of these samples in the 540–2000 Ma range?

Do you think this is feasible, or do you know of any other pygplates-compatible models that can reconstruct ancient positions over 0–2000 Ma?

Thank you!

Have you read this article? The first author worked with us for a few years. He might have something useful.

You cannot mix different models.

I have not read this article, thanks for the info!