
root pandas is now automatically dropping columns... #17

Closed (wants to merge 4 commits)
15 changes: 15 additions & 0 deletions root_pandas/readwrite.py
@@ -14,6 +14,7 @@
from math import ceil
import re
import ROOT
import warnings

from .utils import stretch

@@ -147,6 +148,18 @@ def do_flatten(arr):
        arr = append_fields(arr_, '__array_index', idx, usemask=False, asrecarray=True)
        return arr

    def remove_high_dimensions(arr):
Collaborator: How about def remove_nonscalar(arr) to make the purpose a bit clearer?

Contributor Author: I wanted to leave the name flexible, but I am okay with calling it like this, in which case I will also skip the explicit allowed_dimensions = [0] constant.

        allowed_dimensions = [0]
        first_row = arr[0]
        good_cols = [True if x.ndim in allowed_dimensions else False for x in first_row]
        col_names = np.array(list(arr.dtype.names))
Collaborator: Why do you need the list here? Looks like arr.dtype.names is a tuple, which we should be able to convert to an array directly.

Contributor Author: I didn't know, thanks!
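
A quick check of the suggested conversion (throwaway structured array, not from the PR):

import numpy as np

arr = np.zeros(3, dtype=[('px', 'f8'), ('py', 'f8')])   # stand-in for the root2array output
col_names = np.array(arr.dtype.names)                    # the tuple converts directly; no list() needed
print(col_names)                                         # ['px' 'py']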

        good_names = col_names[np.array(good_cols)]
        bad_names = col_names[np.array([not x for x in good_cols])]
Collaborator: Might make sense to define good_cols = np.array([...]) in the first place, then the line above is simpler and here we can do col_names[~good_cols].
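
A minimal sketch of that simplification, with hypothetical branch values standing in for first_row and col_names:

import numpy as np

allowed_dimensions = [0]
first_row = (np.float64(1.0), np.array([1.0, 2.0]), np.int32(7))   # hypothetical row: scalar, array, scalar
col_names = np.array(['pt', 'hits', 'charge'])                     # hypothetical branch names

good_cols = np.array([x.ndim in allowed_dimensions for x in first_row])   # boolean array from the start
good_names = col_names[good_cols]    # array(['pt', 'charge'], ...)
bad_names = col_names[~good_cols]    # array(['hits'], ...)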

        for bad_name in bad_names:
            warnings.warn("Dropped {bad_name} branch because dimension is unfit for DataFrame"
Collaborator: Would be better to only emit a single warning in my opinion. How about "Ignored the following non-scalar branches: {bad_names}".format(bad_names=", ".join(bad_names))?

                          .format(bad_name=bad_name), UserWarning)
        return arr[good_names]
Collaborator: This might be a bit of a problem. We definitely want to avoid making a copy here. At least on my system, this creates a copy and emits a FutureWarning that this might change in the future.

Contributor Author: Does the FutureWarning make any suggestions regarding alternatives?
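
If it helps, np.shares_memory gives a quick way to check what a given NumPy version does for this kind of multi-field selection (hypothetical array, not tied to the PR):

import numpy as np

arr = np.zeros(5, dtype=[('a', 'f8'), ('b', 'f8'), ('c', 'i4')])
sub = arr[['a', 'b']]              # multi-field selection, like return arr[good_names]
print(np.shares_memory(arr, sub))  # False while this makes a copy; True once it returns a view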


    if chunksize:
        tchain = ROOT.TChain(key)
        for path in paths:
@@ -159,13 +172,15 @@ def genchunks():
                arr = root2array(paths, key, all_vars, start=chunk * chunksize, stop=(chunk+1) * chunksize, selection=where, *args, **kwargs)
                if flatten:
                    arr = do_flatten(arr)
                arr = remove_high_dimensions(arr)
                yield convert_to_dataframe(arr)

        return genchunks()

    arr = root2array(paths, key, all_vars, selection=where, *args, **kwargs)
    if flatten:
        arr = do_flatten(arr)
    arr = remove_high_dimensions(arr)
    return convert_to_dataframe(arr)
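
For context, a usage sketch (file and tree names are made up): with this patch, any branch that is still non-scalar after the optional flattening is dropped from the resulting DataFrame and reported via warnings.warn.

from root_pandas import read_root

# hypothetical input; branches left non-scalar after flattening would be dropped with a UserWarning
df = read_root('events.root', 'tree', flatten=True)
print(df.columns)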

