A minimal example showing how to use dtype_diet to optimize a DataFrame's memory footprint.
# sell_prices.csv.zip
# Source data: https://www.kaggle.com/c/m5-forecasting-uncertainty/
import pandas as pd
from dtype_diet import report_on_dataframe, optimize_dtypes

df = pd.read_csv('data/sell_prices.csv')
df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6841121 entries, 0 to 6841120
Data columns (total 4 columns):
 #   Column      Dtype  
---  ------      -----  
 0   store_id    object 
 1   item_id     object 
 2   wm_yr_wk    int64  
 3   sell_price  float64
dtypes: float64(1), int64(1), object(2)
memory usage: 957.5 MB
proposed_df = report_on_dataframe(df, unit="MB")
proposed_df
            Current dtype Proposed dtype  Current Memory (MB)  Proposed Memory (MB)  Ram Usage Improvement (MB)  Ram Usage Improvement (%)
Column
store_id           object       category        203763.920410           3340.907715               200423.012695                  98.360403
item_id            object       category        233039.977539           6824.677734               226215.299805                  97.071456
wm_yr_wk            int64          int16         26723.191406           6680.844727                20042.346680                  74.999825
sell_price        float64           None         26723.191406                   NaN                         NaN                        NaN

The report lists a proposed dtype for each column. Review each proposal before converting: a narrower dtype can overflow if future data exceeds its range, so modify the proposals where needed, as sketched below.
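
If a proposal looks too aggressive, you can edit the report before converting. A minimal sketch, assuming the report is a plain pandas DataFrame indexed by column name with a 'Proposed dtype' column as shown above (the int32 override is just an illustrative choice, not a dtype_diet recommendation):

# Keep more headroom for wm_yr_wk by widening the proposal from int16 to int32
proposed_df.loc['wm_yr_wk', 'Proposed dtype'] = 'int32'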

new_df = optimize_dtypes(df, proposed_df)

optimize_dtypes takes your df and the proposed_df as arguments and returns a new DataFrame converted to the proposed dtypes.
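
As a sanity check after converting (plain pandas, not part of dtype_diet), you can confirm the new dtypes and spot-check that a downcast column still matches the original values; the column name comes from the report above:

# Confirm the converted dtypes
print(new_df.dtypes)
# Spot-check that downcasting wm_yr_wk did not change any values
print((new_df['wm_yr_wk'] == df['wm_yr_wk']).all())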

print(f'Original df memory: {df.memory_usage(deep=True).sum()/1024/1024} MB')
print(f'Proposed df memory: {new_df.memory_usage(deep=True).sum()/1024/1024} MB')
Original df memory: 957.5197134017944 MB
Proposed df memory: 85.09655094146729 MB