Releases: MobileTeleSystems/RecTools
0.17.0
Added
- LiGR transformer layers from "From Features to Transformers: Redefining Ranking for Scalable Impact" (#295)
0.16.0
✨ Highlights ✨
HSTU architecture from "Actions Speak Louder than Words..." is now available in RecTools as HSTUModel, fully compatible with our fit / recommend paradigm and capable of context-aware recommendations.
Added
- HSTU Model from "Actions Speak Louder than Words..." implemented in the class `HSTUModel` (#290)
- `leave_one_out_mask` function (`rectools.models.nn.transformers.utils.leave_one_out_mask`) for applying leave-one-out validation during transformer model training (#292)
- `logits_t` argument to `TransformerLightningModuleBase`. It is used to scale logits when computing the loss. (#290)
- `use_scale_factor` argument to `LearnableInversePositionalEncoding`. It scales embeddings by the square root of their dimension, following the original approach from "Attention Is All You Need" (#290)
- Optional `context` argument to the `recommend` method of models and `get_context` function in `rectools.dataset.context.py` (#290)
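The `use_scale_factor` idea, multiplying looked-up embeddings by the square root of their dimension as in "Attention Is All You Need", can be sketched independently of RecTools. All names below are illustrative, not the library's internals:

```python
import math

def scaled_embedding_lookup(table, ids, use_scale_factor=True):
    """Look up embeddings and optionally scale them by sqrt(dim).

    Scaling keeps embedding magnitudes comparable to positional
    encodings, following "Attention Is All You Need".
    """
    dim = len(table[0])
    scale = math.sqrt(dim) if use_scale_factor else 1.0
    return [[x * scale for x in table[i]] for i in ids]

# Toy 4-dimensional embedding table for 3 items.
table = [[0.1, 0.2, 0.3, 0.4],
         [1.0, 0.0, 0.0, 0.0],
         [0.5, 0.5, 0.5, 0.5]]
# sqrt(4) == 2, so the looked-up row is doubled.
out = scaled_embedding_lookup(table, [1], use_scale_factor=True)
```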
Fixed
- [Breaking] Corrected computation of `cosine` distance in `DistanceSimilarityModule` (#290)
- Installation issue with `cupy` extra on macOS (#293)
- `torch.dtype object has no attribute 'kind'` error in `TorchRanker` (#293)
Removed
- [Breaking] `Dropout` module from `IdEmbeddingsItemNet`. This changes model behaviour during training, so model results starting from this release might slightly differ from previous RecTools versions even when the random seed is fixed. (#290)
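The reproducibility caveat above is a general one: removing a dropout layer changes how many numbers the random generator draws, so downstream randomness diverges even with a fixed seed. A toy illustration in plain Python (real models consume torch RNG state, but the principle is the same):

```python
import random

def forward_with_dropout(rng, x, use_dropout):
    # Dropout consumes random draws; removing it shifts the RNG stream.
    if use_dropout:
        x = [v for v in x if rng.random() >= 0.5] or [0.0]
    # A later "random" operation, e.g. negative sampling.
    return rng.random()

a = forward_with_dropout(random.Random(42), [1.0, 2.0], use_dropout=True)
b = forward_with_dropout(random.Random(42), [1.0, 2.0], use_dropout=False)
# Same seed, but the downstream draw differs once dropout is removed.
```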
0.15.0
Added
- Support for resaving transformer models multiple times and loading trainer state (#289)
- `extras` argument to `SequenceDataset`, `extra_cols` argument to `TransformerDataPreparatorBase`, `session_tower_forward` and `item_tower_forward` methods to `SimilarityModuleBase` (#287)
Fixed
- [Breaking] `LastNSplitter` now guarantees taking the last ordered interaction in the dataframe in case of identical timestamps (#288)
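The tie-breaking guarantee can be sketched with a stable sort: among identical timestamps, the interaction that appears later in the dataframe is treated as "last". This is an illustrative pure-Python version, not the actual `LastNSplitter` code:

```python
def split_last_n(interactions, n=1):
    """Hold out each user's last n interactions as the test fold.

    `interactions` is a list of (user, item, timestamp) in dataframe
    order. Python's sort is stable, so for identical timestamps the
    row that comes later in the dataframe wins.
    """
    by_user = {}
    for pos, (user, item, ts) in enumerate(interactions):
        by_user.setdefault(user, []).append((ts, pos, item))
    train, test = [], []
    for user, rows in by_user.items():
        rows.sort(key=lambda r: r[0])  # stable: keeps dataframe order on ties
        for ts, pos, item in rows[:-n]:
            train.append((user, item, ts))
        for ts, pos, item in rows[-n:]:
            test.append((user, item, ts))
    return train, test

# User 1 has two interactions with the same timestamp; the one that
# appears later in the dataframe ("b") goes to the test fold.
train, test = split_last_n([(1, "a", 10), (1, "b", 10)], n=1)
```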
0.14.0
Added
- Python 3.13 support (#227)
- `fit_partial` implementation for Transformer-based models (#273)
- `map_location` and `model_params_update` arguments for the `load_from_checkpoint` function of Transformer-based models. Use `map_location` to explicitly specify the new computing device and `model_params_update` to update original model parameters (e.g. to remove training-specific parameters that are no longer needed) (#281)
- `get_val_mask_func_kwargs` and `get_trainer_func_kwargs` arguments for Transformer-based models to allow keyword arguments in custom functions used for model training (#280)
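The kwargs plumbing for custom training functions follows a common pattern: the model stores user-supplied keyword arguments and forwards them when invoking the custom function. A generic sketch with hypothetical names, not the RecTools signatures:

```python
def default_get_trainer(max_epochs=3, accelerator="cpu"):
    # Stand-in for a user function building a Lightning-style trainer config.
    return {"max_epochs": max_epochs, "accelerator": accelerator}

class ModelWithCustomTrainer:
    """Stores a trainer factory plus kwargs and applies them at fit time."""

    def __init__(self, get_trainer_func=default_get_trainer,
                 get_trainer_func_kwargs=None):
        self.get_trainer_func = get_trainer_func
        self.get_trainer_func_kwargs = get_trainer_func_kwargs or {}

    def fit(self):
        # Keyword arguments reach the user's custom function unchanged.
        return self.get_trainer_func(**self.get_trainer_func_kwargs)

trainer = ModelWithCustomTrainer(
    get_trainer_func_kwargs={"max_epochs": 10},
).fit()
```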
0.13.0
✨ Highlights ✨
Transformer models get more customization options, including negative sampling, similarity functions and the backbone model. Sampled softmax loss is now supported by all transformer models and the cosine similarity function is now available out of the box.
Added
- `TransformerNegativeSamplerBase` and `CatalogUniformSampler` classes, `negative_sampler_type` and `negative_sampler_kwargs` parameters to transformer-based models (#275)
- `SimilarityModuleBase` and `DistanceSimilarityModule` classes, similarity module to `TransformerTorchBackbone`, `similarity_module_type` and `similarity_module_kwargs` parameters to transformer-based models (#272)
- `TransformerBackboneBase` class, `backbone_type` and `backbone_kwargs` parameters to transformer-based models (#277)
- `sampled_softmax` loss option for transformer models (#274)
- `out_dim` property to `IdEmbeddingsItemNet`, `CatFeaturesItemNet` and `SumOfEmbeddingsConstructor` (#276)
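Sampled softmax approximates a full-catalog softmax by computing cross-entropy over the positive item plus a small set of sampled negatives. A minimal numeric sketch in pure Python; the actual RecTools implementation (e.g. any logQ correction) may differ:

```python
import math

def sampled_softmax_loss(pos_score, neg_scores):
    """Cross-entropy over [positive] + sampled negatives.

    Equivalent to -log softmax(pos) within the sampled subset,
    computed in a numerically stable (max-shifted) way.
    """
    scores = [pos_score] + list(neg_scores)
    m = max(scores)
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - pos_score

# Positive item scored well above two sampled negatives -> small loss.
loss = sampled_softmax_loss(5.0, [0.0, -1.0])
```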
0.12.0
0.11.0
✨ Highlights ✨
Transformer models are here!
BERT4Rec and SASRec are fully compatible with RecTools fit / recommend paradigm and require NO special data processing. We have proven top performance on public benchmarks. For details on models see Transformers Theory & Practice Tutorial.
Our transformer models are configurable, customizable, callback-friendly, checkpoints-included, logs-out-of-the-box, custom-validation-ready and multi-GPU-compatible! See the Transformers Advanced Training Guide and Transformers Customization Guide.
All updates
Added
- `SASRecModel` and `BERT4RecModel`: models based on the transformer architecture (#220)
- Transformers extended theory & practice tutorial, advanced training guide and customization guide (#220)
- `use_gpu` for PureSVD (#229)
- `from_params` method for models and `model_from_params` function (#252)
- `TorchRanker` ranker which calculates scores using torch. Supports GPU. (#251)
- `Ranker` ranker protocol which unifies ranker calls. (#251)
Changed
- `ImplicitRanker` `rank` method is compatible with the `Ranker` protocol. `use_gpu` and `num_threads` params moved from the `rank` method to `__init__`. (#251)
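A ranker protocol of this kind can be sketched with `typing.Protocol`: any ranker exposing the same call shape becomes interchangeable. The signature below is illustrative, not RecTools' actual `Ranker` interface:

```python
from typing import List, Protocol, Tuple

class Ranker(Protocol):
    def rank(self, scores: List[float], k: int) -> List[Tuple[int, float]]:
        """Return the top-k (index, score) pairs, best first."""
        ...

class SortRanker:
    """A trivial CPU ranker satisfying the protocol via plain sorting."""

    def rank(self, scores: List[float], k: int) -> List[Tuple[int, float]]:
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        return [(i, scores[i]) for i in order[:k]]

def top_items(ranker: Ranker, scores: List[float], k: int) -> List[int]:
    # Caller depends only on the protocol, not on a concrete ranker class.
    return [i for i, _ in ranker.rank(scores, k)]

best = top_items(SortRanker(), [0.1, 0.9, 0.4], k=2)
```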
New contributors
@nsundalov made their first contribution in #251
0.10.0
✨ Highlights ✨
Bayesian Personalized Ranking Matrix Factorization (BPR-MF) algorithm is now in the framework!
See model detail in our extended baselines tutorial
All updates
Added
- `ImplicitBPRWrapperModel` model with algorithm description in the extended baselines tutorial (#232, #239)
- All vector models and `EASEModel` support enabling ranking on GPU and selecting the number of threads for CPU ranking. Added `recommend_n_threads` and `recommend_use_gpu_ranking` parameters to `EASEModel`, `ImplicitALSWrapperModel`, `ImplicitBPRWrapperModel`, `PureSVDModel` and `DSSMModel`. Added `recommend_use_gpu_ranking` to `LightFMWrapperModel`. GPU and CPU ranking may produce different orderings of items with identical scores in the recommendation table, so the order of items in recommendations could change, since GPU ranking is now used as the default. (#218)
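BPR optimizes pairwise ranking: for each (user, positive, negative) triple it pushes the positive item's score above the negative's by maximizing log sigmoid(x_ui - x_uj). A tiny SGD sketch in pure Python, not the `implicit` library code that RecTools wraps:

```python
import math
import random

def bpr_step(user_vec, pos_vec, neg_vec, lr=0.05):
    """One BPR-MF SGD step on a (user, positive, negative) triple."""
    x_uij = sum(u * (p - n) for u, p, n in zip(user_vec, pos_vec, neg_vec))
    g = 1.0 / (1.0 + math.exp(x_uij))  # derivative of log sigmoid(x_uij)
    for f in range(len(user_vec)):
        u, p, n = user_vec[f], pos_vec[f], neg_vec[f]
        user_vec[f] += lr * g * (p - n)
        pos_vec[f] += lr * g * u
        neg_vec[f] -= lr * g * u

rng = random.Random(0)
user = [rng.gauss(0, 0.1) for _ in range(4)]
pos = [rng.gauss(0, 0.1) for _ in range(4)]
neg = [rng.gauss(0, 0.1) for _ in range(4)]

def score(u, v):
    return sum(a * b for a, b in zip(u, v))

before = score(user, pos) - score(user, neg)
for _ in range(200):
    bpr_step(user, pos, neg)
after = score(user, pos) - score(user, neg)
# Training widens the positive-negative score gap.
```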
0.9.0
✨ Highlights ✨
- Model initialisation from configs is introduced, as well as getting hyper-params and getting configs. We have one common function for all models: `model_from_config`, and we have methods for specific models: `from_config`, `get_config`, `get_params`.
- Model saving and loading is introduced with the `load_model` common function and the model methods `save` and `load`. Please see details for configs and save+load usage in the example. All models support the new functions except `DSSMModel`.
- `fit_partial` method is introduced for `ImplicitALSWrapperModel` and `LightFMWrapperModel`. These models can now resume training from the previous point.
- LightFM Python 3.12+ support!
All updates
Added
- `from_config`, `get_config` and `get_params` methods to all models except neural-net-based (#170)
- `fit_partial` implementation for `ImplicitALSWrapperModel` and `LightFMWrapperModel` (#203, #210, #223)
- `save` and `load` methods to all of the models (#206)
- Model configs example (#207, #219)
- `use_gpu` argument to `ImplicitRanker.rank` method (#201)
- `keep_extra_cols` argument to `Dataset.construct` and `Interactions.from_raw` methods. `include_extra_cols` argument to `Dataset.get_raw_interactions` and `Interactions.to_external` methods (#208)
- dtype adjustment to `recommend` and `recommend_to_items` methods of `ModelBase` (#211)
- `load_model` function (#213)
- `model_from_config` function (#214)
- `get_cat_features` method to `SparseFeatures` (#221)
- LightFM Python 3.12+ support (#224)
Fixed
- Implicit ALS matrix zero assignment size (#228)
Removed
- [Breaking] Python 3.8 support (#222)
New contributors
@spirinamayya made their first contribution in #211
@Waujito made their first contribution in #201
0.8.0
✨ Highlights ✨
Option for debiased calculation is added to all TruePositive-based metrics (both ranking & classification). See our Debiased metrics calculation user guide for full info. Pass `debias_config` during metric initialization to enable this feature.
All updates
Added
- `debias_config` parameter for classification and ranking metrics
- New parameter `is_debiased` to `calc_from_confusion_df` and `calc_per_user_from_confusion_df` methods of classification metrics, `calc_from_fitted` and `calc_per_user_from_fitted` methods of AUC and ranking (MAP) metrics, `calc_from_merged` and `calc_per_user_from_merged` methods of ranking (NDCG, MRR) metrics (#152)
- `nbformat >= 4.2.0` dependency to `[visuals]` extra (#169)
- `filter_interactions` method of `Dataset` (#177)
- `on_unsupported_targets` parameter to `recommend` and `recommend_to_items` model methods (#177)
- Use `nmslib-metabrainz` for Python 3.11 and higher (#180)
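Debiasing here targets popularity bias. One standard recipe, which the user guide describes in full (this sketch is only indicative, with made-up parameter names), is to downsample test interactions of items whose popularity exceeds an IQR-based threshold before computing a metric:

```python
import random

def downsample_popular(interactions, iqr_coef=1.5, seed=32):
    """Downsample interactions of overly popular items.

    Items whose interaction count exceeds Q3 + iqr_coef * IQR are
    randomly subsampled down to that threshold. Indicative sketch only.
    """
    counts = {}
    for _, item in interactions:
        counts[item] = counts.get(item, 0) + 1
    sorted_counts = sorted(counts.values())
    q1 = sorted_counts[int(0.25 * (len(sorted_counts) - 1))]
    q3 = sorted_counts[int(0.75 * (len(sorted_counts) - 1))]
    threshold = q3 + iqr_coef * (q3 - q1)
    rng = random.Random(seed)
    kept = []
    for item in counts:
        rows = [r for r in interactions if r[1] == item]
        if len(rows) > threshold:
            rows = rng.sample(rows, int(threshold))
        kept.extend(rows)
    return kept

# "hit" appears 8 times vs 1 each for the rest, so it gets downsampled.
data = [(u, "hit") for u in range(8)] + [(0, "a"), (1, "b"), (2, "c")]
kept = downsample_popular(data)
```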
Fixed
- `display()` method in `MetricsApp` (#169)
- `IntraListDiversity` metric computation in `cross_validate` (#177)
- Allow warp-kos loss for `LightFMWrapperModel` (#175)
Removed
- [Breaking] `assume_external_ids` parameter in `recommend` and `recommend_to_items` model methods (#177)