Releases: MobileTeleSystems/RecTools

0.17.0

03 Sep 09:29
6ca2532

0.16.0

27 Jul 11:23
46deae3

✨ Highlights ✨

HSTU architecture from "Actions Speak Louder than Words..." is now available in RecTools as HSTUModel, fully compatible with our fit / recommend paradigm and capable of context-aware recommendations.
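
Below is a minimal sketch of using the new model in the standard fit / recommend workflow. It assumes HSTUModel is exported from rectools.models like the other transformer models and that default hyper-parameters are acceptable for a toy run; the toy interactions are illustrative only.

```python
import pandas as pd

from rectools import Columns
from rectools.dataset import Dataset
from rectools.models import HSTUModel  # top-level export assumed, as for SASRecModel / BERT4RecModel

# Toy interactions in the standard RecTools format
interactions = pd.DataFrame(
    {
        Columns.User: [1, 1, 2, 2, 3],
        Columns.Item: [10, 20, 10, 30, 20],
        Columns.Weight: [1, 1, 1, 1, 1],
        Columns.Datetime: pd.to_datetime(
            ["2025-01-01", "2025-01-02", "2025-01-03", "2025-01-04", "2025-01-05"]
        ),
    }
)
dataset = Dataset.construct(interactions)

# HSTU follows the usual fit / recommend paradigm
model = HSTUModel()  # hyper-parameters omitted; defaults assumed for the sketch
model.fit(dataset)
recos = model.recommend(users=[1, 2, 3], dataset=dataset, k=5, filter_viewed=True)
print(recos.head())
```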

Added

  • HSTU Model from "Actions Speak Louder than Words..." implemented in the class HSTUModel (#290)
  • leave_one_out_mask function (rectools.models.nn.transformers.utils.leave_one_out_mask) for applying leave-one-out validation during transformer model training; see the sketch after this list. (#292)
  • logits_t argument to TransformerLightningModuleBase. It is used to scale logits when computing the loss. (#290)
  • use_scale_factor argument to LearnableInversePositionalEncoding. It scales embeddings by the square root of their dimension, following the original approach from "Attention Is All You Need" (#290)
  • Optional context argument to the recommend method of models and get_context function in rectools.dataset.context (#290)
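
A short sketch of using the new helper as a validation-mask function for transformer training; the get_val_mask_func parameter name is taken from the existing transformer model API, and the exact validation behaviour described in the comments is an assumption.

```python
from rectools.models import SASRecModel
from rectools.models.nn.transformers.utils import leave_one_out_mask

# Hold out the last interaction of each user as the validation target
# during training (assumed behaviour of the helper).
model = SASRecModel(
    get_val_mask_func=leave_one_out_mask,  # parameter name assumed from the transformer model API
)
# model.fit(dataset) will then run validation on the held-out interactions.
```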

Fixed

  • [Breaking] Corrected computation of cosine distance in DistanceSimilarityModule (#290)
  • Installation issue with cupy extra on macOS (#293)
  • torch.dtype object has no attribute 'kind' error in TorchRanker (#293)

Removed

  • [Breaking] Dropout module from IdEmbeddingsItemNet. This changes model behaviour during training, so model results starting from this release might slightly differ from previous RecTools versions even when the random seed is fixed. (#290)

All contributors

@teodor-r @feldlime @blondered

0.15.0

17 Jul 10:31
af43135

Added

  • Support for resaving transformer models multiple times and loading trainer state (#289)
  • extras argument to SequenceDataset, extra_cols argument to TransformerDataPreparatorBase, session_tower_forward and item_tower_forward methods to SimilarityModuleBase (#287)

Fixed

  • [Breaking] LastNSplitter now guarantees taking the last interaction in dataframe order in case of identical timestamps (#288)

All contributors

@spirinamayya @nsundalov @teodor-r

0.14.0

16 May 11:11
ea266cd

Added

  • Python 3.13 support (#227)
  • fit_partial implementation for Transformer-based models (#273)
  • map_location and model_params_update arguments for the function load_from_checkpoint for Transformer-based models. Use map_location to explicitly specify the new computing device and model_params_update to update original model parameters (e.g. remove training-specific parameters that are not needed anymore); see the sketch after this list (#281)
  • get_val_mask_func_kwargs and get_trainer_func_kwargs arguments for Transformer-based models to allow keyword arguments in custom functions used for model training. (#280)
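
A sketch of restoring a transformer model from a Lightning checkpoint with the new arguments, assuming load_from_checkpoint is called on the model class; the checkpoint path and the keys passed via model_params_update are hypothetical.

```python
from rectools.models import SASRecModel

# Restore a trained model on CPU and drop a training-only parameter.
model = SASRecModel.load_from_checkpoint(
    "checkpoints/sasrec_epoch_3.ckpt",                # hypothetical path
    map_location="cpu",                               # torch-style device mapping
    model_params_update={"get_trainer_func": None},   # hypothetical update: clear a training-specific param
)
# The restored model can be used for recommend() as usual.
```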

All contributors

@chezou @spirinamayya @teodor-r

0.13.0

10 Apr 14:41
6ffd25b

✨ Highlights ✨

Transformer models get more customization options, including negative sampling, similarity functions and the backbone model. Sampled softmax loss is now supported by all transformer models, and a cosine similarity function is now available out of the box.
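
A minimal sketch of opting into the new loss; only the loss string comes from this release, and the other customization hooks mentioned in the comments are listed below.

```python
from rectools.models import BERT4RecModel

# Switch training to the new sampled softmax loss.
# Negative sampling and the similarity module can be customized further via
# negative_sampler_type / negative_sampler_kwargs and
# similarity_module_type / similarity_module_kwargs (see the list below).
model = BERT4RecModel(loss="sampled_softmax")
```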

Added

  • TransformerNegativeSamplerBase and CatalogUniformSampler classes, negative_sampler_type and negative_sampler_kwargs parameters to transformer-based models (#275)
  • SimilarityModuleBase and DistanceSimilarityModule classes, a similarity module in TransformerTorchBackbone, and similarity_module_type and similarity_module_kwargs parameters for transformer-based models (#272)
  • TransformerBackboneBase, backbone_type and backbone_kwargs parameters to transformer-based models (#277)
  • sampled_softmax loss option for transformer models (#274)
  • out_dim property to IdEmbeddingsItemNet, CatFeaturesItemNet and SumOfEmbeddingsConstructor (#276)

All contributors

@spirinamayya @In48semenov @blondered

0.12.0

24 Feb 13:28
c5dc5f7

All updates

Added

  • CatalogCoverage metric (#266, #267)
  • divide_by_achievable argument to NDCG metric (#266)

Changed

  • Interactions extra columns are no longer dropped in the Dataset.filter_interactions method (#267)

0.11.0

17 Feb 15:58
e8728b3

✨ Highlights ✨

Transformer models are here!

BERT4Rec and SASRec are fully compatible with the RecTools fit / recommend paradigm and require NO special data processing. We have proven top performance on public benchmarks. For details on the models, see the Transformers Theory & Practice Tutorial.

Our transformer models are configurable, customizable, callback-friendly, checkpoints-included, logs-out-of-the-box, custom-validation-ready, multi-gpu-compatible! See the Transformers Advanced Training Guide and the Transformers Customization Guide.

All updates

Added

  • SASRecModel and BERT4RecModel - models based on transformer architecture (#220)
  • Transformers extended theory & practice tutorial, advanced training guide and customization guide (#220)
  • use_gpu for PureSVD (#229)
  • from_params method for models and model_from_params function (#252)
  • TorchRanker ranker which calculates scores using torch and supports GPU (#251)
  • Ranker protocol which unifies ranker calls (#251)

Changed

  • ImplicitRanker rank method is now compatible with the Ranker protocol. use_gpu and num_threads params moved from the rank method to __init__ (#251)

New contributors

@nsundalov made their first contribution in #251

All contributors

@feldlime @blondered @spirinamayya @chezou @In48semenov

0.10.0

16 Jan 20:21
fb59ed6

✨ Highlights ✨

The Bayesian Personalized Ranking Matrix Factorization (BPR-MF) algorithm is now in the framework!
See model details in our extended baselines tutorial.
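
A sketch of the new wrapper, assuming it wraps an implicit BayesianPersonalizedRanking instance the same way ImplicitALSWrapperModel wraps AlternatingLeastSquares; the constructor shape is an assumption, while the recommend_use_gpu_ranking parameter comes from this release.

```python
from implicit.bpr import BayesianPersonalizedRanking

from rectools.models import ImplicitBPRWrapperModel

# Wrap the implicit BPR-MF implementation (constructor shape assumed to mirror the ALS wrapper).
bpr = BayesianPersonalizedRanking(factors=64, iterations=100, random_state=32)
model = ImplicitBPRWrapperModel(
    bpr,
    recommend_use_gpu_ranking=False,  # force CPU ranking; GPU ranking is the new default when available
)
# model.fit(dataset) and model.recommend(...) then follow the usual RecTools workflow.
```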

All updates

Added

  • ImplicitBPRWrapperModel model with algorithm description in the extended baselines tutorial (#232, #239)
  • All vector models and EASEModel now support ranking on GPU and selecting the number of threads for CPU ranking. Added recommend_n_threads and recommend_use_gpu_ranking parameters to EASEModel, ImplicitALSWrapperModel, ImplicitBPRWrapperModel, PureSVDModel and DSSMModel, and recommend_use_gpu_ranking to LightFMWrapperModel. GPU and CPU ranking may order items with identical scores differently, so recommendations may change since GPU ranking is now the default. (#218)

0.9.0

11 Dec 15:31
a1c1a2f

✨ Highlights ✨

  • Model initialisation from configs is introduced! You can also get hyper-params and configs back. There is one common function for all models, model_from_config, and per-model methods from_config, get_config and get_params.
  • Model saving and loading is introduced with the common load_model function and the model methods save and load.
    Please see details on configs and save/load usage in the example and in the sketch after these highlights. All models except DSSMModel support the new functions.
  • fit_partial method is introduced for ImplicitALSWrapperModel and LightFMWrapperModel. These models can now resume training from the previous point.
  • LightFM Python 3.12+ support!
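
A sketch of the new config and persistence workflow; the config layout (a "cls" key plus hyper-parameters), the import locations and the file name are assumptions.

```python
from rectools.models import load_model, model_from_config

# Build a model from a plain dict config (the "cls" key layout is assumed here).
config = {"cls": "PureSVDModel", "factors": 32}
model = model_from_config(config)

# Inspect hyper-params and the full config of the constructed model.
print(model.get_params())
print(model.get_config())

# Persist and restore with the new save / load_model helpers.
model.save("pure_svd_model")          # hypothetical file name
restored = load_model("pure_svd_model")
```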

All updates

Added

  • from_config, get_config and get_params methods to all models except neural-net-based (#170)
  • fit_partial implementation for ImplicitALSWrapperModel and LightFMWrapperModel (#203, #210, #223)
  • save and load methods to all of the models (#206)
  • Model configs example (#207,#219)
  • use_gpu argument to ImplicitRanker.rank method (#201)
  • keep_extra_cols argument to Dataset.construct and Interactions.from_raw methods. include_extra_cols argument to Dataset.get_raw_interactions and Interactions.to_external methods (#208)
  • dtype adjustment to recommend, recommend_to_items methods of ModelBase (#211)
  • load_model function (#213)
  • model_from_config function (#214)
  • get_cat_features method to SparseFeatures (#221)
  • LightFM Python 3.12+ support (#224)

Fixed

  • Implicit ALS matrix zero assignment size (#228)

Removed

  • [Breaking] Python 3.8 support (#222)

New contributors

@spirinamayya made their first contribution in #211
@Waujito made their first contribution in #201

0.8.0

28 Aug 14:57
8a3b716

✨ Highlights ✨

Debiased calculation is now an option for all of the TruePositive-based metrics (both ranking & classification). See our Debiased metrics calculation user guide for full info. Pass debias_config during metric initialization to enable this feature.
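
A sketch of enabling the feature; the DebiasConfig helper, its import location and its fields shown here are assumptions.

```python
from rectools.metrics import MAP, Precision, DebiasConfig  # DebiasConfig import location assumed

# Pass debias_config at metric initialization to enable debiased calculation.
debias = DebiasConfig(iqr_coef=1.5, random_state=32)  # field names assumed

precision = Precision(k=10, debias_config=debias)
map_metric = MAP(k=10, debias_config=debias)

# metric.calc(reco, interactions) is then computed on the debiased interactions.
```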

All updates

Added

  • debias_config parameter for classification and ranking metrics.
  • New parameter is_debiased to calc_from_confusion_df, calc_per_user_from_confusion_df methods of classification metrics, calc_from_fitted, calc_per_user_from_fitted methods of AUC and ranking (MAP) metrics, calc_from_merged, calc_per_user_from_merged methods of ranking (NDCG, MRR) metrics (#152)
  • nbformat >= 4.2.0 dependency to [visuals] extra (#169)
  • filter_interactions method of Dataset (#177)
  • on_unsupported_targets parameter to recommend and recommend_to_items model methods (#177)
  • Use nmslib-metabrainz for Python 3.11 and above (#180)

Fixed

  • display() method in MetricsApp (#169)
  • IntraListDiversity metric computation in cross_validate (#177)
  • Allow warp-kos loss for LightFMWrapperModel (#175)

Removed

  • [Breaking] assume_external_ids parameter in recommend and recommend_to_items model methods (#177)

New contributors

@chezou made their first contribution in #180 and #175