Blog

  • information-retrieval-tutorial-terrier-cranfield

    📚 Information Retrieval Tutorial using Terrier with Cranfield collection

    Information Retrieval Tutorial using Terrier

    In this tutorial we present a starter guide to understanding Information Retrieval in practice, using the Terrier platform with the Cranfield dataset 🔗.

    1. Requirements

    We will use version 5.5 of Terrier (the latest stable version at the time of writing). This version requires Java 11 or greater. The java command should also be recognized in the operating system's PATH.

    Terrier will be used on Windows in this tutorial. However, all the steps are similar on Linux or macOS, since Terrier is written in Java and is portable.

    You can test whether Java is correctly configured by launching a terminal on Windows (or Linux) and typing this command:

    > java -version
    

    The currently installed version of Java should be displayed:

    java version "17.0.1" 2021-10-19 LTS
    Java(TM) SE Runtime Environment (build 17.0.1+12-LTS-39)
    

    ⚠️ If you have a version of Java older than 11, you should uninstall it and install a newer one. Find more details on the Oracle website.

    2. How to install Terrier v5.5?

    You just need to download the binary zip for Windows (named terrier-project-5.5-bin.zip) from the Terrier website and extract it, for example, under C:\

    Rename the extracted Terrier folder to terrier-project-5.5

    You should have subfolders as follows:

    C:\terrier-project-5.5
            ├───bin
            ├───doc
            ├───etc
            ├───modules
            ├───share
            ├───src
            └───var

    3. What is the Cranfield dataset?

    Cranfield will be used as an example of a test collection. It is a small corpus of 1,400 scientific abstracts and 225 queries. It is considered one of the first Information Retrieval initiatives, used to perform IR experiments in the 1960s.

    Like any other test collection, Cranfield is composed of three parts:

    1. documents to be indexed
    2. a set of test queries (also known as topics)
    3. relevance judgment file (also known as Qrels)

    ⚠️ We suggest experimenting with the Cranfield collection for learning purposes only. The collection (with only 1,400 documents) is not representative of the large-scale IR collections used nowadays.

    The Terrier platform supports all TREC-formatted collections. All Information Retrieval tasks, from indexing and retrieval to evaluation, can be performed easily on TREC collections. For this purpose, we transformed the original Cranfield dataset to TREC format.

    The required Cranfield files and detailed documentation can be found in this repository 🔗

    We suggest extracting all downloaded files into a dedicated folder named cranfield under C:\

    C:\cranfield
            ├───cran.all.1400.xml
            ├───cran.qry.xml
            └───cranqrel.trec.txt

    4. Indexing documents

    1. Using the command line, go to the Terrier bin folder.
      > cd C:\terrier-project-5.5\bin
    
    2. Set up Terrier for using the Cranfield TREC test collection by using the trec_setup.bat script
      > trec_setup.bat C:\cranfield
    

    This will result in the creation of a collection.spec file in the etc Terrier directory. This file contains a list of the document files contained in the specified Cranfield corpus directory.

    We should modify the collection.spec file by removing the files that we do not want to index (cran.qry.xml and cranqrel.trec.txt), as suggested in the trec_setup.bat output:

    ...
    Updated collection.spec file. Please check that it contains all and only all
    files to indexed, or create it manually
    
    collection.spec:
    ----------------------------------------------
    #add the files to index
    C:\cranfield\cran.all.1400.xml
    C:\cranfield\cran.qry.xml
    C:\cranfield\cranqrel.trec.txt
    ----------------------------------------------
    

    The collection.spec file should contain only the path of the 1,400-document file to be indexed, as follows:

    C:\cranfield\cran.all.1400.xml
    

    Now we are ready to begin indexing the collection. This is achieved using the batchindexing command of the terrier script, as follows:

    > terrier batchindexing
    
    ...
    14:25:23.611 [main] INFO  o.terrier.indexing.CollectionFactory - Finished reading collection specification
    14:25:23.625 [main] INFO  o.t.i.MultiDocumentFileCollection - TRECCollection 0% processing C:\cranfield\cran.all.1400.xml
    14:25:23.737 [main] INFO  o.t.structures.indexing.Indexer - creating the data structures data_1
    14:25:23.756 [main] INFO  o.t.s.indexing.LexiconBuilder - LexiconBuilder active - flushing every 100000 documents, or when memory threshold hit
    14:25:24.778 [main] WARN  o.t.structures.indexing.Indexer - Adding an empty document to the index (471) - further warnings are suppressed
    14:25:25.278 [main] INFO  o.t.structures.indexing.Indexer - Collection #0 took 1 seconds to index (1400 documents)
    14:25:25.281 [main] WARN  o.t.structures.indexing.Indexer - Indexed 2 empty documents
    14:25:25.349 [main] INFO  o.t.s.indexing.BaseMetaIndexBuilder - ZstdMetaIndexBuilder meta achieved compression ratio 3.1755476 (> 1 is better)
    14:25:25.418 [main] INFO  o.t.s.indexing.LexiconBuilder - 1 lexicons to merge
    14:25:25.424 [main] INFO  o.t.s.indexing.LexiconBuilder - Optimising structure lexicon
    14:25:25.428 [main] INFO  o.t.s.i.FSOMapFileLexiconUtilities - Optimising lexicon with 6452 entries
    14:25:25.696 [main] INFO  o.t.structures.indexing.Indexer - Started building the inverted index...
    14:25:25.722 [main] INFO  o.t.s.i.c.InvertedIndexBuilder - BasicMemSizeLexiconScanner: lexicon scanning until approx 1.4 GiB of memory is consumed
    14:25:25.733 [main] INFO  o.t.s.i.c.InvertedIndexBuilder - Iteration 1 of 1 (estimated) iterations
    14:25:25.982 [main] INFO  o.t.s.indexing.LexiconBuilder - Optimising structure lexicon
    14:25:25.983 [main] INFO  o.t.s.i.FSOMapFileLexiconUtilities - Optimising lexicon with 6452 entries
    14:25:26.006 [main] INFO  o.t.structures.indexing.Indexer - Finished building the inverted index...
    14:25:26.007 [main] INFO  o.t.structures.indexing.Indexer - Time elapsed for inverted file: 0
    Total time elaped: 2 seconds
    ...

    With Terrier’s default settings, the resulting index will be created in the var\index folder within the Terrier installation folder.

    Once indexing completes, you can verify your index by obtaining its statistics, using the indexstats command of Terrier.

      > terrier indexstats
    
      Collection statistics:
      number of indexed documents: 1400
      size of vocabulary: 6452
      number of tokens: 145321
      number of postings: 89178
      number of fields: 0
      field names: []
      blocks: false

    This displays the number of documents, tokens, and terms found in the created index.

    The list of available Terrier commands can be displayed by typing terrier, as follows:

    > terrier
    
    Popular commands:
            batchevaluate           platform        evaluate all run result files in the results directory
            batchindexing           platform        allows a static collection of documents to be indexed
            batchretrieval          platform        performs a batch retrieval "run" over a set of queries
            help                    platform        provides a list of available commands
            interactive             platform        runs an interactive querying session on the commandline
    
    All possible commands:
            batchevaluate           platform        evaluate all run result files in the results directory
            batchindexing           platform        allows a static collection of documents to be indexed
            batchretrieval          platform        performs a batch retrieval "run" over a set of queries
            help                    platform        provides a list of available commands
            help-aliases            platform        provides a list of all available commands and their aliases
            http                    platform        runs a simple JSP webserver, to serve results
            indexstats              platform        display the statistics of an index
            indexutil               platform        utilities for displaying the content of an index
            interactive             platform        runs an interactive querying session on the commandline
            inverted2direct         platform        makes a direct index from a disk index with only an inverted index
            jforests                platform        runs the Jforests LambdaMART LTR implementation
            pbatchretrieval         platform        performs a parallelised batch retrieval "run" over a set of queries
            rest-singleindex        platform        starts a HTTP REST server to serve a single index
            showdocument            platform        displays the contents of a document
            structuremerger         platform        merges 2 disk indices
            trec_eval               platform        runs the NIST standard trec_eval tool
    

    5. Querying

    We can use the terrier interactive command to query the index for results.

    > terrier interactive
    
    14:33:44.639 [main] INFO  o.t.s.BaseCompressingMetaIndex - Structure meta reading lookup file into memory
    14:33:44.646 [main] INFO  o.t.s.BaseCompressingMetaIndex - Structure meta loading data file into memory
    
    terrier query>

    Once the query prompt is displayed, you can enter some test queries. For example, let's test with study of temperature.

    terrier query> study of temperature
    14:38:20.625 [main] INFO  org.terrier.querying.LocalManager - Starting to execute query interactive4 - study of temperature
    ...
    14:38:20.652 [main] INFO  org.terrier.querying.LocalManager - Finished executing query interactive4 in 27ms - 462 results retrieved
    
            Displaying 1-462 results
    0 549 5.794936876261285
    1 865 5.491204121605947
    2 29 5.282352789647487
    3 90 5.080041785645402
    4 1269 5.04726774599607
    5 1314 4.77766074135166
    6 912 4.72547801712598
    7 836 4.719158743858431
    8 407 4.4903944412638035
    9 585 4.449015692155355
    10 293 4.294205418924092
    11 967 4.231562902060937
    12 550 4.231495541986411
    ...
    

    In response to the query study of temperature, Terrier estimated the document with id n° 549 to be the most relevant, with a score of 5.79.

    The id n° 549 was read from the <docno> tag of the corresponding document.

    To quit the query prompt, just enter "exit" as the query term.

    Terrier supports a specific query language to exclude and weight terms. Find details in the query language documentation.
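    For example (an illustrative sketch based on the Terrier query language documentation; exact operator support may vary between versions), terms can be required, excluded, or weighted directly at the interactive prompt:

```
terrier query> +temperature -pressure study^2.5
```

    Here +temperature requires the term, -pressure excludes documents containing it, and study^2.5 boosts the weight of the term study.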

    6. What about stop words and stemming?

    Terrier uses a term pipeline specified in terrier.properties under the etc folder.

    The default term pipeline is configured to use Stopwords and PorterStemmer, as follows in terrier.properties (line 29):

    ...
    #the processing stages a term goes through
    termpipelines=Stopwords,PorterStemmer
    ...

    On the one hand, the list of English stop words that will be ignored in the retrieval task is contained in the stopword-list.txt file under the share folder.

    Some words listed in this file: and, then, of, are, by, etc.

    When using another language, consider replacing this list with an appropriate one. Stopword lists for many other languages are freely available on the Internet.

    On the other hand, stemming truncates terms by removing prefixes and suffixes. This is applied in both the indexing and retrieval steps. PorterStemmer is one example of a stemming algorithm (SnowballStemmer is another commonly used one).

    In conclusion, if we focus on the query study of temperature used in the previous section, it gives exactly the same results as studying temperature. In fact, of is considered a stop word, and studying is stemmed by removing the -ing suffix, giving the same stem as study.

    💡 Most modern information retrieval systems implement such a strategy.

    7. Batch Retrieval

    7.1. Configuration

    First of all, we have to do some configuration. Much of Terrier's functionality is controlled by properties.

    You can pre-set these in the etc\terrier.properties file, or specify each on the command-line.

    Some commonly used properties have short options to set these on the command-line (use terrier help <command> to see these).

    To perform retrieval and evaluate the results of a batch of queries, we need to know:

    • The location of the queries (also known as topic files) – specified using the trec.topics property, or the short -t command-line option to batchretrieval.

    • The weighting model (e.g. TF_IDF) to use – specified using trec.model property or the -w option to batchretrieval. The default weighting model for Terrier is DPH.

    • The corresponding relevance assessments file (or qrels) for the topics – specified by trec.qrels or -q option to batchevaluate.

    To simplify command options in the following steps, we will add the two settings for queries (topics) and relevance assessments (qrels), which will not change, to the etc\terrier.properties file as follows

    (Lines to be added in etc\terrier.properties file)

    trec.topics=C:\\cranfield\\cran.qry.xml
    trec.qrels=C:\\cranfield\\cranqrel.trec.txt
    

    Note the double backslashes (\\) in paths, which escape the backslash character.

    7.2. Retrieval

    The batchretrieval command instructs Terrier to do a batch retrieval run, i.e. retrieving the documents estimated to be the most relevant for each query in the topics file. The trec.topics property was already added to the terrier.properties file at the end of the previous section.

    Run batch retrieval (it takes a few seconds to complete):

      > terrier batchretrieval 
        ...
     15:30:34.669 [main] INFO  o.t.matching.PostingListManager - Query 365 with 10 terms has 10 posting lists
    15:30:34.676 [main] INFO  org.terrier.querying.LocalManager - running process PostFilterProcess
    15:30:34.677 [main] INFO  org.terrier.querying.LocalManager - running process SimpleDecorateProcess
    15:30:34.677 [main] INFO  org.terrier.querying.LocalManager - Finished executing query 365 in 11ms - 996 results retrieved
    15:30:34.678 [main] INFO  o.t.a.batchquerying.TRECQuerying - Time to process query 365: 0.012
    15:30:34.708 [main] INFO  o.t.a.batchquerying.TRECQuerying - Settings of Terrier written to C:\terrier-project-5.5\var\results/DPH_0.res.settings
    15:30:34.717 [main] INFO  o.t.a.batchquerying.TRECQuerying - Finished topics, executed 225 queries in 5.37 seconds, results written to C:\terrier-project-5.5\var\results/DPH_0.res
    

    This will result in a .res file in the var\results directory called DPH_0.res. Each .res file is called a run and contains Terrier's answers to each of the 225 queries.

    Sample of the .res file:

    TOPIC EXTRA-COLUMN DOCNO RANK RELEVANCE-SCORE MODEL
    1 Q0 51 0 19.800718637881634 DPH
    1 Q0 486 1 18.67288171863144 DPH
    1 Q0 12 2 16.191100749192266 DPH
    1 Q0 184 3 15.211695868416541 DPH
    1 Q0 878 4 14.29537290863017 DPH
    1 Q0 665 5 12.13011101490366 DPH
    1 Q0 746 6 11.926074045474698 DPH
    1 Q0 573 7 11.461285434379176 DPH
    1 Q0 141 8 11.264373653439682 DPH

    7.3. Changing weighting model

    You can also configure more options on the command-line, including arbitrary properties using the -D option to any Terrier command. So the following two commands are equivalent:

      > terrier batchretrieval -w BM25
      > terrier batchretrieval -Dtrec.model=BM25

    We have instructed Terrier to perform retrieval using the BM25 weighting model.

    BM25 is the classical Okapi model first defined by Stephen Robertson, whereas the default DPH is a Divergence From Randomness weighting model.

    After running batchretrieval for the second weighting model, we will have a second .res file in the var\results directory called BM25_1.res.

    Run names are incremented using a counter (stored in the querycounter file under var\results). We can produce as many runs as we want before performing evaluation, by changing weighting models and tuning the parameters of each one.

    Other parameters of weighting models can be specified with options (see further details about configuring retrieval in Terrier documentation)

    7.4. Evaluation

    Now we will evaluate the obtained results using the batchevaluate command:

      > terrier batchevaluate
    
    16:10:10.606 [main] INFO  o.t.evaluation.TrecEvalEvaluation - Evaluating result file: C:\terrier-project-5.5\var\results/BM25_1.res
    Average Precision: 0.0148
    16:10:11.359 [main] INFO  o.t.evaluation.TrecEvalEvaluation - Evaluating result file: C:\terrier-project-5.5\var\results/DPH_0.res
    Average Precision: 0.0132
    

    Terrier will look at the var\results directory, evaluate each .res file, and save the output in a .eval file named exactly like the corresponding .res file.

    C:\terrier-project-5.5\var\results
                                ├───BM25_1.eval
                                ├───BM25_1.res
                                ├───BM25_1.res.settings
                                ├───DPH_0.eval
                                ├───DPH_0.res
                                ├───DPH_0.res.settings
                                └───querycounter

    Now we can look at all the Mean Average Precision (MAP) values of the runs by inspecting the .eval files in var\results:

    runid	all	DPH
    num_q	all	152
    num_ret	all	129408
    num_rel	all	1074
    num_rel_ret	all	699
    map	all	0.0132
    gm_map	all	0.0024
    Rprec	all	0.0132
    bpref	all	0.4007
    recip_rank	all	0.0336
    iprec_at_recall_0.00	all	0.0382
    iprec_at_recall_0.10	all	0.0351
    iprec_at_recall_0.20	all	0.0235
    iprec_at_recall_0.30	all	0.0164
    iprec_at_recall_0.40	all	0.0149
    iprec_at_recall_0.50	all	0.0130
    iprec_at_recall_0.60	all	0.0100
    iprec_at_recall_0.70	all	0.0071
    iprec_at_recall_0.80	all	0.0056
    iprec_at_recall_0.90	all	0.0039
    iprec_at_recall_1.00	all	0.0028
    P_5	all	0.0132
    P_10	all	0.0125
    P_15	all	0.0118
    P_20	all	0.0125
    P_30	all	0.0112
    P_100	all	0.0080
    P_200	all	0.0069
    P_500	all	0.0056
    P_1000	all	0.0046
    

    The evaluation measures displayed above are averaged over the batch of queries. The results are not satisfying, with a very low Mean Average Precision of 0.0132. We should therefore investigate further by running the evaluation per query.

    We can obtain per-query results by using the -p option on the command line:

      > terrier batchevaluate -p
    

    Old .eval files should be deleted before running with the -p option.

    The resulting output saved in the corresponding .eval files will contain further results per query, with the middle column indicating the query id.

    Let's consider query n°1 (see cran.qry.xml)

    <top>
    <num> 1</num> 
    <title>
    what similarity laws must be obeyed when constructing aeroelastic models of heated high speed aircraft .
    </title>
    </top>

    This is a sample for query n°1 in the generated per-query .eval file:

    num_ret	1	825
    num_rel	1	28
    num_rel_ret	1	25
    map	1	0.2334
    Rprec	1	0.3214
    bpref	1	0.0357
    recip_rank	1	1.0000
    iprec_at_recall_0.00	1	1.0000
    iprec_at_recall_0.10	1	0.7500
    iprec_at_recall_0.20	1	0.3684
    iprec_at_recall_0.30	1	0.3214
    iprec_at_recall_0.40	1	0.2727
    iprec_at_recall_0.50	1	0.1562
    iprec_at_recall_0.60	1	0.0960
    iprec_at_recall_0.70	1	0.0781
    iprec_at_recall_0.80	1	0.0413
    iprec_at_recall_0.90	1	0.0000
    iprec_at_recall_1.00	1	0.0000
    P_5	1	0.6000
    P_10	1	0.3000
    P_15	1	0.3333
    P_20	1	0.3500
    P_30	1	0.3000
    P_100	1	0.1500
    P_200	1	0.0850
    P_500	1	0.0420
    P_1000	1	0.0250
    

    We can see that this query returned 25 of the 28 relevant documents judged in the qrels file. The average precision (AP) of this query is 0.2334. This query performs well at different precision/recall points (see the iprec_at_recall_ values, which can be used to plot a Precision-Recall curve).

    However, some queries return no relevant documents (or return them only at the last ranks), which leads to a low overall MAP value.

    8. Credits

    If you find this tutorial useful, consider sharing/citing it and starring it on GitHub.

    Visit original content creator repository https://github.com/oussbenk/information-retrieval-tutorial-terrier-cranfield
  • hold-on


    Hold-on

    Use case

    This package can be used in the following scenario:

    1. You have a costly function: time consuming, heavy CPU or IO usage
    2. You need to perform that function frequently
    3. The result of your function can change over time
    4. You can tolerate some (configurable) inconsistency
    5. You want to optimize that process

    How it works

    It stores the result of your function in memory for immediate access, and clears that memory after a specified time. It returns a function that can be used instead of your original one.

    const hold = require('@wjsc/hold-on');
    const myOptimizedFunction = hold(<Your Function>, <Time in milliseconds>);
    myOptimizedFunction();
    
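    To make the behavior concrete, here is a minimal sketch of how such a memoizer-with-expiry could be implemented. This is illustrative only, not the library's actual source; the names and details are assumptions.

```javascript
// Minimal sketch of a hold-like memoizer with expiry (NOT the real
// @wjsc/hold-on source; names and details are illustrative).
function hold(fn, ms) {
  let cached;        // last computed result
  let fresh = false; // whether `cached` is still valid
  const wrapped = (...args) => {
    if (!fresh) {
      cached = fn(...args); // recompute only when the cache has expired
      fresh = true;
      // keep a handle on the timer so a caller could cancel it
      wrapped.interval = setTimeout(() => { fresh = false; }, ms);
    }
    return cached; // immediate access within the expiry window
  };
  return wrapped;
}
```

    Every call within ms milliseconds returns the cached result; the first call after expiry runs the original function again.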

    Usage

    1. First example

    const hold = require('@wjsc/hold-on');
    
    // Define your costly function: Let's suppose it's so heavy!
    const myFunction = () => new Date(); 
    
    // Make a new version of your function with 500 ms cache
    const myOptimizedFunction = hold(myFunction, 500);
    
    // This code will execute new Date() only once
    for(let i = 0; i<50; i++){
        // And it always prints the same date
        console.log(myOptimizedFunction());
    }
    

    2. Second example: Retrieving a remote resource

    const hold = require('@wjsc/hold-on');
    // Any HTTP client
    const fetch = require('node-fetch');
    
    const myFunction = () => fetch('https://httpstat.us/200')
                             .then(res => res.text());
    const myOptimizedFunction = hold(myFunction, 5000);
    
    // This code will execute the HTTP GET only once
    for(let i = 0; i<50; i++){
        myOptimizedFunction()
        .then(console.log);
    }
    // If you call the function after 5000 ms
    // the request will be executed again
    
    

    3. Third example: Cache file from local storage

    const hold = require('@wjsc/hold-on');
    const fs = require('fs');
    const myFunction = () => new Promise((resolve, reject) => {
        fs.readFile('./my-file', 'utf8', (err, data) => 
            err ? reject(err) : resolve(data)
        )
    })
    const myOptimizedFunction = hold(myFunction, 5000);
    myOptimizedFunction().then(console.log);
    
    

    4. Fourth example: It's also great to cache a file from a remote Storage such as S3

    const hold = require('@wjsc/hold-on');
    const aws = require('aws-sdk');
    aws.config.update({ 
        secretAccessKey: 'ABCDE',
        accessKeyId: '12345'
    
    })
    const s3 = new aws.S3();
    
    const myFunction = () => {
        return new Promise((resolve, reject) => {
            s3.getObject({
                Bucket: 'my-bucket',
                Key: 'my-file'
            }, (err, data) => {
                if ( err ) reject(err)
                else resolve(data.Body.toString())
            })
        })
    }
    const myOptimizedFunction = hold(myFunction, 5000);
    myOptimizedFunction().then(console.log);
    
    

    100% Tested Library

    Every line of code is tested https://github.com/wjsc/hold-on/blob/master/test/index.test.js

    Tiny size

    Less than 20 lines of code and no dependencies

    Advanced

    How to force termination

    This function uses setTimeout to clear the internal cache. In some cases, you may need to clear this timer. This can be useful if you are running a script that doesn't end at the desired time, or if you want to terminate a background timer.

    const myFunction = () => {};
    const myOptimizedFunction = hold(myFunction, 100000000);
    clearInterval(myOptimizedFunction.interval);
    

    How to clear the memory cache

    Just use the original function, or create a new function version.

    Package name reference: https://www.youtube.com/watch?v=WPnOEiehONQ

    Visit original content creator repository https://github.com/wjsc/hold-on
  • zcached

    zcached – A Lightweight In-Memory Cache System

    Welcome to zcached, a nimble and efficient in-memory caching system resembling databases like Redis. This README acts as a comprehensive guide, aiding in comprehension, setup, and optimal utilization.


    Introduction

    zcached aims to offer rapid, in-memory caching akin to widely-used databases such as Redis. Its focus lies in user-friendliness, efficiency, and agility, making it suitable for various applications requiring swift data retrieval and storage.

    Crafted using Zig, a versatile, modern, compiled programming language, zcached prides itself on a zero-dependency architecture. This unique feature enables seamless compilation and execution across systems equipped with a Zig compiler, ensuring exceptional portability and deployment ease.

    Features

    • Zero-Dependency Architecture: Entirely built using Zig, ensuring seamless execution across systems with a Zig compiler, enhancing portability (except OpenSSL, but it's optional).
    • Lightweight Design: Engineered for efficiency, zcached boasts a small memory footprint and minimal CPU usage, optimizing performance while conserving resources.
    • Optimized Efficiency: Prioritizing swift data handling, zcached ensures prompt operations to cater to diverse application needs.
    • Diverse Data Type Support: Accommodates various data structures like strings, integers, floats, and lists, enhancing utility across different use cases.
    • Evented I/O and Multithreading: Leveraging evented I/O mechanisms and multithreading capabilities, zcached efficiently manages concurrent operations, enhancing responsiveness and scalability.
    • TLS Support: Ensures secure data transmission with encryption, protecting data integrity and confidentiality during client-server communication.

    Usage

    While zcached lacks a CLI, you can utilize nc (netcat) from the terminal to send commands to the server.

    SET

    Set a key to hold a string value. If the key already holds a value, it is overwritten, regardless of its type.

    echo "*3\r\n\$3\r\nSET\r\n\$9\r\nmycounter\r\n:42\r\n\x03" | netcat -N localhost 7556
    echo "*3\r\n\$3\r\nSET\r\n\$9\r\nmycounter\r\n%2\r\n+first\r\n:1\r\n+second\r\n:2\r\n\x03" | netcat -N localhost 7556

    Command Breakdown:

    • *3\r\n – number of elements in the array (commands are always arrays)
    • \$3\r\nSET\r\n – $3 denotes that the following string is 3 bytes long; SET is the command
    • \$9\r\nmycounter\r\n – $9 means that the next string is 9 bytes long; mycounter is the key
    • :42\r\n – : indicates that the next token is a number; 42 is the value
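    As an illustration of how such a payload is assembled, the following small JavaScript helper (hypothetical, not part of zcached) builds the SET command string from the breakdown above, including the \x03 terminator used in the examples:

```javascript
// Hypothetical helper (not part of zcached) that assembles the
// RESP-like SET payload described above.
function buildSet(key, value) {
  return '*3\r\n' +                       // array of 3 elements
         '$3\r\nSET\r\n' +                // 3-byte command name
         `$${key.length}\r\n${key}\r\n` + // length-prefixed key (ASCII assumed)
         `:${value}\r\n` +                // integer value
         '\x03';                          // end-of-message marker
}
console.log(JSON.stringify(buildSet('mycounter', 42)));
```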

    GET

    Retrieve the value of a key. If the key doesn't exist, -not found is returned. GET only accepts strings as keys.

    echo "*2\r\n\$3\r\nGET\r\n\$9\r\nmycounter\r\n\x03" | netcat -N localhost 7556

    PING

    Returns PONG. This command is often used to test whether a connection is still alive, or to measure latency.

    echo "*1\r\n\$4\r\nPING\r\n\x03" | netcat -N localhost 7556

    Running Tests

    Run the tests using zig in the root directory of the project:

    zig build test

    Documentation

    Contributing

    Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

    Visit original content creator repository https://github.com/sectasy0/zcached
  • BotanicalLens

    Botanical Lens

    Plant Recognition Web Application

    Project Overview

    Botanical Lens is a progressive web application designed to help users record and identify plants. Users can add new plant sightings, view plants added by themselves or other users, and comment on sightings. The application leverages Node.js, Express, MongoDB, and integrates with the DBPedia knowledge graph for plant identification.

    Features

    • View Plants: Sorted by date/time seen and identification status. Optionally sort by distance.
    • Add Plant Sightings: Includes date/time, location, description, size, characteristics, identification, photo, and user's nickname.
    • Plant Details & Chat: Detailed view with public chat for each sighting, allowing real-time discussions.
    • Offline Support: Create and manage sightings and chats while offline, with data synchronization upon reconnection.

    Demo

    Watch the demo video here.

    Initial Wireframe

    View the initial wireframe here.

    Installation and Setup

    Prerequisites

    • Node.js (version 14.x or higher)
    • MongoDB (local or remote instance)

    Steps to Run the Application

    1. Clone the repository:

      git clone https://github.com/Vivek-Tate/BotanicalLens.git
      cd TeamMsc12
    2. Move to the solution folder:

      cd solution
    3. Install the dependencies:

      npm install
    4. Start the application:

      npm start
    5. Access the application:
      Open your browser and navigate to http://localhost:3000.

    Code Structure

    Web Application

    • Frontend: EJS, JavaScript, HTML5, CSS
    • Backend: Node.js, Express
    • Real-time Chat: Socket.io
    • Database: MongoDB for plant and chat data, IndexedDB for offline storage

    Key Functionalities

    1. Adding Plant Sightings: Form to submit plant details, location, photo, and identification status.
    2. Viewing Plant Sightings: List and detail view with sorting and filtering options.
    3. Chat System: Real-time chat for each plant sighting, with offline message support.
    4. Offline Functionality: Store new sightings and chats locally, sync when online.

    DBPedia Integration

    • Fetch plant information using SPARQL queries.
    • Display common name, scientific name, description, and URI from DBPedia in the UI.
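    As a sketch of what such a lookup could look like (hypothetical code, not the project's actual implementation; the endpoint usage is real DBpedia, but the query shape and property names are assumptions), a SPARQL request URL might be built like this:

```javascript
// Hypothetical sketch (not BotanicalLens's actual code) of building a
// SPARQL query URL against DBpedia's public endpoint.
function buildPlantQueryUrl(name) {
  // Look up a resource by English label and fetch its English abstract;
  // dbo:/rdfs: prefixes are predefined on the DBpedia endpoint.
  const query = `
    SELECT ?plant ?abstract WHERE {
      ?plant rdfs:label "${name}"@en ;
             dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    } LIMIT 1`;
  return 'https://dbpedia.org/sparql?format=json&query=' +
         encodeURIComponent(query);
}

// The app could then fetch(buildPlantQueryUrl('Rose')) and display the
// returned name, abstract, and URI in the UI.
console.log(buildPlantQueryUrl('Rose'));
```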

    Documentation and Code Quality

    • Inline Comments: Descriptions within the code for clarity.
    • Higher-level Documentation: Detailed comments and documentation files explaining the codebase.
    • GitHub Commit History: Track the progress and contributions on GitHub.

    Screenshots and Videos

    • Include screenshots or a video demo in the <MainDirectory>/Screenshots folder.

    License

    See the LICENSE file for details.


    Feel free to reach out via the project's GitHub repository for any issues or contributions. Enjoy exploring BotanicalLens!

    Visit original content creator repository
    https://github.com/Vivek-Tate/BotanicalLens

  • dropi-replic

    Dropi.co – Replica with ReactJS

    Preview

    Dropi.co

    This project aims to replicate the existing website at https://dropi.co using advanced technologies such as ReactJS. The main objective is to transform the page into a Single Page Application (SPA) to improve the user experience and maintain visual and functional fidelity as close as possible to the original.

    Warning

    The project is carried out for educational purposes.

    Note

    Developed with React JS, CSS, HTML, Vite

    Why React?

    React is known as one of the strongest JavaScript libraries; it has great community support and makes it possible to build great applications.

    Why JavaScript?

    JavaScript is one of the most important languages in the frontend world, with a wide variety of frameworks and libraries that make it easy to create user interfaces.

    Currently the project is approximately 60% developed; it implements the main functions of the real page.

    React + Vite

    Tip

    If you clone the repository, install the necessary dependencies and run npm run dev in the terminal to start the project.

    This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.

    Currently, two official plugins are available:

    Visit original content creator repository https://github.com/Deyverson1/dropi-replic
  • scansreader

    [Legacy] Scansreader

    Warning: This is legacy code, no longer maintained, and it depends on a discontinued QIV library by Adam Kopacz. It may however be ported to the new QIV version?

    Scansreader is a β€œdo the right thing” lean and mean unix/linux X C program to read a set of images scanned from printed documents, such as comic books, as comfortably as possible. This page is at https://colas.nahaboo.net/Code/ScansReader

    Goals

    Scansreader was done to:

    • Be safe, with no functionality to accidentally modify or delete the images on disk.
    • Be usable, with only adapted functionalities for the task (no window mode, only fullscreen full width mode), with streamlined interaction (scroll only vertically, without clicking the mouse), automatic bookmarks in unlimited numbers…
    • Be fast, pure C, based on the fast QIV general image viewer, with added prefetching of next image.

    Philosophy

    When reading a book, I do not want to be bothered having to reach for the mouse or touchpad, especially when lounging on the couch with my notebook, and have to aim for menus, scrollbars, and buttons. Thus scansreader does not use any of these interaction methods, and is always fullscreen.

    It also does not provide things that belong to generic image manipulation programs like the GIMP: no thumbnails, no 2-page mode, no dialog box to open files (you just run scansreader via your favorite desktop program).

    Why the β€œS” in Scansreader instead of the more English-sounding Scanreader? Well, there were no Google hits for scansreader, but scanreader was already the name of many products… so I went with the less common name.

    Implementation

    I could not find a program satisfying these criteria. All involved too much interaction or option-setting to really read scans as painlessly as a real book.

    I stumbled on a nice and fast linux image viewer QIV by Adam Kopacz, and decided to use it as the base for scansreader, removing a lot of its code and adding functionalities. It makes use of the X, gdk and imlib libraries.

    More details in the README.txt and HISTORY.txt files.

    License

    GPLv3

    Download & Installation

    Just compile it (make), then install it (make install) or copy the compiled scansreader executable into your path, e.g. /usr/local/bin.

    User manual

    Just run scansreader. It will read all the images in the current directory (or the directory or zip, cbz, rar or cbr archive given as argument), and open the first one full screen, resized to the full width of the screen.

    • the -l option will re-open the last document that was opened the last time we quit the program.
    • -L fr will use the French language for online help texts.
    • --help prints the full usage, with more options

    Then, you can read your document with the following commands:

    • moving the mouse, using the mouse wheel, or pressing the up / down arrow keys will scroll the document vertically only, stopping at the top and bottom of the image; the end is indicated by a small yellow triangle at the bottom right or top left of the image.
    • SPACE will β€œdo the right thing”: move down the image, then go to the next one once at the bottom. If the image has been zoomed in and is wider than the screen, it goes first to the right, then down and back to the left, and so on.
    • left-click or right arrow key will go to next page,
    • right-click or left arrow key will go to previous page,
    • q or Escape will quit, appending the current position to the ~/.scansreader.log file
    • p and n will go to the previous or next saved place
    • 1,2,…,9,0 Goes to image at 0, 10, 20, …, 90, 100% of current set.
    • ? (or any other non-understood key) displays online help, with more commands
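    The SPACE behaviour described above can be sketched as a tiny next-view function. This is our own pseudologic in Python, not the actual C source:

```python
# Sketch of the SPACE "do the right thing" rule: pan right across a zoomed
# page, then down and back to the left edge, and advance to the next page
# only once the bottom of the image has been reached.
def next_view(x, y, img_w, img_h, scr_w, scr_h, page, step):
    if x + scr_w < img_w:                              # room to pan right
        return min(x + step, img_w - scr_w), y, page
    if y + scr_h < img_h:                              # drop one band, rewind left
        return 0, min(y + step, img_h - scr_h), page
    return 0, 0, page + 1                              # bottom reached: next page
```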

    Other scans and comics readers

    Scansreader can be used to read digitized comics. You may want to look at similar programs:

    History

    • 2021-12-13 moved to GitHub for reference, as I removed the Mercurial site

    • 2009-01-18, Wiki page + HG sources on my public Mercurial site

    • v1.14

      2006-03-05, nothing changed in the code, only the packaging:

      • the .tgz includes now a directory
      • compilation is now done on debian sarge (was woody)
      • static executable is no longer included in distrib, but in a separate tgz for a faster download of the distrib
    • v1.13

      2005-02-16,

      • no more “random” option
      • TAB command to toggle between full width and full screen
      • next image is prefetched and decoded while showing the current one for speed
      • Shift-Q now quits without saving in history
    • v1.12

      2005-02-13,

      • can decode on the fly zip and rar archives
    • v1.11 2004-05-09, bugfix for prev/next

    • v1.10 2004-03-12, history file format change

    • v1.9 2003-12-07, p and n navigates in history

    • v1.8 2003-12-05,

    • v1.7 2003-11-03, scrolls with mouse motions only, no need to click

    • v1.6 2003-11-02, french help, indicator when at top or bottom

    • v1.5 2003-11-01,

    • v1.4 2003-10-29,

    • v1.3 2003-07-02, removed windowed mode

    • v1.2 2003-07-01, first usable release

    Testimonies

    “Astor” wrote (in French) on some forum after finding scansreader:

    FranΓ§ais: fr.png

    scansreader est absolument gΓ©nial et totalement parfait, c’est exactement ce qu’il me fallait !

    C’est lΓ©ger, rapide, puissant, ergonomique, accessible, pratique, efficace, et en plus Γ§Γ  marche bien. Rien que Γ§Γ . J’ai essayΓ© sous Linux une tonne d’autres softs, mais aucun ne m’a jamais convenu, certains sont Γ©crits en Java avec les pieds et mettent 20 secondes sur ma machine Γ  afficher la premiΓ¨re page d’un .cbz (scanreader est instantanΓ©), d’autres ont une interface envahissante et discutable (scansreader est plein Γ©cran, comme CDisplay, et pour moi c’est l’idΓ©al), certains sont compliquΓ©s Γ  installer (bon c’est bien le java mais livrer juste les .class et vas y pour la commande de lancement trouve tout seul les .jar en librairies qu’il faut, non merci).

    Bref, un must. Merci beaucoup Γ  l’auteur.

    Merci aussi pour la version compilΓ©e en static, Γ§Γ  m’a bien aidΓ©.

    Here is my translation in English: us.png

    scansreader is absolutely awesome and totally perfect, it is exactly what I needed!

    It is light, fast, powerful, usable, accessible, handy, efficient, and moreover it works well. Nothing less. Under Linux I tried a ton of other programs, but none ever suited me; some are written in bad Java and take 20 seconds on my machine to show the first page of a .cbz (scansreader is instantaneous), others have a bulky and dubious interface (scansreader, like CDisplay, is fullscreen, and for me that is ideal), and some are complex to install (Java is great, but shipping only the .class files and leaving the user to figure out the magic launch command and the required .jar libraries, no thanks).

    In a nutshell, a must have. Thanks to the author.

    Thanks also for the statically compiled version, it helped me a lot.

    To be honest, “Astor” later found a shortcoming of scansreader: it uses the QIV image scaling functions that are fast but of low quality and can make noticeable artifacts (jagged lines) on some kind of pages.

    Visit original content creator repository https://github.com/ColasNahaboo/scansreader
  • zedio

    Zedio

    C++23 Platform

      ______  ______   _____    _____    ____  
     |___  / |  ____| |  __ \  |_   _|  / __ \ 
        / /  | |__    | |  | |   | |   | |  | |
       / /   |  __|   | |  | |   | |   | |  | |
      / /__  | |____  | |__| |  _| |_  | |__| |
     /_____| |______| |_____/  |_____|  \____/ 
                                                                           
    

    Documentation: https://8sileus.github.io/zedio/

    Zedio is an event-driven, header-only library for writing asynchronous applications in modern C++:

    Feature:

    • Multithreaded, work-stealing based task scheduler. (reference tokio)
    • Proactor event handling backed by io_uring.
    • Zero overhead abstraction, no virtual, no dynamic

    Sub library:

    • I/O
    • Networking
    • FileSystem
    • Time
    • Sync
    • Log

    It is under active development; if you’re interested in zedio and want to participate in its development, see contributing.

    Example

    // An echo server
    // Ignore all errors
    #include "zedio/core.hpp"
    #include "zedio/net.hpp"
    
    using namespace zedio;
    using namespace zedio::async;
    using namespace zedio::net;
    
    auto process(TcpStream stream) -> Task<void> {
        char buf[1024]{};
        while (true) {
            auto len = (co_await (stream.read(buf))).value();
            if (len == 0) {
                break;
            }
            co_await stream.write_all({buf, len});
        }
    }
    
    auto server() -> Task<void> {
        auto addr = SocketAddr::parse("localhost", 9999).value();
        auto listener = TcpListener::bind(addr).value();
        while (true) {
            auto [stream, addr] = (co_await listener.accept()).value();
            spawn(process(std::move(stream)));
        }
    }
    
    auto main() -> int {
        // zedio::runtime::CurrentThreadBuilder::default_create().block_on(server());
        zedio::runtime::MultiThreadBuilder::default_create().block_on(server());
    }
    Visit original content creator repository https://github.com/8sileus/zedio
  • ResNet-from-Scratch

    ResNet Implementation from Scratch using PyTorch

    This repository contains an implementation of the Residual Network (ResNet) architecture from scratch using PyTorch. ResNet is a deep convolutional neural network that won the ImageNet competition in 2015 and introduced the concept of residual connections to address the problem of vanishing gradients in very deep networks.

    Table of Contents

    Overview

    ResNet is a highly influential architecture that allows the training of very deep neural networks by introducing residual blocks. These blocks use skip connections (also known as identity mappings) to allow gradients to flow through the network more easily, mitigating the vanishing gradient problem that occurs when training deep networks.

    ResNet Architecture

    Model Values

    Architecture

    This implementation supports the following ResNet variants:

    • ResNet-50
    • ResNet-101
    • ResNet-152

    Each variant differs in the number of layers (blocks) used in the network:

    • ResNet-50: 50 layers deep (3, 4, 6, 3 blocks per layer)
    • ResNet-101: 101 layers deep (3, 4, 23, 3 blocks per layer)
    • ResNet-152: 152 layers deep (3, 8, 36, 3 blocks per layer)
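    As a quick sanity check of these depths: each bottleneck block contributes three convolutional layers, and the stem convolution plus the final fully connected layer add two more, so depth = 3 Β· (sum of blocks) + 2 (note that ResNet-152 uses 8 blocks in its second stage):

```python
# Sanity check of the quoted depths: three conv layers per bottleneck block,
# plus the stem conv and the final fully connected layer.
def resnet_depth(blocks_per_stage):
    return 3 * sum(blocks_per_stage) + 2

print(resnet_depth([3, 4, 6, 3]))   # ResNet-50  -> 50
print(resnet_depth([3, 4, 23, 3]))  # ResNet-101 -> 101
print(resnet_depth([3, 8, 36, 3]))  # ResNet-152 -> 152
```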

    The basic building block of ResNet is a residual block, which consists of three convolutional layers with batch normalization and ReLU activation functions. The key feature is the skip connection that bypasses the block, adding the input directly to the output, which helps in training deep networks by preserving gradient flow.

    Residual Block Architecture

    Code Explanation

    Residual Block

    The Block class defines a residual block. Each block contains:

    1. A 1×1 convolution layer for reducing the dimensionality.
    2. A 3×3 convolution layer for processing the feature map.
    3. A 1×1 convolution layer for restoring the dimensionality.
    4. Batch normalization and ReLU activation after each convolution.
    5. An optional identity downsample layer, used when the input and output dimensions do not match.
    class Block(nn.Module):
        ...

    ResNet Class

    The ResNet class defines the full network architecture. It begins with a standard convolutional layer and a max-pooling layer, followed by four main layers, each containing several residual blocks. The network ends with an average pooling layer and a fully connected layer for classification.

    class ResNet(nn.Module):
        ...

    ResNet Variants

    Three functions are provided to create different versions of the ResNet architecture:

    • resnet50()
    • resnet101()
    • resnet152()
    def resnet50(img_channels=3, num_classes=1000):
        return ResNet([3, 4, 6, 3], img_channels, num_classes)
    
    def resnet101(img_channels=3, num_classes=1000):
        return ResNet([3, 4, 23, 3], img_channels, num_classes)
    
    def resnet152(img_channels=3, num_classes=1000):
        return ResNet([3, 8, 36, 3], img_channels, num_classes)

    Usage

    To run the network, simply execute the main() function in the script. This will create an instance of the ResNet-152 model, pass a random tensor through it, and print the output size.

    First you need to install PyTorch:

    pip install torch

    Then you can run the following command:

    python ResNet.py

    This prints the output size, torch.Size([2, 1000]), indicating that the model has processed two input images (batch size = 2) and produced a vector of size 1000 for each image, corresponding to the 1000 classes in the dataset.

    References

    Visit original content creator repository https://github.com/matin-ghorbani/ResNet-from-Scratch
  • containers

    Container Collection & Automation Framework

    A comprehensive collection of Dockerfiles with advanced automation tools for multi-architecture container builds, metrics collection, AI-powered template assistance, and CI/CD workflows.

    GitHub Actions License Python Poetry

    Overview

    This repository provides a comprehensive container development ecosystem featuring:

    • Container Collection: Curated Dockerfiles for Node.js applications and archived Ethereum Parity client
    • Automation Framework: Python scripts for dynamic Dockerfile generation, metrics collection, and image tagging
    • 🧠 AI-Enhanced Template System: Intelligent template recommendations, project analysis, and automated parameter inference
    • Multi-Architecture Support: ARM64 and AMD64 builds with platform-specific optimizations
    • CI/CD Integration: GitHub Actions workflows for building, testing, vulnerability scanning, and publishing
    • Development Environment: Docker-in-Docker devcontainer setup with mise tool management

    ✨ Features

    • 🐳 Dynamic Dockerfile Generation: Multi-architecture support with customizable base images and packages
    • πŸ€– AI-Powered Template Intelligence: Smart recommendations, project analysis, and automated containerization
    • πŸ“Š Container Metrics Collection: Build time, image size, and registry usage analytics
    • 🏷️ Intelligent Image Tagging: Metadata-based tagging with semantic versioning
    • πŸ”’ Security Integration: Automated vulnerability scanning with Trivy
    • ⚑ Modern Tooling: Poetry, mise-en-place, pre-commit hooks, and automated dependency updates
    • πŸš€ CI/CD Workflows: Comprehensive GitHub Actions for container lifecycle management

    🧠 AI-Enhanced Features

    The container system now includes comprehensive AI capabilities:

    Template Intelligence

    # Get AI recommendations for your project
    poetry run containers ai recommend /path/to/your/project
    
    # Deep project analysis
    poetry run containers ai analyze /path/to/your/project --output analysis.md
    
    # Interactive AI assistant
    poetry run containers ai chat

    Smart Automation

    # Automatically infer template parameters
    poetry run containers ai infer apps/nodejs/express /my/project
    
    # AI-powered code review
    poetry run containers ai review /path/to/code
    
    # Generate intelligent documentation
    poetry run containers ai docs --template base/alpine --type readme

    Predictive Maintenance

    # Analyze template health and usage patterns
    poetry run containers ai maintenance --report
    
    # Get optimization suggestions
    poetry run containers ai maintenance --template base/alpine

    πŸ“š Complete AI Guide – Comprehensive documentation for AI features

    πŸ“¦ Available Containers

    Node.js Applications

    node/ – Production-ready Node.js container variants:

    • node/release/: Debian Bookworm-based (~160MB) – Full compatibility for production
    • node/alpine/: Alpine Linux-based (~70MB) – Optimized for size

    Archived Containers

    archived/parity/ – Ethereum Parity client containers:

    • branch/: Development builds from Git branches
    • release/: Stable production releases

    πŸš€ Quick Start

    Prerequisites

    • mise – Polyglot tool version manager
    • Docker – Container platform

    Environment Setup

    # Clone the repository
    git clone https://github.com/marcusrbrown/containers.git
    cd containers
    
    # Install development tools with mise
    mise install
    
    # Install Python dependencies
    poetry install
    
    # Install Node.js dependencies
    pnpm install
    
    # Setup pre-commit hooks
    pre-commit install

    Build a Container

    # Build Node.js Alpine variant
    docker build -t my-node-app:alpine node/alpine/
    
    # Build Node.js release variant
    docker build -t my-node-app:release node/release/

    πŸ› οΈ Automation Scripts

    This repository includes three powerful Python scripts accessible via Poetry:

    Generate Dockerfiles

    Create customized Dockerfiles with multi-architecture support:

    # Generate a basic Dockerfile
    poetry run generate-dockerfile --base-image debian:bullseye-slim --packages "curl wget python3"
    
    # Multi-architecture with environment variables
    poetry run generate-dockerfile \
      --base-image alpine:3.18 \
      --packages "nodejs npm" \
      --env "NODE_ENV=production PORT=3000" \
      --architecture "linux/amd64,linux/arm64"
    
    # Extend existing Dockerfile
    poetry run generate-dockerfile --existing-dockerfile ./node/alpine/Dockerfile --packages "git"

    Collect Container Metrics

    Gather build performance and image analytics:

    # Collect metrics for all containers
    poetry run collect-docker-metrics
    
    # Collect metrics for specific registry
    poetry run collect-docker-metrics --registry github
    
    # View collected metrics
    cat collected_metrics.yaml

    Generate Image Tags

    Create semantic tags based on container metadata:

    # Generate tags for all containers
    poetry run generate-image-tags
    
    # View generated tags
    cat generated_tags.json
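    As an illustration of what metadata-based semantic tagging means, here is a hypothetical sketch (not the repository's actual generate-image-tags logic; the field names are our own assumptions):

```python
# Hypothetical illustration of metadata-based semantic tagging: derive a
# ladder of tags (full version, major.minor, major, optional latest) from an
# image name, a semantic version, and a target architecture.
def semantic_tags(name, version, arch, latest=False):
    major, minor, _patch = version.split(".")
    tags = [
        f"{name}:{version}-{arch}",        # exact version
        f"{name}:{major}.{minor}-{arch}",  # floating minor
        f"{name}:{major}-{arch}",          # floating major
    ]
    if latest:
        tags.append(f"{name}:latest-{arch}")
    return tags

print(semantic_tags("node", "20.11.1", "arm64"))
```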

    πŸ”§ Development Environment

    Local Development

    # Activate development environment
    mise shell
    
    # Run linting and formatting
    pre-commit run --all-files
    
    # Run specific script during development
    poetry run python scripts/generate_dockerfile.py --help

    VS Code DevContainer

    Open the repository in VS Code and use the “Reopen in Container” command for a complete Docker-in-Docker development environment with all tools pre-configured.

    βš™οΈ CI/CD Workflows

    This repository includes comprehensive GitHub Actions workflows:

    πŸ“š Project Structure

    β”œβ”€β”€ node/                     # Node.js container variants
    β”‚   β”œβ”€β”€ alpine/              # Alpine-based builds
    β”‚   └── release/             # Debian-based builds
    β”œβ”€β”€ archived/                # Legacy container definitions
    β”‚   └── parity/             # Ethereum Parity client
    β”œβ”€β”€ scripts/                 # Automation tools
    β”‚   β”œβ”€β”€ generate_dockerfile.py
    β”‚   β”œβ”€β”€ collect_docker_metrics.py
    β”‚   └── generate_image_tags.py
    β”œβ”€β”€ .devcontainer/          # VS Code development environment
    β”œβ”€β”€ .github/workflows/      # CI/CD automation
    └── docs/                   # Documentation
    

    🀝 Contributing

    Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

    Development Workflow

    1. Fork the repository
    2. Create a feature branch (git checkout -b feature/amazing-feature)
    3. Make your changes
    4. Run tests and linting (pre-commit run --all-files)
    5. Commit your changes (git commit -m 'Add amazing feature')
    6. Push to the branch (git push origin feature/amazing-feature)
    7. Open a Pull Request

    πŸ“„ License

    This project is licensed under the MIT License. See the LICENSE.md file for details.

    πŸ”— Links

    Visit original content creator repository https://github.com/marcusrbrown/containers