Server: Apache
System: Linux zacp120.webway.host 4.18.0-553.50.1.lve.el8.x86_64 #1 SMP Thu Apr 17 19:10:24 UTC 2025 x86_64
User: govancoz (1003)
PHP: 8.3.26
Disabled: exec, system, passthru, shell_exec, proc_close, proc_open, dl, popen, show_source, posix_kill, posix_mkfifo, posix_getpwuid, posix_setpgid, posix_setsid, posix_setuid, posix_setgid, posix_seteuid, posix_setegid, posix_uname
File: /usr/local/lib/python3.7/site-packages/pip/_vendor/pygments/__pycache__/lexer.cpython-37.pyc
# The file body is compiled CPython 3.7 bytecode of pygments/lexer.py; the
# marshalled code objects are not reproduced here. The docstrings and names
# embedded in the binary are reconstructed below as a source skeleton;
# method bodies that cannot be read back from the bytecode are elided
# with ``...`` and a summary comment.

"""
    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import sys
import time

from pip._vendor.pygments.filter import apply_filters, Filter
from pip._vendor.pygments.filters import get_filter_by_name
from pip._vendor.pygments.token import Error, Text, Other, Whitespace, _TokenType
from pip._vendor.pygments.util import (get_bool_opt, get_int_opt, get_list_opt,
                                       make_analysator, Future, guess_decode)
from pip._vendor.pygments.regexopt import regex_opt

__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
           'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this',
           'default', 'words']

line_re = re.compile('.*?\n')

# BOM-to-codec table consulted when encoding='guess'.
_encoding_map = [(b'\xef\xbb\xbf', 'utf-8'),
                 (b'\xff\xfe\x00\x00', 'utf-32'),
                 (b'\x00\x00\xfe\xff', 'utf-32be'),
                 (b'\xff\xfe', 'utf-16'),
                 (b'\xfe\xff', 'utf-16be')]

_default_analyse = staticmethod(lambda x: 0.0)


class LexerMeta(type):
    """
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.
    """

    def __new__(mcs, name, bases, d):
        if 'analyse_text' in d:
            d['analyse_text'] = make_analysator(d['analyse_text'])
        return type.__new__(mcs, name, bases, d)
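As a standalone sketch of what this metaclass achieves, here is a minimal, self-contained version of the wrapping that `make_analysator` performs; the names `AnalyseMeta` and `IniLike` are illustrative only, not Pygments internals.

```python
# Minimal sketch of LexerMeta's behavior: a plain analyse_text function
# defined in a class body is wrapped into a staticmethod whose result is
# coerced to float, with falsy return values becoming 0.0.
class AnalyseMeta(type):
    def __new__(mcs, name, bases, d):
        if 'analyse_text' in d:
            func = d['analyse_text']

            def wrapper(text, _f=func):
                rv = _f(text)
                return float(rv) if rv else 0.0

            d['analyse_text'] = staticmethod(wrapper)
        return type.__new__(mcs, name, bases, d)


class IniLike(metaclass=AnalyseMeta):
    def analyse_text(text):
        return text.startswith('[')  # bool, coerced to float by the metaclass


print(IniLike.analyse_text('[section]'))  # 1.0
print(IniLike.analyse_text('plain'))      # 0.0
```

Because the wrapper is installed as a `staticmethod`, callers can invoke `analyse_text` on the class without an instance, which is exactly how `guess_lexer` scores candidate lexers.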
class Lexer(metaclass=LexerMeta):
    """
    Lexer for a specific language.

    See also :doc:`lexerdevelopment`, a high-level guide to writing
    lexers.

    Lexer classes have attributes used for choosing the most appropriate
    lexer based on various criteria.

    .. autoattribute:: name
       :no-value:
    .. autoattribute:: aliases
       :no-value:
    .. autoattribute:: filenames
       :no-value:
    .. autoattribute:: alias_filenames
    .. autoattribute:: mimetypes
       :no-value:
    .. autoattribute:: priority

    Lexers included in Pygments should have an additional attribute:

    .. autoattribute:: url
       :no-value:

    You can pass options to the constructor. The basic options recognized
    by all lexers and processed by the base `Lexer` class are:

    ``stripnl``
        Strip leading and trailing newlines from the input (default: True).
    ``stripall``
        Strip all leading and trailing whitespace from the input
        (default: False).
    ``ensurenl``
        Make sure that the input ends with a newline (default: True).  This
        is required for some lexers that consume input linewise.

        .. versionadded:: 1.3

    ``tabsize``
        If given and greater than 0, expand tabs in the input (default: 0).
    ``encoding``
        If given, must be an encoding name. This encoding will be used to
        convert the input string to Unicode, if it is not already a Unicode
        string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
        Latin1 detection.  Can also be ``'chardet'`` to use the chardet
        library, if it is installed).
    ``inencoding``
        Overrides the ``encoding`` if given.
    """

    name = None
    aliases = []
    filenames = []
    alias_filenames = []
    mimetypes = []
    priority = 0
    url = None

    def __init__(self, **options):
        """
        This constructor takes arbitrary options as keyword arguments.
        Every subclass must first process its own options and then call
        the `Lexer` constructor, since it processes the basic
        options like `stripnl`.

        An example looks like this:

        .. sourcecode:: python

           def __init__(self, **options):
               self.compress = options.get('compress', '')
               Lexer.__init__(self, **options)

        As these options must all be specifiable as strings (due to the
        command line usage), there are various utility functions
        available to help with that, see `Utilities`_.
        """
        self.options = options
        self.stripnl = get_bool_opt(options, 'stripnl', True)
        self.stripall = get_bool_opt(options, 'stripall', False)
        self.ensurenl = get_bool_opt(options, 'ensurenl', True)
        self.tabsize = get_int_opt(options, 'tabsize', 0)
        self.encoding = options.get('encoding', 'guess')
        self.encoding = options.get('inencoding') or self.encoding
        self.filters = []
        for filter_ in get_list_opt(options, 'filters', ()):
            self.add_filter(filter_)

    def __repr__(self):
        if self.options:
            return '<pygments.lexers.%s with %r>' % (self.__class__.__name__,
                                                     self.options)
        else:
            return '<pygments.lexers.%s>' % self.__class__.__name__

    def add_filter(self, filter_, **options):
        """
        Add a new stream filter to this lexer.
        """
        if not isinstance(filter_, Filter):
            filter_ = get_filter_by_name(filter_, **options)
        self.filters.append(filter_)

    def analyse_text(text):
        """
        A static method which is called for lexer guessing.

        It should analyse the text and return a float in the range
        from ``0.0`` to ``1.0``.  If it returns ``0.0``, the lexer
        will not be selected as the most probable one, if it returns
        ``1.0``, it will be selected immediately.  This is used by
        `guess_lexer`.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`
        it's the same as if the return value was ``0.0``.
        """

    def get_tokens(self, text, unfiltered=False):
        """
        This method is the basic interface of a lexer. It is called by
        the `highlight()` function. It must process the text and return an
        iterable of ``(tokentype, value)`` pairs from `text`.

        Normally, you don't need to override this method. The default
        implementation processes the options recognized by all lexers
        (`stripnl`, `stripall` and so on), and then yields all tokens
        from `get_tokens_unprocessed()`, with the ``index`` dropped.

        If `unfiltered` is set to `True`, the filtering mechanism is
        bypassed even if filters are defined.
        """
        ...  # body elided: decodes `text` if it is bytes (via guess_decode,
             # the BOM table in _encoding_map, or the chardet library,
             # raising "To enable chardet encoding guessing, please install
             # the chardet library from http://chardet.feedparser.org/" if
             # chardet is missing), strips a leading U+FEFF, normalizes
             # '\r\n'/'\r' to '\n', applies stripall/stripnl/tabsize/
             # ensurenl, then yields (tokentype, value) pairs from
             # get_tokens_unprocessed(), run through apply_filters()
             # unless `unfiltered` is set.

    def get_tokens_unprocessed(self, text):
        """
        This method should process the text and return an iterable of
        ``(index, tokentype, value)`` tuples where ``index`` is the starting
        position of the token within the input text.

        It must be overridden by subclasses. It is recommended to
        implement it as a generator to maximize effectiveness.
        """
        raise NotImplementedError
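The decode-and-normalize pipeline that `get_tokens` applies before lexing can be sketched as a standalone pair of helpers. The function names `decode_input` and `preprocess` are illustrative (the real code does this inline inside `get_tokens`), and the option names mirror the `Lexer` options documented above.

```python
import codecs

# BOM lookup mirroring _encoding_map: the first matching byte-order mark
# decides the decoder. Order matters: the UTF-32 LE BOM begins with the
# UTF-16 LE BOM bytes, so UTF-32 must be checked first.
_BOMS = [(codecs.BOM_UTF8, 'utf-8'),
         (codecs.BOM_UTF32_LE, 'utf-32'),
         (codecs.BOM_UTF32_BE, 'utf-32be'),
         (codecs.BOM_UTF16_LE, 'utf-16'),
         (codecs.BOM_UTF16_BE, 'utf-16be')]


def decode_input(data):
    """Decode bytes, honoring a leading BOM; fall back to UTF-8."""
    if isinstance(data, str):
        return data
    for bom, enc in _BOMS:
        if data.startswith(bom):
            return data[len(bom):].decode(enc, 'replace')
    return data.decode('utf-8', 'replace')


def preprocess(text, stripnl=True, stripall=False, tabsize=0, ensurenl=True):
    """Apply the stripnl/stripall/tabsize/ensurenl options in order."""
    text = text.replace('\r\n', '\n').replace('\r', '\n')
    if stripall:
        text = text.strip()
    elif stripnl:
        text = text.strip('\n')
    if tabsize > 0:
        text = text.expandtabs(tabsize)
    if ensurenl and not text.endswith('\n'):
        text += '\n'
    return text


print(repr(preprocess(decode_input(b'\n\thello\r\n'), tabsize=4)))
```

Running it shows the combined effect: the leading newline is stripped, the tab expands to four spaces, and a trailing newline is guaranteed.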
class DelegatingLexer(Lexer):
    """
    This lexer takes two lexers as arguments. A root lexer and
    a language lexer. First everything is scanned using the language
    lexer, afterwards all ``Other`` tokens are lexed using the root
    lexer.

    The lexers from the ``template`` lexer package use this base lexer.
    """

    def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options):
        self.root_lexer = _root_lexer(**options)
        self.language_lexer = _language_lexer(**options)
        self.needle = _needle
        Lexer.__init__(self, **options)

    def get_tokens_unprocessed(self, text):
        buffered = ''
        insertions = []
        lng_buffer = []
        for i, t, v in self.language_lexer.get_tokens_unprocessed(text):
            if t is self.needle:
                if lng_buffer:
                    insertions.append((len(buffered), lng_buffer))
                    lng_buffer = []
                buffered += v
            else:
                lng_buffer.append((i, t, v))
        if lng_buffer:
            insertions.append((len(buffered), lng_buffer))
        return do_insertions(insertions,
                             self.root_lexer.get_tokens_unprocessed(buffered))
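The buffering idea behind this class can be shown in a deliberately simplified, standalone form. Token types are plain strings here, and the bookkeeping that the real class does (remembering insertion points and splicing with `do_insertions`) is omitted; only the split between needle tokens and pass-through tokens is demonstrated.

```python
# Toy version of the DelegatingLexer split: tokens tagged 'Other' by the
# language lexer are concatenated into one buffer and re-lexed by a root
# lexer; everything else passes through unchanged.
def delegate(language_tokens, root_lex):
    buffered = ''
    passed = []
    for ttype, value in language_tokens:
        if ttype == 'Other':
            buffered += value
        else:
            passed.append((ttype, value))
    return passed, list(root_lex(buffered))


lng = [('Tag', '<b>'), ('Other', 'x = 1'), ('Tag', '</b>')]
kept, relexed = delegate(lng, lambda s: [('Code', s)])
print(kept, relexed)
```

This is why template lexers build on `DelegatingLexer`: the template language tags everything it does not understand as `Other`, and the embedded language's lexer gets one contiguous string to work on.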
sc@seZdZdZdS)rzI
    Indicates that a state should include rules from another state.
    N)r*r+r,r-rrrr r4sc@seZdZdZdd�ZdS)�_inheritzC
    Indicates the a state should inherit from its superclass.
    cCsdS)Nrr)r8rrr r<?sz_inherit.__repr__N)r*r+r,r-r<rrrr rh;srhc@s eZdZdZdd�Zdd�ZdS)�combinedz:
    Indicates a state combined from multiple states.
    cGst�||�S)N)�tupler&)�cls�argsrrr r&Jszcombined.__new__cGsdS)Nr)r8rlrrr r:Mszcombined.__init__N)r*r+r,r-r&r:rrrr riEsric@sFeZdZdZdd�Zddd�Zddd�Zdd	d
�Zdd�Zd
d�Z	dS)�_PseudoMatchz:
    A pseudo match object constructed from a string.
    cCs||_||_dS)N)�_text�_start)r8�startr?rrr r:Wsz_PseudoMatch.__init__NcCs|jS)N)ro)r8�argrrr rp[sz_PseudoMatch.startcCs|jt|j�S)N)rorNrn)r8rqrrr �end^sz_PseudoMatch.endcCs|rtd��|jS)Nz
No such group)�
IndexErrorrn)r8rqrrr �groupasz_PseudoMatch.groupcCs|jfS)N)rn)r8rrr �groupsfsz_PseudoMatch.groupscCsiS)Nr)r8rrr �	groupdictisz_PseudoMatch.groupdict)N)N)N)
r*r+r,r-r:rprrrtrurvrrrr rmRs


def bygroups(*args):
    """
    Callback that yields multiple actions for each group in the match.
    """
    def callback(lexer, match, ctx=None):
        for i, action in enumerate(args):
            if action is None:
                continue
            elif type(action) is _TokenType:
                data = match.group(i + 1)
                if data:
                    yield match.start(i + 1), action, data
            else:
                data = match.group(i + 1)
                if data is not None:
                    if ctx:
                        ctx.pos = match.start(i + 1)
                    for item in action(lexer,
                                       _PseudoMatch(match.start(i + 1), data),
                                       ctx):
                        if item:
                            yield item
        if ctx:
            ctx.pos = match.end()
    return callback


class _This:
    """
    Special singleton used for indicating the caller class.
    Used by ``using``.
    """

this = _This()
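The `bygroups` idea is easy to demonstrate standalone: one regex with capture groups, where group *i* is emitted with the *i*-th token type. Token "types" below are plain strings for illustration; Pygments uses its token hierarchy, and this sketch omits the nested-callback and context handling of the real helper.

```python
import re

# Minimal sketch of bygroups: each captured group is yielded as an
# (offset, tokentype, text) tuple, using the group's start position.
def bygroups_sketch(*types):
    def callback(match):
        for i, ttype in enumerate(types):
            text = match.group(i + 1)
            if text:
                yield match.start(i + 1), ttype, text
    return callback


rule = re.compile(r'(\w+)(\s*)(=)')
emit = bygroups_sketch('Name', 'Whitespace', 'Operator')
print(list(emit(rule.match('answer = 42'))))
```

The benefit is that one rule `(r'(\w+)(\s*)(=)', bygroups(Name, Whitespace, Operator))` replaces three separate rules while keeping token offsets exact.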
def using(_other, **kwargs):
    """
    Callback that processes the match with a different lexer.

    The keyword arguments are forwarded to the lexer, except `state` which
    is handled separately.

    `state` specifies the state that the new lexer will start in, and can
    be an enumerable such as ('root', 'inline', 'string') or a simple
    string which is assumed to be on top of the root state.

    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
    """
    gt_kwargs = {}
    if 'state' in kwargs:
        s = kwargs.pop('state')
        if isinstance(s, (list, tuple)):
            gt_kwargs['stack'] = s
        else:
            gt_kwargs['stack'] = ('root', s)

    if _other is this:
        def callback(lexer, match, ctx=None):
            # if keyword arguments are given, the callback
            # has to create a new lexer instance
            if kwargs:
                kwargs.update(lexer.options)
                lx = lexer.__class__(**kwargs)
            else:
                lx = lexer
            s = match.start()
            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
                yield i + s, t, v
            if ctx:
                ctx.pos = match.end()
    else:
        def callback(lexer, match, ctx=None):
            kwargs.update(lexer.options)
            lx = _other(**kwargs)
            s = match.start()
            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
                yield i + s, t, v
            if ctx:
                ctx.pos = match.end()
    return callback
class default:
    """
    Indicates a state or state action (e.g. #pop) to apply.
    For example default('#pop') is equivalent to ('', Token, '#pop')
    Note that state tuples may be used as well.

    .. versionadded:: 2.0
    """
    def __init__(self, state):
        self.state = state


class words(Future):
    """
    Indicates a list of literal words that is transformed into an optimized
    regex that matches any of the words.

    .. versionadded:: 2.0
    """
    def __init__(self, words, prefix='', suffix=''):
        self.words = words
        self.prefix = prefix
        self.suffix = suffix

    def get(self):
        return regex_opt(self.words, prefix=self.prefix, suffix=self.suffix)
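What `words` expands to can be sketched without Pygments: the word list becomes a single alternation wrapped in the given prefix/suffix. The real `regex_opt` builds a smarter trie-style pattern; a plain alternation (longest words first, so shorter alternatives cannot shadow longer ones) shows the contract.

```python
import re

# Illustrative stand-in for words/regex_opt: escape each word and join
# into one non-capturing alternation, longest alternatives first.
def words_pattern(word_list, prefix='', suffix=''):
    alts = '|'.join(re.escape(w)
                    for w in sorted(word_list, key=len, reverse=True))
    return prefix + '(?:' + alts + ')' + suffix


kw = re.compile(words_pattern(['if', 'elif', 'else'],
                              prefix=r'\b', suffix=r'\b'))
print(bool(kw.search('elif x:')))    # True
print(bool(kw.search('elsewhere')))  # False
```

The `\b` prefix/suffix is the typical usage in lexer rules: it keeps keyword lists from matching inside longer identifiers.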
class RegexLexerMeta(LexerMeta):
    """
    Metaclass for RegexLexer, creates the self._tokens attribute from
    self.tokens on the first instantiation.
    """

    def _process_regex(cls, regex, rflags, state):
        """Preprocess the regular expression component of a token definition."""
        if isinstance(regex, Future):
            regex = regex.get()
        return re.compile(regex, rflags).match

    def _process_token(cls, token):
        """Preprocess the token component of a token definition."""
        assert type(token) is _TokenType or callable(token), \
            'token type must be simple type or callable, not %r' % (token,)
        return token

    def _process_new_state(cls, new_state, unprocessed, processed):
        """Preprocess the state transition action of a token definition."""
        ...  # body elided: resolves '#pop', '#pop:n', '#push', plain state
             # names, state tuples, and ``combined`` states (which are
             # flattened into a synthetic '_tmp_%d' state), asserting on
             # circular state references and unknown state definitions.

    def _process_state(cls, unprocessed, processed, state):
        """Preprocess a single state definition."""
        ...  # body elided: validates the state name, expands ``include``,
             # ``inherit`` and ``default`` entries, compiles each rule's
             # regex (re-raising failures as "uncompilable regex %r in
             # state %r of %r"), and caches the processed rule list.

    def process_tokendef(cls, name, tokendefs=None):
        """Preprocess a dictionary of token definitions."""
        processed = cls._all_tokens[name] = {}
        tokendefs = tokendefs or cls.tokens[name]
        for state in list(tokendefs):
            cls._process_state(tokendefs, processed, state)
        return processed

    def get_tokendefs(cls):
        """
        Merge tokens from superclasses in MRO order, returning a single tokendef
        dictionary.

        Any state that is not defined by a subclass will be inherited
        automatically.  States that *are* defined by subclasses will, by
        default, override that state in the superclass.  If a subclass wishes to
        inherit definitions from a superclass, it can use the special value
        "inherit", which will cause the superclass' state definition to be
        included at that point in the state.
        """
        ...  # body elided: walks cls.__mro__, merging each class's
             # ``tokens`` dict and splicing superclass rules in at
             # ``inherit`` markers.

    def __call__(cls, *args, **kwds):
        """Instantiate cls after preprocessing its token definitions."""
        if '_tokens' not in cls.__dict__:
            cls._all_tokens = {}
            cls._tmpname = 0
            if hasattr(cls, 'token_variants') and cls.token_variants:
                # don't process yet
                pass
            else:
                cls._tokens = cls.process_tokendef('', cls.get_tokendefs())
        return type.__call__(cls, *args, **kwds)
class RegexLexer(Lexer, metaclass=RegexLexerMeta):
    """
    Base for simple stateful regular expression-based lexers.
    Simplifies the lexing process so that you need only
    provide a list of states and regular expressions.
    """

    flags = re.MULTILINE
    tokens = {}

    def get_tokens_unprocessed(self, text, stack=('root',)):
        """
        Split ``text`` into (tokentype, text) pairs.

        ``stack`` is the initial stack (default: ``['root']``)
        """
        pos = 0
        tokendefs = self._tokens
        statestack = list(stack)
        statetokens = tokendefs[statestack[-1]]
        while 1:
            for rexmatch, action, new_state in statetokens:
                m = rexmatch(text, pos)
                if m:
                    if action is not None:
                        if type(action) is _TokenType:
                            yield pos, action, m.group()
                        else:
                            yield from action(self, m)
                    pos = m.end()
                    if new_state is not None:
                        # state transition
                        if isinstance(new_state, tuple):
                            for state in new_state:
                                if state == '#pop':
                                    if len(statestack) > 1:
                                        statestack.pop()
                                elif state == '#push':
                                    statestack.append(statestack[-1])
                                else:
                                    statestack.append(state)
                        elif isinstance(new_state, int):
                            # pop, but keep at least the root state
                            if abs(new_state) >= len(statestack):
                                del statestack[1:]
                            else:
                                del statestack[new_state:]
                        elif new_state == '#push':
                            statestack.append(statestack[-1])
                        else:
                            assert False, 'wrong state def: %r' % new_state
                        statetokens = tokendefs[statestack[-1]]
                    break
            else:
                # We are here only if all state tokens did not match.
                try:
                    if text[pos] == '\n':
                        # at EOL, reset state to "root"
                        statestack = ['root']
                        statetokens = tokendefs['root']
                        yield pos, Whitespace, '\n'
                        pos += 1
                        continue
                    yield pos, Error, text[pos]
                    pos += 1
                except IndexError:
                    break
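The core loop above can be reduced to a toy, self-contained tokenizer that mirrors the algorithm only: a `tokens` dict maps state names to `(regex, tokentype, new_state)` rules, a stack of states drives matching, and unmatched characters fall back to an Error token. Rule compilation, callbacks, `include`/`inherit`, and the EOL reset of the real class are all omitted, and token types are plain strings.

```python
import re

# Toy stateful tokenizer in the spirit of RegexLexer.get_tokens_unprocessed.
def tokenize(text, tokens):
    stack = ['root']
    pos = 0
    out = []
    while pos < len(text):
        for pattern, ttype, new_state in tokens[stack[-1]]:
            m = re.compile(pattern).match(text, pos)
            if m:
                out.append((pos, ttype, m.group()))
                pos = m.end()
                if new_state == '#pop':
                    stack.pop()
                elif new_state is not None:
                    stack.append(new_state)
                break
        else:
            # no rule of the current state matched: emit an error token
            out.append((pos, 'Error', text[pos]))
            pos += 1
    return out


rules = {
    'root': [(r'"', 'String', 'string'),
             (r'\w+', 'Name', None),
             (r'\s+', 'Whitespace', None)],
    'string': [(r'[^"]+', 'String', None),
               (r'"', 'String', '#pop')],
}
print(tokenize('hi "there"', rules))
```

The opening quote pushes the `string` state, so `there` is matched by the string rules rather than the root rules, and the closing quote pops back to `root`; that push/pop discipline is the whole point of the state stack.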
class LexerContext:
    """
    A helper object that holds lexer position data.
    """

    def __init__(self, text, pos, stack=None, end=None):
        self.text = text
        self.pos = pos
        self.end = end or len(text)
        self.stack = stack or ['root']

    def __repr__(self):
        return 'LexerContext(%r, %r, %r)' % (self.text, self.pos, self.stack)


class ExtendedRegexLexer(RegexLexer):
    """
    A RegexLexer that uses a context object to store its state.
    """

    def get_tokens_unprocessed(self, text=None, context=None):
        """
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
        """
        ...  # body elided: the same matching loop as RegexLexer, but the
             # position and state stack live on a LexerContext object, and
             # rule callbacks may receive and mutate that context.
def do_insertions(insertions, tokens):
    """
    Helper for lexers which must combine the results of several
    sublexers.

    ``insertions`` is a list of ``(index, itokens)`` pairs.
    Each ``itokens`` iterable should be inserted at position
    ``index`` into the token stream given by the ``tokens``
    argument.

    The result is a combined token stream.

    TODO: clean up the code here.
    """
    insertions = iter(insertions)
    try:
        index, itokens = next(insertions)
    except StopIteration:
        # no insertions
        yield from tokens
        return

    realpos = None
    insleft = True

    # iterate over the token stream where we want to insert
    # the tokens from the insertion list.
    for i, t, v in tokens:
        # first iteration. store the position of first item
        if realpos is None:
            realpos = i
        oldi = 0
        while insleft and i + len(v) >= index:
            tmpval = v[oldi:index - i]
            if tmpval:
                yield realpos, t, tmpval
                realpos += len(tmpval)
            for it_index, it_token, it_value in itokens:
                yield realpos, it_token, it_value
                realpos += len(it_value)
            oldi = index - i
            try:
                index, itokens = next(insertions)
            except StopIteration:
                insleft = False
                break
        if oldi < len(v):
            yield realpos, t, v[oldi:]
            realpos += len(v) - oldi

    # leftover tokens
    while insleft:
        # no normal tokens, set realpos to zero
        realpos = realpos or 0
        for p, t, v in itokens:
            yield realpos, t, v
            realpos += len(v)
        try:
            index, itokens = next(insertions)
        except StopIteration:
            insleft = False
            break
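The contract of `do_insertions` can be illustrated with a simplified standalone version. `splice` below is a hypothetical helper, not the Pygments function: it handles a plain list rather than lazy iterators and skips the leftover-token handling, but it shows the two essential behaviors — sub-streams are spliced in at character offsets, and output indices are rebased so they stay monotonically increasing.

```python
# Simplified do_insertions: ``insertions`` is a list of (index, subtokens)
# pairs, where index is a character offset into the concatenated main stream.
def splice(tokens, insertions):
    ins = iter(insertions)
    cut, sub = next(ins, (None, None))
    realpos = 0
    out = []
    for idx, ttype, value in tokens:
        start = 0
        # split the current token wherever an insertion point falls inside it
        while cut is not None and idx + len(value) > cut >= idx + start:
            head = value[start:cut - idx]
            if head:
                out.append((realpos, ttype, head))
                realpos += len(head)
            for _, it, iv in sub:
                out.append((realpos, it, iv))
                realpos += len(iv)
            start = cut - idx
            cut, sub = next(ins, (None, None))
        tail = value[start:]
        if tail:
            out.append((realpos, ttype, tail))
            realpos += len(tail)
    return out


print(splice([(0, 'Text', 'ab'), (2, 'Text', 'cd')],
             [(2, [(0, 'Ins', 'X')])]))
```

Note how the token that follows the insertion keeps its text but gets a shifted index: that rebasing is what lets `DelegatingLexer` merge two independently lexed streams into one consistent stream.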
class ProfilingRegexLexerMeta(RegexLexerMeta):
    """Metaclass for ProfilingRegexLexer, collects regex timing info."""

    def _process_regex(cls, regex, rflags, state):
        if isinstance(regex, words):
            rex = regex_opt(regex.words, prefix=regex.prefix,
                            suffix=regex.suffix)
        else:
            rex = regex
        compiled = re.compile(rex, rflags)

        def match_func(text, pos, endpos=sys.maxsize):
            info = cls._prof_data[-1].setdefault((state, rex), [0, 0.0])
            t0 = time.time()
            res = compiled.match(text, pos, endpos)
            t1 = time.time()
            info[0] += 1
            info[1] += t1 - t0
            return res
        return match_func


class ProfilingRegexLexer(RegexLexer, metaclass=ProfilingRegexLexerMeta):
    """Drop-in replacement for RegexLexer that does profiling of its regexes."""

    _prof_data = []
    _prof_sort_index = 4  # defaults to time per call

    def get_tokens_unprocessed(self, text, stack=('root',)):
        ...  # body elided: pushes a dict onto _prof_data, delegates to
             # RegexLexer.get_tokens_unprocessed, then prints a table headed
             # "Profiling result for %s lexing %d chars in %.3f ms", sorted
             # by _prof_sort_index.
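The profiling wrapper above boils down to a small closure, which can be shown standalone. `profiled` is an illustrative name; like the real `match_func`, it records a call count and accumulated wall time per `(state, regex)` key around each match attempt.

```python
import re
import time

# Sketch of the ProfilingRegexLexerMeta idea: wrap a compiled pattern so
# every match attempt updates [ncalls, total_seconds] in a shared dict.
def profiled(pattern, prof_data, state):
    rx = re.compile(pattern)

    def match(text, pos):
        info = prof_data.setdefault((state, pattern), [0, 0.0])
        t0 = time.time()
        m = rx.match(text, pos)
        info[0] += 1
        info[1] += time.time() - t0
        return m
    return match


prof = {}
matcher = profiled(r'\w+', prof, 'root')
m = matcher('hello world', 0)
print(m.group(), prof[('root', r'\w+')][0])  # hello 1
```

Because the wrapper has the same call shape as a compiled pattern's `match`, it can be dropped into the rule table in place of the plain matcher, which is exactly how the metaclass hooks in: by overriding `_process_regex`.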