[{"text": "This class is a wrapper over the Dataset component and can be used to\ncreate Examples for Blocks / Interfaces. Populates the Dataset component with\nexamples and assigns an event listener so that clicking on an example populates\nthe input/output components. Optionally handles example caching for fast\ninference. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/examples", "source_page_title": "Gradio - Examples Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n examples: list[Any] | list[list[Any]] | str\n\nexample inputs that can be clicked to populate specific components. Should be a\nnested list, in which the outer list consists of samples and each inner list\nconsists of an input corresponding to each input component. A string path to a\ndirectory of examples can also be provided, but it must be within the\ndirectory containing the Python file running the Gradio app. If there are multiple\ninput components and a directory is provided, a log.csv file must be present\nin the directory to link corresponding inputs.\n\n\n \n \n inputs: Component | list[Component]\n\nthe component or list of components corresponding to the examples\n\n\n \n \n outputs: Component | list[Component] | None\n\ndefault `= None`\n\noptionally, provide the component or list of components corresponding to the\noutput of the examples. Required if `cache_examples` is not False.\n\n\n \n \n fn: Callable | None\n\ndefault `= None`\n\noptionally, provide the function to run to generate the outputs corresponding\nto the examples. Required if `cache_examples` is not False. Also required if\n`run_on_click` is True.\n\n\n \n \n cache_examples: bool | None\n\ndefault `= None`\n\nIf True, caches examples on the server for fast runtime when examples are\nclicked. If \"lazy\", then examples are cached (for all users of the app) after their first\nuse (by any user of the app). 
If None, will use the GRADIO_CACHE_EXAMPLES\nenvironment variable, which should be either \"true\" or \"false\". In Hugging Face\nSpaces, this parameter is True (as long as `fn` and `outputs` are also\nprovided). The default option otherwise is False. Note that examples are\ncached separately from Gradio's queue() so certain features, such as\ngr.Progress(), gr.Info(), gr.Warning(), etc. will not be displayed in Gradio's\nUI for cached examples.\n\n\n \n \n cache_mode: Literal['eager', 'lazy'] | None\n\ndefault `= None`\n\nif \"lazy\", examples are cached after their first use. If \"eager\", all examples\nare ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/examples", "source_page_title": "Gradio - Examples Docs"}, {"text": "d in Gradio's\nUI for cached examples.\n\n\n \n \n cache_mode: Literal['eager', 'lazy'] | None\n\ndefault `= None`\n\nif \"lazy\", examples are cached after their first use. If \"eager\", all examples\nare cached at app launch. If None, will use the GRADIO_CACHE_MODE environment\nvariable if defined, or default to \"eager\".\n\n\n \n \n examples_per_page: int\n\ndefault `= 10`\n\nhow many examples to show per page.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= \"Examples\"`\n\nthe label to use for the examples component (by default, \"Examples\")\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nan optional string that is assigned as the id of this component in the HTML\nDOM.\n\n\n \n \n run_on_click: bool\n\ndefault `= False`\n\nif cache_examples is False, clicking on an example does not, by default, run\nthe function. Set this to True to run the function when an\nexample is clicked. Has no effect if cache_examples is True.\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nif True, preprocesses the example input before running the prediction function\nand caching the output. 
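The resolution order documented for `cache_examples` (explicit argument, then the GRADIO_CACHE_EXAMPLES environment variable, then a default of False) can be sketched in plain Python. This is an illustrative re-implementation of the documented rules, not Gradio's actual code, and the helper name `resolve_cache_examples` is hypothetical; the Hugging Face Spaces special case is deliberately omitted.

```python
import os

def resolve_cache_examples(cache_examples=None, env=None):
    # Hypothetical helper mirroring the documented resolution order
    # (not Gradio's implementation):
    # 1. an explicit True / False / "lazy" argument wins;
    # 2. otherwise the GRADIO_CACHE_EXAMPLES environment variable
    #    ("true" or "false") is consulted;
    # 3. otherwise the default is False.
    # (On Hugging Face Spaces the documented default is True when `fn`
    # and `outputs` are provided; that case is omitted here.)
    env = os.environ if env is None else env
    if cache_examples is not None:
        return cache_examples
    var = env.get("GRADIO_CACHE_EXAMPLES")
    if var is not None:
        return var.strip().lower() == "true"
    return False
```

Passing an explicit `env` dict here simply makes the sketch testable without touching real environment variables.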
Only applies if `cache_examples` is not False.\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nif True, postprocesses the example output after running the prediction\nfunction and before caching. Only applies if `cache_examples` is not False.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"undocumented\"`\n\nControls the visibility of the event associated with clicking on the examples.\nCan be \"public\" (shown in API docs and callable), \"private\" (hidden from API\ndocs and not callable), or \"undocumented\" (hidden from API docs but callable).\n\n\n \n \n api_name: str | None\n\ndefault `= \"load_example\"`\n\nDefines how the event associated with clicking on the examples appears in the\nAPI docs. Can be a string or None. If set to a string, the endpoint will be\nexposed in the API docs with the given name. If None, an auto-generated name\nwill be u", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/examples", "source_page_title": "Gradio - Examples Docs"}, {"text": "icking on the examples appears in the\nAPI docs. Can be a string or None. If set to a string, the endpoint will be\nexposed in the API docs with the given name. If None, an auto-generated name\nwill be used.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the event associated with clicking on the examples in the API\ndocs. Can be a string, None, or False. If set to a string, the endpoint will\nbe exposed in the API docs with the given description. If None, the function's\ndocstring will be used as the API endpoint description. If False, then no\ndescription will be displayed in the API docs.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. 
Used only if\ncache_examples is not False.\n\n\n \n \n example_labels: list[str] | None\n\ndefault `= None`\n\nA list of labels for each example. If provided, the length of this list should\nbe the same as the number of examples, and these labels will be used in the UI\ninstead of rendering the example values.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, the examples component will be hidden in the UI.\n\n\n \n \n preload: int | Literal[False]\n\ndefault `= 0`\n\nIf an integer is provided (and examples are being cached eagerly and none of\nthe input components have a developer-provided `value`), the example at that\nindex in the examples list will be preloaded when the Gradio app is first\nloaded. If False, no example will be preloaded.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/examples", "source_page_title": "Gradio - Examples Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n dataset: gradio.Dataset\n\nThe `gr.Dataset` component corresponding to this Examples object.\n\n\n \n \n load_input_event: gradio.events.Dependency\n\nThe Gradio event that populates the input values when the examples are\nclicked. You can attach a `.then()` or a `.success()` to this event to trigger\nsubsequent events to fire after this event.\n\n\n \n \n cache_event: gradio.events.Dependency | None\n\nThe Gradio event that populates the cached output values when the examples are\nclicked. You can attach a `.then()` or a `.success()` to this event to trigger\nsubsequent events to fire after this event. This event is `None` if\n`cache_examples` if False, and is the same as `load_input_event` if\n`cache_examples` is `'lazy'`.\n\n", "heading1": "Attributes", "source_page_url": "https://gradio.app/docs/gradio/examples", "source_page_title": "Gradio - Examples Docs"}, {"text": "**Updating Examples**\n\nIn this demo, we show how to update the examples by updating the samples of\nthe underlying dataset. 
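As a concrete illustration of the shape rules stated for the `examples` and `example_labels` parameters above (a nested list with one inner list per sample, and one label per sample), here is a hypothetical validation helper; it is not part of Gradio's API.

```python
def normalize_examples(examples, num_inputs, example_labels=None):
    # Hypothetical helper illustrating the documented shape rules;
    # not part of Gradio.
    # A flat list is treated as one value per sample (single input component).
    if examples and not isinstance(examples[0], list):
        examples = [[sample] for sample in examples]
    for sample in examples:
        if len(sample) != num_inputs:
            raise ValueError(
                f"each sample must supply {num_inputs} input value(s), got {len(sample)}"
            )
    # example_labels, when given, must provide one label per sample.
    if example_labels is not None and len(example_labels) != len(examples):
        raise ValueError("example_labels must be the same length as examples")
    return examples
```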
Note that this only works if `cache_examples=False`, as\nupdating the underlying dataset does not update the cache.\n\n \n \n import gradio as gr\n \n def update_examples(country):\n     if country == \"USA\":\n         return gr.Dataset(samples=[[\"Chicago\"], [\"Little Rock\"], [\"San Francisco\"]])\n     else:\n         return gr.Dataset(samples=[[\"Islamabad\"], [\"Karachi\"], [\"Lahore\"]])\n \n with gr.Blocks() as demo:\n     dropdown = gr.Dropdown(label=\"Country\", choices=[\"USA\", \"Pakistan\"], value=\"USA\")\n     textbox = gr.Textbox()\n     examples = gr.Examples([[\"Chicago\"], [\"Little Rock\"], [\"San Francisco\"]], textbox)\n     dropdown.change(update_examples, dropdown, examples.dataset)\n \n demo.launch()\n\n", "heading1": "Examples", "source_page_url": "https://gradio.app/docs/gradio/examples", "source_page_title": "Gradio - Examples Docs"}, {"text": "calculator_blocks\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/examples", "source_page_title": "Gradio - Examples Docs"}, {"text": "This class allows you to pass custom error messages to the user. You can do\nso by raising a gr.Error(\"custom message\") anywhere in the code, and when that\nline is executed the custom message will appear in a modal on the demo.\n\nYou can control how long the error message is displayed with the\n`duration` parameter. If it\u2019s `None`, the message will be displayed\nuntil the user closes it. If it\u2019s a number, it will be shown for that many\nseconds.\n\nYou can also hide the error modal from being shown in the UI by setting\n`visible=False`.\n\nBelow is a demo of how different values of duration control the error,\ninfo, and warning messages. 
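The documented `duration` semantics (None or 0 means the modal persists until dismissed, any other number is seconds on screen) can be sketched with a stand-in class. `DemoError` is a hypothetical illustration mirroring the documented parameters, not Gradio's actual `gr.Error`.

```python
class DemoError(Exception):
    # Hypothetical stand-in mirroring gr.Error's documented parameters;
    # not Gradio's actual class.
    def __init__(self, message="Error raised.", duration=10, visible=True, title="Error"):
        super().__init__(message)
        self.message = message
        self.duration = duration
        self.visible = visible
        self.title = title

    def is_persistent(self):
        # Per the documented semantics: None or 0 means the modal
        # stays on screen until the user closes it.
        return self.duration is None or self.duration == 0
```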
You can see the code\n[here](https://huggingface.co/spaces/freddyaboulton/gradio-error-duration/blob/244331cf53f6b5fa2fd406ece3bf55c6ccb9f5f2/app.py#L17).\n\n![modal_control](https://github.com/gradio-app/gradio/assets/41651716/f0977bcd-eaec-4eca-a2fd-ede95fdb8fd2)\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/error", "source_page_title": "Gradio - Error Docs"}, {"text": "import gradio as gr\n\ndef divide(numerator, denominator):\n    if denominator == 0:\n        raise gr.Error(\"Cannot divide by zero!\")\n\ngr.Interface(divide, [\"number\", \"number\"], \"number\").launch()\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/error", "source_page_title": "Gradio - Error Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n message: str\n\ndefault `= \"Error raised.\"`\n\nThe error message to be displayed to the user. Can be HTML, which will be\nrendered in the modal.\n\n\n \n \n duration: float | None\n\ndefault `= 10`\n\nThe duration in seconds to display the error message. If None or 0, the error\nmessage will be displayed until the user closes it.\n\n\n \n \n visible: bool\n\ndefault `= True`\n\nWhether the error message should be displayed in the UI.\n\n\n \n \n title: str\n\ndefault `= \"Error\"`\n\nThe title to be displayed to the user at the top of the error modal.\n\n\n \n \n print_exception: bool\n\ndefault `= True`\n\nWhether to print the traceback of the error to the console when the error is\nraised.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/error", "source_page_title": "Gradio - Error Docs"}, {"text": "calculatorblocks_chained_events\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/error", "source_page_title": "Gradio - Error Docs"}, {"text": "Creates a \"Sign In\" button that redirects the user to sign in with Hugging\nFace OAuth. 
Once the user is signed in, the button will act as a logout\nbutton, and you can retrieve a signed-in user's profile by adding a parameter\nof type `gr.OAuthProfile` to any Gradio function. This will only work if this\nGradio app is running in a Hugging Face Space. Permissions for the OAuth app\ncan be configured in the Spaces README file. For local development,\ninstead of OAuth, the local Hugging Face account that is logged in (via `hf\nauth login`) will be available through the `gr.OAuthProfile` object. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "**As input component** : (Rarely used) the `str` corresponding to the\nbutton label when the button is clicked\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: str | None\n )\n \t...\n\n \n\n**As output component** : string corresponding to the button label\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: str\n\ndefault `= \"Sign in with Hugging Face\"`\n\n\n \n \n logout_value: str\n\ndefault `= \"Logout ({})\"`\n\nThe text to display when the user is signed in. The string should contain a\nplaceholder for the username with a call-to-action to logout, e.g. 
\"Logout\n({})\".\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\n\n \n \n variant: Literal['primary', 'secondary', 'stop', 'huggingface']\n\ndefault `= \"huggingface\"`\n\n\n \n \n size: Literal['sm', 'md', 'lg']\n\ndefault `= \"lg\"`\n\n\n \n \n icon: str | Path | None\n\ndefault `= \"/home/runner/work/gradio/gradio/gradio/icons/huggingface-\nlogo.svg\"`\n\n\n \n \n link: str | None\n\ndefault `= None`\n\n\n \n \n link_target: Literal['_self', '_blank', '_parent', '_top']\n\ndefault `= \"_self\"`\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\n\n \n \n interactive: bool\n\ndefault `= True`\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\n\n \n \n render: bool\n\ndefault `= True`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\n\n \n \n min_width: int | None\n\ndefault `= None`\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.LoginButton`| \"loginbutton\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "login_with_huggingface\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. 
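The `logout_value` placeholder described in the parameters above behaves like an ordinary `str.format` substitution of the username. A sketch with a hypothetical helper (not part of Gradio):

```python
def login_button_label(value, logout_value, username=None):
    # Hypothetical sketch of the documented labels: `value` is shown
    # while signed out; once signed in, the username is substituted
    # into the "{}" placeholder of `logout_value`.
    if username is None:
        return value
    return logout_value.format(username)
```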
When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe LoginButton component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`LoginButton.click(fn, \u00b7\u00b7\u00b7)`| Triggered when the Button is clicked. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. 
If set to a\nstring, the endpoint will be exposed in the API docs wi", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": " api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preproces", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "lt `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another component's .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`), no new\nsubmissions are allowed while an event is pending. 
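Stepping back to the `batch` contract described above: with `batch=True` the function receives one list per input component (equal lengths) and must return a tuple of lists, one per output component, even for a single output. A plain illustrative function (hypothetical, not tied to any component):

```python
def batched_concat(words, suffixes):
    # Illustrative batch=True-style function: one list per input
    # component comes in (equal lengths, up to max_batch_size), and a
    # tuple of lists must come out -- even with only one output component.
    assert len(words) == len(suffixes)
    return ([w + s for w, s in zip(words, suffixes)],)
```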
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter i", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "one to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/loginbutton", "source_page_title": "Gradio - Loginbutton Docs"}, {"text": "Tab (or its alias TabItem) is a layout element. 
Components defined within\nthe Tab will be visible when this tab is selected tab.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": "with gr.Blocks() as demo:\n with gr.Tab(\"Lion\"):\n gr.Image(\"lion.jpg\")\n gr.Button(\"New Lion\")\n with gr.Tab(\"Tiger\"):\n gr.Image(\"tiger.jpg\")\n gr.Button(\"New Tiger\")\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nThe visual label for the tab\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, Tab will be hidden.\n\n\n \n \n interactive: bool\n\ndefault `= True`\n\nIf False, Tab will not be clickable.\n\n\n \n \n id: int | str | None\n\ndefault `= None`\n\nAn optional identifier for the tab, required if you wish to control the\nselected tab from a predict function.\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of the
<div> containing the\ncontents of the Tab layout. The same string followed by \"-button\" is attached\nto the Tab button. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional string or list of strings that are assigned as the class of this\ncomponent in the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent elements. 1 or greater indicates the Tab\nwill expand in size.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, this layout will not be rendered in the Blocks context. Should be\nused if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= None`\n\n\n \n \n render_children: bool\n\ndefault `= False`\n\nIf True, the children of this Tab will be rendered on the page (but hidden)\nwhen the Tab is visible but inactive. This can be useful if you want to ensure\nthat any components (e.g. 
videos or audio) within the Tab are pre-loaded\nbefore the user clicks on the Tab.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": "", "heading1": "Methods", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": "
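A sketch of how a handler might consume the select event's data, using a hypothetical stand-in for `gradio.SelectData` (whose documented fields are `value`, carrying the Tab's label, and `selected`, its new state); this dataclass is not Gradio's actual class.

```python
from dataclasses import dataclass

@dataclass
class FakeSelectData:
    # Hypothetical stand-in for gradio.SelectData with its documented
    # fields: `value` carries the Tab's label, `selected` its new state.
    value: str
    selected: bool

def on_tab_select(evt: FakeSelectData) -> str:
    state = "selected" if evt.selected else "deselected"
    return f"Tab {evt.value!r} was {state}"
```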
gradio.Tab.select(\u00b7\u00b7\u00b7)\n\nDescription\n", "heading1": "select", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": "
Event listener for when the user selects or deselects the Tab. Uses event data\ngradio.SelectData to carry `value` referring to the label of the Tab, and\n`selected` to refer to the selected state of the Tab. See the EventData documentation on how to\nuse this event data.\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ni", "heading1": "select", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": "eral['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. 
If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n", "heading1": "select", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": " shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.

max_batch_size: int

default `= 4`

Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True).

preprocess: bool

default `= True`

If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).

postprocess: bool

default `= True`

If False, will not run postprocessing of component data before returning 'fn'
output to the browser.

cancels: dict[str, Any] | list[dict[str, Any]] | None

default `= None`

A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click method. Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.

trigger_mode: Literal['once', 'multiple', 'always_last'] | None

default `= None`

If "once" (default for all events except `.change()`), no new
submissions are allowed while an event is pending.
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visi", "heading1": "select", "source_page_url": "https://gradio.app/docs/gradio/tab", "source_page_title": "Gradio - Tab Docs"}, {"text": "I docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will
automatically be set to "private".

time_limit: int | None

default `= None`

stream_every: float

default `= 0.5`

key: int | str | tuple[int | str, ...] | None

default `= None`

A unique key for this event listener to be used in @gr.render(). If set, this
value identifies an event as identical across re-renders when the key is
identical.

validator: Callable | None

default `= None`

Optional validation function to run before the main function. If provided,
this function will be executed first with queue=False, and only if it
completes successfully will the main function be called. The validator
receives the same inputs as the main function and should return a
`gr.validate()` for each input value.

Displays a classification label, along with confidence scores of top
categories, if provided. As this component does not accept user input, it is
rarely used as an input component.
**As input component** : Depending on the value, passes the label as a `str | int | float`, or the labels and confidences as a `dict[str, float]`.

Your function should accept one of these types:

    def predict(
        value: dict[str, float] | str | int | float | None
    )
        ...

**As output component** : Expects a `dict[str, float]` of classes and confidences, or a `str` with just the class, or an `int | float` for regression outputs, or a `str` path to a .json file containing a json dictionary in one of the preceding formats.

Your function should return one of these types:

    def predict(···) -> dict[str, float] | str | int | float | None
        ...
        return value

Parameters ▼

value: dict[str, float] | str | float | Callable | None

default `= None`

Default value to show in the component. If a str or number is provided, simply
displays the string or number. If a `dict[str, float]` of classes and
confidences is provided, displays the top class on top and the
`num_top_classes` below, along with their confidence bars. If a function is
provided, the function will be called each time the app loads to set the
initial value of this component.

num_top_classes: int | None

default `= None`

number of most confident classes to show.

label: str | I18nData | None

default `= None`

the label for this component. Appears above the component and is also used as
the header if there is a table of examples for this component.
If None and
used in a `gr.Interface`, the label will be the name of the parameter this
component is assigned to.

every: Timer | float | None

default `= None`

Continuously calls `value` to recalculate it if `value` is a function (has no
effect otherwise). Can provide a Timer whose tick resets `value`, or a float
that provides the regular interval for the reset Timer.

inputs: Component | list[Component] | set[Component] | None

default `= None`

Components that are used as inputs to calculate `value` if `value` is a
function (has no effect otherwise). `value` is recalculated any time the
inputs change.

show_label: bool | None

default `= None`

if True, will display the label.

container: bool

default `= True`

If True, will place the component in a container - providing some extra
padding around the border.

scale: int | None

default `= None`

relative size compared to adjacent Components. For example, if Components A
and B are in a Row, and A has scale=2 and B has scale=1, A will be twice as
wide as B. Should be an integer. scale applies in Rows, and to top-level
Components in Blocks where fill_height=True.

min_width: int

default `= 160`

minimum pixel width, will wrap if there is not sufficient screen space to
satisfy this value. If a certain scale value results in this Component being
narrower than min_width, the min_width parameter will be respected first.

visible: bool | Literal['hidden']

default `= True`

If False, component will be hidden.
If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not render be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. 
Inside a gr.render()
function, if a component is re-rendered with the same key, these (and only
these) parameters will be preserved in the UI (if they have been changed by
the user or an event listener) instead of being re-rendered based on the
values provided to the constructor.

color: str | None

default `= None`

The background color of the label (either a valid css color name or
hexadecimal string).

show_heading: bool

default `= True`

If False, the heading will not be displayed if a dictionary of labels and
confidences is provided. The heading will still be visible if the value is a
string or number.

buttons: list[Button] | None

default `= None`

A list of gr.Button() instances to show in the top right corner of the
component. Custom buttons will appear in the toolbar with their configured
icon and/or label, and clicking them will trigger any .click() events
registered on the button.

Class | Interface String Shortcut | Initialization
--- | --- | ---
`gradio.Label` | "label" | Uses default values

Description

Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app.
When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.

Supported Event Listeners

The Label component supports the following event listeners. Each event
listener takes the same parameters, which are listed in the Event Parameters
table below.

Listener | Description
--- | ---
`Label.change(fn, ···)` | Triggered when the value of the Label changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input.
`Label.select(fn, ···)` | Event listener for when the user selects or deselects the Label. Uses event data gradio.SelectData to carry `value` referring to the label of the Label, and `selected` to refer to the state of the Label. See the EventData documentation on how to use this event data.

Event Parameters

Parameters ▼

fn: Callable | None | Literal['decorator']

default `= "decorator"`

the function to call when this event is triggered. Often a machine learning
model's prediction function. Each parameter of the function corresponds to one
input component, and the function should return a single value or a tuple of
values, with each element in the tuple corresponding to one output component.

inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as inputs. If the function takes no inputs,
this should be an empty list.

outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as outputs.
If the function returns no
outputs, this should be an empty list.

api_name: str | None

default `= None`

defines how the endpoint appears in the API docs. Can be a string or None. If
set to a string, the endpoint will be exposed in the API docs with the given
name. If None (default), the name of the function will be used as the API
endpoint.

api_description: str | None | Literal[False]

default `= None`

Description of the API endpoint. Can be a string, None, or False. If set to a
string, the endpoint will be exposed in the API docs with the given
description. If None, the function's docstring will be used as the API
endpoint description. If False, then no description will be displayed in the
API docs.

scroll_to_output: bool

default `= False`

If True, will scroll to the output component on completion.

show_progress: Literal['full', 'minimal', 'hidden']

default `= "full"`

how to show the progress animation while the event is running: "full" shows a
spinner which covers the output component area as well as a runtime display in
the upper right corner, "minimal" only shows the runtime display, and "hidden"
shows no progress animation at all.

show_progress_on: Component | list[Component] | None

default `= None`

Component or list of components to show the progress animation on.
If None,
will show the progress animation on all of the output components.

queue: bool

default `= True`

If True, will place the request on the queue, if the queue has been enabled.
If False, will not put this event on the queue, even if the queue has been
enabled. If None, will use the queue setting of the gradio app.

batch: bool

default `= False`

If True, then the function should process a batch of inputs, meaning that it
should accept a list of input values for each parameter. The lists should be
of equal length (and be up to length `max_batch_size`). The function is then
*required* to return a tuple of lists (even if there is only 1 output
component), with each list in the tuple corresponding to one output component.

max_batch_size: int

default `= 4`

Maximum number of inputs to batch together if this is called from the queue
(only relevant if batch=True).

preprocess: bool

default `= True`

If False, will not run preprocessing of component data before running 'fn'
(e.g. leaving it as a base64 string if this method is called with the `Image`
component).

postprocess: bool

default `= True`

If False, will not run postprocessing of component data before returning 'fn'
output to the browser.

cancels: dict[str, Any] | list[dict[str, Any]] | None

default `= None`

A list of other events to cancel when this listener is triggered. For example,
setting cancels=[click_event] will cancel the click_event, where click_event
is the return value of another component's .click method.
Functions that have
not yet run (or generators that are iterating) will be cancelled, but
functions that are currently running will be allowed to finish.

trigger_mode: Literal['once', 'multiple', 'always_last'] | None

default `= None`

If "once" (default for all events except `.change()`), no new submissions are
allowed while an event is pending. If set to "multiple", unlimited submissions
are allowed while pending, and "always_last" (default for `.change()` and
`.key_up()` events) allows a second submission after the pending event is
complete.

js: str | Literal[True] | None

default `= None`

Optional frontend js method to run before running 'fn'. Input arguments for
the js method are the values of 'inputs' and 'outputs'; the return should be a
list of values for the output components.

concurrency_limit: int | None | Literal['default']

default `= "default"`

If set, this is the maximum number of this event that can be running
simultaneously. Can be set to None to mean no concurrency_limit (any number of
this event can be running simultaneously). Set to "default" to use the default
concurrency limit (defined by the `default_concurrency_limit` parameter in
`Blocks.queue()`, which itself is 1 by default).

concurrency_id: str | None

default `= None`

If set, this is the id of the concurrency group. Events with the same
concurrency_id will be limited by the lowest set concurrency_limit.

api_visibility: Literal['public', 'private', 'undocumented']

default `= "public"`

controls the visibility and accessibility of this endpoint.
Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main fun", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/label", "source_page_title": "Gradio - Label Docs"}, {"text": "ault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/label", "source_page_title": "Gradio - Label Docs"}, {"text": "Creates an image component that, as an input, can be used to upload and\nedit images using simple editing tools such as brushes, strokes, cropping, and\nlayers. Or, as an output, this component can be used to display images. 
**As input component** : Passes the uploaded images as an instance of
EditorValue, which is just a `dict` with keys: 'background', 'layers', and
'composite'. The values corresponding to 'background' and 'composite' are
images, while 'layers' is a `list` of images. The images are of type
`PIL.Image`, `np.array`, or `str` filepath, depending on the `type` parameter.

Your function should accept one of these types:

    def predict(
        value: EditorValue | None
    )
        ...

**As output component** : Expects an EditorValue, which is just a dictionary
with keys: 'background', 'layers', and 'composite'. The values corresponding
to 'background' and 'composite' should be images or None, while `layers`
should be a list of images. Images can be of type `PIL.Image`, `np.array`, or
`str` filepath/URL. Or, the value can be simply a single image (`ImageType`),
in which case it will be used as the background.

Your function should return one of these types:

    def predict(···) -> EditorValue | ImageType | None
        ...
        return value

Parameters ▼

value: EditorValue | ImageType | None

default `= None`

Optional initial image(s) to populate the image editor. Should be a dictionary
with keys: `background`, `layers`, and `composite`. The values corresponding
to `background` and `composite` should be images or None, while `layers`
should be a list of images. Images can be of type PIL.Image, np.array, or str
filepath/URL.
Or, the value can be a callable, in which case the function will
be called whenever the app loads to set the initial value of the component.

height: int | str | None

default `= None`

The height of the component, specified in pixels if a number is passed, or in
CSS units if a string is passed. This has no effect on the preprocessed image
files or numpy arrays, but will affect the displayed images. Beware of
conflicting values with the canvas_size parameter. If the canvas_size is
larger than the height, the editing canvas will not fit in the component.

width: int | str | None

default `= None`

The width of the component, specified in pixels if a number is passed, or in
CSS units if a string is passed. This has no effect on the preprocessed image
files or numpy arrays, but will affect the displayed images. Beware of
conflicting values with the canvas_size parameter. If the canvas_size is
larger than the width, the editing canvas will not fit in the component.

image_mode: Literal['1', 'L', 'P', 'RGB', 'RGBA', 'CMYK', 'YCbCr', 'LAB', 'HSV', 'I', 'F']

default `= "RGBA"`

"RGB" if color, or "L" if black and white. See
https://pillow.readthedocs.io/en/stable/handbook/concepts.html for other
supported image modes and their meaning.

sources: Iterable[Literal['upload', 'webcam', 'clipboard']] | Literal['upload', 'webcam', 'clipboard'] | None

default `= ('upload', 'webcam', 'clipboard')`

List of sources that can be used to set the background image.
\"upload\" creates\na box where user can drop an image file, \"webcam\" allows user to take snapshot\nfrom their webcam, \"clipboard\" allows users to paste an image from the\nclipboard.\n\n\n \n \n type: Literal['numpy', 'pil', 'filepath']\n\ndefault `= \"numpy\"`\n\nThe format the images are converted to before being passed into the prediction\nfunction. \"numpy\" converts the images to numpy arrays with shape (height,\nwidth, 3) and values from 0 to 255, \"pil\" converts the images to PIL image\nobjects, \"filepath\" passes images as str filepaths to temporary copies of the\nimages.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component. Appears above the component and is also used as\nthe header if there are a table of examples for this component. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n buttons: list[Literal['download', 'share', 'fullscreen']] | None\n\ndefault `= None`\n\nA list of buttons to show in the corner of the component. Valid options are\n\"download\" to download the image, \"share\" to share to Hugging Face Spaces\nDiscussions, and \"fullscreen\" to view in fullscreen mode. 
By default, all
buttons are shown.

container: bool

default `= True`

If True, will place the component in a container - providing some extra
padding around the border.

scale: int | None

default `= None`

relative size compared to adjacent Components. For example, if Components A
and B are in a Row, and A has scale=2 and B has scale=1, A will be twice as
wide as B. Should be an integer. scale applies in Rows, and to top-level
Components in Blocks where fill_height=True.

min_width: int

default `= 160`

minimum pixel width, will wrap if there is not sufficient screen space to
satisfy this value. If a certain scale value results in this Component being
narrower than min_width, the min_width parameter will be respected first.

interactive: bool | None

default `= None`

if True, will allow users to upload and edit an image; if False, can only be
used to display images. If not provided, this is inferred based on whether the
component is used as an input or output.

visible: bool | Literal['hidden']

default `= True`

If False, component will be hidden. If "hidden", component will be visually
hidden and will not take up space in the layout, but will still exist in the
DOM.

elem_id: str | None

default `= None`

An optional string that is assigned as the id of this component in the HTML
DOM. Can be used for targeting CSS styles.

elem_classes: list[str] | str | None

default `= None`

An optional list of strings that are assigned as the classes of this component
in the HTML DOM.
Can be used for targeting CSS styles.

render: bool

default `= True`

If False, component will not be rendered in the Blocks context. Should be used
if the intention is to assign event listeners now but render the component
later.

key: int | str | tuple[int | str, ...] | None

default `= None`

in a gr.render, Components with the same key across re-renders are treated as
the same component, not a new component. Properties set in 'preserved_by_key'
are not reset across a re-render.

preserved_by_key: list[str] | str | None

default `= "value"`

A list of parameters from this component's constructor. Inside a gr.render()
function, if a component is re-rendered with the same key, these (and only
these) parameters will be preserved in the UI (if they have been changed by
the user or an event listener) instead of being re-rendered based on the
values provided to the constructor.

placeholder: str | None

default `= None`

Custom text for the upload area. Overrides default upload messages when
provided. Accepts new lines and `#` to designate a heading.

transforms: Iterable[Literal['crop', 'resize']] | None

default `= ('crop', 'resize')`

The transform tools to make available to users. "crop" allows the user to
crop the image, and "resize" allows the user to resize it.

eraser: Eraser | None | Literal[False]

default `= None`

The options for the eraser tool in the image editor. Should be an instance of
the `gr.Eraser` class, or None to use the default settings. Can also be False
to hide the eraser tool.
See `gr.Eraser` docs.

brush: Brush | None | Literal[False]

default `= None`

The options for the brush tool in the image editor. Should be an instance of
the `gr.Brush` class, or None to use the default settings. Can also be False
to hide the brush tool, which will also hide the eraser tool. See `gr.Brush`
docs.

format: str

default `= "webp"`

Format to save the image in if it does not already have a valid format (e.g.
if the image is being returned to the frontend as a numpy array or PIL Image).
The format should be supported by the PIL library. This parameter has no
effect on SVG files.

layers: bool | LayerOptions

default `= True`

The options for the layer tool in the image editor. Can be a boolean or an
instance of the `gr.LayerOptions` class. If True, will allow users to add
layers to the image. If False, the layers option will be hidden. If an
instance of `gr.LayerOptions`, it will be used to configure the layer tool.
See `gr.LayerOptions` docs.

canvas_size: tuple[int, int]

default `= (800, 800)`

The initial size of the canvas in pixels. The first value is the width and the
second value is the height.
If `fixed_canvas` is `True`, uploaded images will
be rescaled to fit the canvas size while preserving the aspect ratio.
Otherwise, the canvas size will change to match the size of an uploaded image.

fixed_canvas: bool

default `= False`

If True, the canvas size will not change based on the size of the background
image, and the image will be rescaled to fit (while preserving the aspect
ratio) and placed in the center of the canvas.

webcam_options: WebcamOptions | None

default `= None`

The options for the webcam tool in the image editor. Can be an instance of the
`gr.WebcamOptions` class, or None to use the default settings. See
`gr.WebcamOptions` docs.

Class | Interface String Shortcut | Initialization
--- | --- | ---
`gradio.ImageEditor` | "imageeditor" | Uses default values
`gradio.Sketchpad` | "sketchpad" | Uses sources=(), brush=Brush(colors=["000000"], color_mode="fixed")
`gradio.Paint` | "paint" | Uses sources=()
`gradio.ImageMask` | "imagemask" | Uses brush=Brush(colors=["000000"], color_mode="fixed")

Demos

image_editor

Description

Event listeners allow you to respond to user interactions with the UI
components you've defined in a Gradio Blocks app. When a user interacts with
an element, such as changing a slider value or uploading an image, a function
is called.

Supported Event Listeners

The ImageEditor component supports the following event listeners.
Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`ImageEditor.clear(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clears\nthe ImageEditor using the clear button for the component. \n`ImageEditor.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the ImageEditor\nchanges either because of user input (e.g. a user types in a textbox) OR\nbecause of a function update (e.g. an image receives a value from the output\nof an event trigger). See `.input()` for a listener that is only triggered by\nuser input. \n`ImageEditor.input(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user changes\nthe value of the ImageEditor. \n`ImageEditor.select(fn, \u00b7\u00b7\u00b7)`| Event listener for when the user selects or\ndeselects the ImageEditor. Uses event data gradio.SelectData to carry `value`\nreferring to the label of the ImageEditor, and `selected` to refer to state of\nthe ImageEditor. See EventData documentation on how to use this event data \n`ImageEditor.upload(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nuploads a file into the ImageEditor. \n`ImageEditor.apply(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user applies\nchanges to the ImageEditor through an integrated UI action. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "achine learning\nmodel's prediction function. 
Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another component's .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "rn value of another component's .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "umented\" (hidden from API docs but\ncallable by clients and via gr.load). 
If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "", "heading1": "Helper Classes", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "gradio.Brush(\u00b7\u00b7\u00b7)\n\nDescription\n\nA dataclass for specifying options for the brush tool in the ImageEditor\ncomponent. An instance of this class can be passed to the `brush` parameter of\n`gr.ImageEditor`.\n\nInitialization\n\nParameters \u25bc\n\n\n \n \n default_size: int | Literal['auto']\n\ndefault `= \"auto\"`\n\nThe default radius, in pixels, of the brush tool. Defaults to \"auto\" in which\ncase the radius is automatically determined based on the size of the image\n(generally 1/50th of smaller dimension).\n\n\n \n \n colors: list[str | tuple[str, float]] | str | tuple[str, float] | None\n\ndefault `= None`\n\nA list of colors to make available to the user when using the brush. Defaults\nto a list of 5 colors.\n\n\n \n \n default_color: str | tuple[str, float] | None\n\ndefault `= None`\n\nThe default color of the brush. 
Defaults to the first color in the `colors`\nlist.\n\n\n \n \n color_mode: Literal['fixed', 'defaults']\n\ndefault `= \"defaults\"`\n\nIf set to \"fixed\", user can only select from among the colors in `colors`. If\n\"defaults\", the colors in `colors` are provided as a default palette, but the\nuser can also select any color using a color picker.\n\n", "heading1": "Brush", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "gradio.Eraser(\u00b7\u00b7\u00b7)\n\nDescription\n\nA dataclass for specifying options for the eraser tool in the ImageEditor\ncomponent. An instance of this class can be passed to the `eraser` parameter\nof `gr.ImageEditor`.\n\nInitialization\n\nParameters \u25bc\n\n\n \n \n default_size: int | Literal['auto']\n\ndefault `= \"auto\"`\n\nThe default radius, in pixels, of the eraser tool. Defaults to \"auto\" in which\ncase the radius is automatically determined based on the size of the image\n(generally 1/50th of smaller dimension).\n\n", "heading1": "Eraser", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "gradio.LayerOptions(\u00b7\u00b7\u00b7)\n\nDescription\n\nA dataclass for specifying options for the layer tool in the ImageEditor\ncomponent. An instance of this class can be passed to the `layers` parameter\nof `gr.ImageEditor`.\n\nInitialization\n\nParameters \u25bc\n\n\n \n \n allow_additional_layers: bool\n\ndefault `= True`\n\nIf True, users can add additional layers to the image. If False, the add layer\nbutton will not be shown.\n\n\n \n \n layers: list[str] | None\n\ndefault `= None`\n\nA list of layers to make available to the user when using the layer tool. 
At least one\nlayer must be provided; if the list is empty, a layer will be\ngenerated automatically.\n\n\n \n \n disabled: bool\n\ndefault `= False`\n\n", "heading1": "Layer Options", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "gradio.WebcamOptions(\u00b7\u00b7\u00b7)\n\nDescription\n\nA dataclass for specifying options for the webcam tool in the ImageEditor\ncomponent. An instance of this class can be passed to the `webcam_options`\nparameter of `gr.ImageEditor`.\n\nInitialization\n\nParameters \u25bc\n\n\n \n \n mirror: bool\n\ndefault `= True`\n\nIf True, the webcam will be mirrored.\n\n\n \n \n constraints: dict[str, Any] | None\n\ndefault `= None`\n\nA dictionary of constraints for the webcam.\n\n", "heading1": "Webcam Options", "source_page_url": "https://gradio.app/docs/gradio/imageeditor", "source_page_title": "Gradio - Imageeditor Docs"}, {"text": "Creates a bar plot component to display data from a pandas DataFrame. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "**As input component** : The data to display in a bar plot.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: AltairPlotData\n )\n \t...\n\n \n\n**As output component** : Expects a pandas DataFrame containing the data to\ndisplay in the bar plot. 
The DataFrame should contain at least two columns,\none for the x-axis (corresponding to this component's `x` argument) and one\nfor the y-axis (corresponding to `y`).\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> pd.DataFrame | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: pd.DataFrame | Callable | None\n\ndefault `= None`\n\nThe pandas dataframe containing the data to display in the plot.\n\n\n \n \n x: str | None\n\ndefault `= None`\n\nColumn corresponding to the x axis. Column can be numeric, datetime, or\nstring/category.\n\n\n \n \n y: str | None\n\ndefault `= None`\n\nColumn corresponding to the y axis. Column must be numeric.\n\n\n \n \n color: str | None\n\ndefault `= None`\n\nColumn corresponding to series, visualized by color. Column must be\nstring/category.\n\n\n \n \n title: str | None\n\ndefault `= None`\n\nThe title to display on top of the chart.\n\n\n \n \n x_title: str | None\n\ndefault `= None`\n\nThe title given to the x axis. By default, uses the value of the x parameter.\n\n\n \n \n y_title: str | None\n\ndefault `= None`\n\nThe title given to the y axis. By default, uses the value of the y parameter.\n\n\n \n \n color_title: str | None\n\ndefault `= None`\n\nThe title given to the color legend. By default, uses the value of color\nparameter.\n\n\n \n \n x_bin: str | float | None\n\ndefault `= None`\n\nGrouping used to cluster x values. If x column is numeric, should be number to\nbin the x values. If x column is datetime, should be string such as \"1h\",\n\"15m\", \"10s\", using \"s\", \"m\", \"h\", \"d\" suffixes.\n\n\n \n \n y_aggregate: Literal['sum', 'mean', 'median', 'min', 'max', 'count'] | None\n\ndefault `= None`\n\nAggregation function used to aggregate y values, used if x_bin is provided or\nx is a string/category. 
Must be one of \"sum\", \"mean\", \"median\", \"min\", \"max\", \"count\".\n\n\n \n \n color_map: dict[str, str] | None\n\ndefault `= None`\n\nMapping of series to color names or codes. For example, {\"success\": \"green\",\n\"fail\": \"#FF8888\"}.\n\n\n \n \n colors_in_legend: list[str] | None\n\ndefault `= None`\n\nList containing column names of the series to show in the legend. By default,\nall series are shown.\n\n\n \n \n x_lim: list[float | None] | None\n\ndefault `= None`\n\nA tuple or list containing ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "ne`\n\nList containing column names of the series to show in the legend. By default,\nall series are shown.\n\n\n \n \n x_lim: list[float | None] | None\n\ndefault `= None`\n\nA tuple or list containing the limits for the x-axis, specified as [x_min,\nx_max]. To fix only one of these values, set the other to None, e.g. [0, None]\nto scale from 0 to the maximum value. If x column is datetime type, x_lim\nshould be timestamps.\n\n\n \n \n y_lim: list[float | None]\n\ndefault `= None`\n\nA tuple or list containing the limits for the y-axis, specified as [y_min,\ny_max]. To fix only one of these values, set the other to None, e.g. [0, None]\nto scale from 0 to the maximum value.\n\n\n \n \n x_label_angle: float\n\ndefault `= 0`\n\nThe angle of the x-axis labels in degrees offset clockwise.\n\n\n \n \n y_label_angle: float\n\ndefault `= 0`\n\nThe angle of the y-axis labels in degrees offset clockwise.\n\n\n \n \n x_axis_labels_visible: bool | Literal['hidden']\n\ndefault `= True`\n\nWhether the x-axis labels should be visible. 
Can be hidden when many x-axis\nlabels are present.\n\n\n \n \n caption: str | I18nData | None\n\ndefault `= None`\n\nThe (optional) caption to display below the plot.\n\n\n \n \n sort: Literal['x', 'y', '-x', '-y'] | list[str] | None\n\ndefault `= None`\n\nThe sorting order of the x values, if x column is type string/category. Can be\n\"x\", \"y\", \"-x\", \"-y\", or list of strings that represent the order of the\ncategories.\n\n\n \n \n tooltip: Literal['axis', 'none', 'all'] | list[str]\n\ndefault `= \"axis\"`\n\nThe tooltip to display when hovering on a point. \"axis\" shows the values for\nthe axis columns, \"all\" shows all column values, and \"none\" shows no tooltips.\nCan also provide a list of strings representing columns to show in the\ntooltip, which will be displayed along with axis values.\n\n\n \n \n height: int | None\n\ndefault `= None`\n\nThe height of the plot in pixels.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nThe (optional) label ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "ed along with axis values.\n\n\n \n \n height: int | None\n\ndefault `= None`\n\nThe height of the plot in pixels.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nThe (optional) label to display on the top left corner of the plot.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nWhether the label should be displayed.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nIf True, will place the component in a container - providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. 
scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinuously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | Set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nWhether the plot should be visible.\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are a", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "signed as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not be rendered in the Blocks context. 
Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n buttons: list[Literal['fullscreen', 'export']] | None\n\ndefault `= None`\n\nA list of buttons to show for the component. Valid options are \"fullscreen\"\nand \"export\". The \"fullscreen\" button allows the user to view the plot in\nfullscreen mode. The \"export\" button allows the user to export and download\nthe current view of the plot as a PNG image. By default, no buttons are shown.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.BarPlot`| \"barplot\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "bar_plot_demo\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. 
When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe BarPlot component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`BarPlot.select(fn, \u00b7\u00b7\u00b7)`| Event listener for when the user selects or\ndeselects the NativePlot. Uses event data gradio.SelectData to carry `value`\nreferring to the label of the NativePlot, and `selected` to refer to state of\nthe NativePlot. See EventData documentation on how to use this event data \n`BarPlot.double_click(fn, \u00b7\u00b7\u00b7)`| Triggered when the NativePlot is double\nclicked. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. 
If\nset to a string, the endpoint will be exposed in the API docs ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "st.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. 
If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "ues for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another component's .click method. 
Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "t arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. 
Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "ion be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/barplot", "source_page_title": "Gradio - Barplot Docs"}, {"text": "Button that clears the value of a component or a list of components when\nclicked. 
It is instantiated with the list of components to clear.

**As input component** : (Rarely used) the `str` corresponding to the button label when the button is clicked

Your function should accept one of these types:

    def predict(
    	value: str | None
    )
    	...

**As output component** : string corresponding to the button label

Your function should return one of these types:

    def predict(···) -> str | None
    	...
    	return value

Parameters ▼

components: None | list[Component] | Component

default `= None`

value: str

default `= "Clear"`

default text for the button to display. If a function is provided, the function will be called each time the app loads to set the initial value of this component.

every: Timer | float | None

default `= None`

continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.

inputs: Component | list[Component] | set[Component] | None

default `= None`

components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise). `value` is recalculated any time the inputs change.

variant: Literal['primary', 'secondary', 'stop', 'huggingface']

default `= "secondary"`

sets the background and text color of the button.
Use 'primary' for main call-to-action buttons, 'secondary' for a more subdued style, 'stop' for a stop button, 'huggingface' for a black background with white text, consistent with Hugging Face's button styles.

size: Literal['sm', 'md', 'lg']

default `= "lg"`

size of the button. Can be "sm", "md", or "lg".

icon: str | Path | None

default `= None`

URL or path to the icon file to display within the button. If None, no icon will be displayed.

link: str | None

default `= None`

URL to open when the button is clicked. If None, no link will be used.

link_target: Literal['_self', '_blank', '_parent', '_top']

default `= "_self"`

visible: bool | Literal['hidden']

default `= True`

If False, component will be hidden. If "hidden", component will be visually hidden and not take up space in the layout but still exist in the DOM

interactive: bool

default `= True`

if False, the Button will be in a disabled state.

elem_id: str | None

default `= None`

an optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.

elem_classes: list[str] | str | None

default `= None`

an optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.

render: bool

default `= True`

if False, component will not be rendered in the Blocks context.
Should be used if the intention is to assign event listeners now but render the component later.

key: int | str | tuple[int | str, ...] | None

default `= None`

in a gr.render, Components with the same key across re-renders are treated as the same component, not a new component. Properties set in 'preserved_by_key' are not reset across a re-render.

preserved_by_key: list[str] | str | None

default `= "value"`

A list of parameters from this component's constructor. Inside a gr.render() function, if a component is re-rendered with the same key, these (and only these) parameters will be preserved in the UI (if they have been changed by the user or an event listener) instead of re-rendered based on the values provided to the constructor.

scale: int | None

default `= None`

relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.

min_width: int | None

default `= None`

minimum pixel width, will wrap if not sufficient screen space to satisfy this value.
If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"undocumented\"`\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/clearbutton", "source_page_title": "Gradio - Clearbutton Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.ClearButton`| \"clearbutton\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/clearbutton", "source_page_title": "Gradio - Clearbutton Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe ClearButton component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`ClearButton.add(fn, \u00b7\u00b7\u00b7)`| Adds a component or list of components to the list\nof components that will be cleared when the button is clicked. \n`ClearButton.click(fn, \u00b7\u00b7\u00b7)`| Triggered when the Button is clicked. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n components: None | Component | list[Component]\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/clearbutton", "source_page_title": "Gradio - Clearbutton Docs"}, {"text": "Creates a checkbox that can be set to `True` or `False`. Can be used as an\ninput to pass a boolean value to a function or as an output to display a\nboolean value. 
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/checkbox", "source_page_title": "Gradio - Checkbox Docs"}, {"text": "**As input component** : Passes the status of the checkbox as a `bool`.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: bool | None\n )\n \t...\n\n \n\n**As output component** : Expects a `bool` value that is set as the status\nof the checkbox\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> bool | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/checkbox", "source_page_title": "Gradio - Checkbox Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: bool | Callable\n\ndefault `= False`\n\nif True, checked by default. If a function is provided, the function will be\ncalled each time the app loads to set the initial value of this component.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this checkbox, displayed to the right of the checkbox if\n`show_label` is `True`.\n\n\n \n \n info: str | I18nData | None\n\ndefault `= None`\n\nadditional component description, appears below the label in smaller font.\nSupports markdown / HTML syntax.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). 
`value` is recalculated any time the inputs change.

show_label: bool | None

default `= None`

if True, will display label.

container: bool

default `= True`

If True, will place the component in a container - providing some extra padding around the border.

scale: int | None

default `= None`

relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.

min_width: int

default `= 160`

minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.

interactive: bool | None

default `= None`

if True, this checkbox can be checked; if False, checking will be disabled. If not provided, this is inferred based on whether the component is used as an input or output.

visible: bool | Literal['hidden']

default `= True`

If False, component will be hidden. If "hidden", component will be visually hidden and not take up space in the layout but still exist in the DOM

elem_id: str | None

default `= None`

An optional string that is assigned as the id of this component in the HTML DOM.
Can be used for targeting CSS styles.

elem_classes: list[str] | str | None

default `= None`

An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.

render: bool

default `= True`

If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.

key: int | str | tuple[int | str, ...] | None

default `= None`

in a gr.render, Components with the same key across re-renders are treated as the same component, not a new component. Properties set in 'preserved_by_key' are not reset across a re-render.

preserved_by_key: list[str] | str | None

default `= "value"`

A list of parameters from this component's constructor. Inside a gr.render() function, if a component is re-rendered with the same key, these (and only these) parameters will be preserved in the UI (if they have been changed by the user or an event listener) instead of re-rendered based on the values provided to the constructor.

buttons: list[Button] | None

default `= None`

A list of gr.Button() instances to show in the top right corner of the component.
Custom buttons will appear in the toolbar with their configured icon and/or label, and clicking them will trigger any .click() events registered on the button.

Class| Interface String Shortcut| Initialization 
---|---|--- 
`gradio.Checkbox`| "checkbox"| Uses default values 

sentence_builder, hello_world_3

Description

Event listeners allow you to respond to user interactions with the UI components you've defined in a Gradio Blocks app. When a user interacts with an element, such as changing a slider value or uploading an image, a function is called.

Supported Event Listeners

The Checkbox component supports the following event listeners. Each event listener takes the same parameters, which are listed in the Event Parameters table below.

Listener| Description 
---|--- 
`Checkbox.change(fn, ···)`| Triggered when the value of the Checkbox changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. 
`Checkbox.input(fn, ···)`| This listener is triggered when the user changes the value of the Checkbox. 
`Checkbox.select(fn, ···)`| Event listener for when the user selects or deselects the Checkbox. Uses event data gradio.SelectData to carry `value` referring to the label of the Checkbox, and `selected` to refer to the state of the Checkbox.
See EventData documentation on how to use this event data.

Event Parameters

Parameters ▼

fn: Callable | None | Literal['decorator']

default `= "decorator"`

the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.

inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.

outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.

api_name: str | None

default `= None`

defines how the endpoint appears in the API docs. Can be a string or None. If set to a string, the endpoint will be exposed in the API docs with the given name. If None (default), the name of the function will be used as the API endpoint.

api_description: str | None | Literal[False]

default `= None`

Description of the API endpoint. Can be a string, None, or False. If set to a string, the endpoint will be exposed in the API docs with the given description. If None, the function's docstring will be used as the API endpoint description.
If False, then no description will be displayed in the API docs.

scroll_to_output: bool

default `= False`

If True, will scroll to output component on completion

show_progress: Literal['full', 'minimal', 'hidden']

default `= "full"`

how to show the progress animation while event is running: "full" shows a spinner which covers the output component area as well as a runtime display in the upper right corner, "minimal" only shows the runtime display, "hidden" shows no progress animation at all

show_progress_on: Component | list[Component] | None

default `= None`

Component or list of components to show the progress animation on. If None, will show the progress animation on all of the output components.

queue: bool

default `= True`

If True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.

batch: bool

default `= False`

If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`).
The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.

max_batch_size: int

default `= 4`

Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)

preprocess: bool

default `= True`

If False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).

postprocess: bool

default `= True`

If False, will not run postprocessing of component data before returning 'fn' output to the browser.

cancels: dict[str, Any] | list[dict[str, Any]] | None

default `= None`

A list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.

trigger_mode: Literal['once', 'multiple', 'always_last'] | None

default `= None`

If "once" (default for all events except `.change()`), no new submissions are allowed while an event is pending.
If set to "multiple", unlimited submissions are allowed while pending, and "always_last" (default for `.change()` and `.key_up()` events) allows a second submission after the pending event is complete.

js: str | Literal[True] | None

default `= None`

Optional frontend js method to run before running 'fn'. Input arguments for the js method are the values of 'inputs' and 'outputs'; the return value should be a list of values for the output components.

concurrency_limit: int | None | Literal['default']

default `= "default"`

If set, this is the maximum number of this event that can be running simultaneously. Can be set to None to mean no concurrency_limit (any number of this event can be running simultaneously). Set to "default" to use the default concurrency limit (defined by the `default_concurrency_limit` parameter in `Blocks.queue()`, which itself is 1 by default).

concurrency_id: str | None

default `= None`

If set, this is the id of the concurrency group. Events with the same concurrency_id will be limited by the lowest set concurrency_limit.

api_visibility: Literal['public', 'private', 'undocumented']

default `= "public"`

controls the visibility and accessibility of this endpoint. Can be "public" (shown in API docs and callable by clients), "private" (hidden from API docs and not callable by clients), or "undocumented" (hidden from API docs but callable by clients and via gr.load).
If fn is None, api_visibility will automatically be set to "private".

time_limit: int | None

default `= None`

stream_every: float

default `= 0.5`

key: int | str | tuple[int | str, ...] | None

default `= None`

A unique key for this event listener to be used in @gr.render(). If set, this value identifies an event as identical across re-renders when the key is identical.

validator: Callable | None

default `= None`

Optional validation function to run before the main function. If provided, this function will be executed first with queue=False, and only if it completes successfully will the main function be called. The validator receives the same inputs as the main function and should return a `gr.validate()` for each input value.

Creates a set of (string or numeric type) radio buttons of which only one can be selected.
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "**As input component** : Passes the value of the selected radio button as a `str | int | float`, or its index as an `int` into the function, depending on `type`.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: str | int | float | None\n )\n \t...\n\n \n\n**As output component** : Expects a `str | int | float` corresponding to the value of the radio button to be selected\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | int | float | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n choices: list[str | int | float | tuple[str, str | int | float]] | None\n\ndefault `= None`\n\nA list of string or numeric options to select from. An option can also be a\ntuple of the form (name, value), where name is the displayed name of the radio\nbutton and value is the value to be passed to the function, or returned by the\nfunction.\n\n\n \n \n value: str | int | float | Callable | None\n\ndefault `= None`\n\nThe option selected by default. If None, no option is selected by default. If\na function is provided, the function will be called each time the app loads to\nset the initial value of this component.\n\n\n \n \n type: Literal['value', 'index']\n\ndefault `= \"value\"`\n\nType of value to be returned by component. \"value\" returns the string of the\nchoice selected, \"index\" returns the index of the choice selected.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component, displayed above the component if `show_label` is\n`True` and is also used as the header if there are a table of examples for\nthis component. 
If None and used in a `gr.Interface`, the label will be the name of the parameter this component corresponds to.

info: str | I18nData | None

default `= None`

additional component description, appears below the label in smaller font. Supports markdown / HTML syntax.

every: Timer | float | None

default `= None`

Continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.

inputs: Component | list[Component] | set[Component] | None

default `= None`

Components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise). `value` is recalculated any time the inputs change.

show_label: bool | None

default `= None`

if True, will display label.

container: bool

default `= True`

If True, will place the component in a container - providing some extra padding around the border.

scale: int | None

default `= None`

Relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.

min_width: int

default `= 160`

Minimum pixel width, will wrap if not sufficient screen space to satisfy this value.
If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.

interactive: bool | None

default `= None`

If True, choices in this radio group will be selectable; if False, selection will be disabled. If not provided, this is inferred based on whether the component is used as an input or output.

visible: bool | Literal['hidden']

default `= True`

If False, component will be hidden. If "hidden", component will be visually hidden and not take up space in the layout but still exist in the DOM

elem_id: str | None

default `= None`

An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.

elem_classes: list[str] | str | None

default `= None`

An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.

render: bool

default `= True`

If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.

key: int | str | tuple[int | str, ...] | None

default `= None`

in a gr.render, Components with the same key across re-renders are treated as the same component, not a new component. Properties set in 'preserved_by_key' are not reset across a re-render.

preserved_by_key: list[str] | str | None

default `= "value"`

A list of parameters from this component's constructor.
Inside a gr.render() function, if a component is re-rendered with the same key, these (and only these) parameters will be preserved in the UI (if they have been changed by the user or an event listener) instead of re-rendered based on the values provided to the constructor.

rtl: bool

default `= False`

If True, the radio buttons will be displayed in right-to-left order. Default is False.

buttons: list[Button] | None

default `= None`

A list of gr.Button() instances to show in the top right corner of the component. Custom buttons will appear in the toolbar with their configured icon and/or label, and clicking them will trigger any .click() events registered on the button.

Class| Interface String Shortcut| Initialization 
---|---|--- 
`gradio.Radio`| "radio"| Uses default values 

sentence_builder, blocks_essay

Description

Event listeners allow you to respond to user interactions with the UI components you've defined in a Gradio Blocks app. When a user interacts with an element, such as changing a slider value or uploading an image, a function is called.

Supported Event Listeners

The Radio component supports the following event listeners. Each event listener takes the same parameters, which are listed in the Event Parameters table below.

Listener| Description 
---|--- 
`Radio.select(fn, ···)`| Event listener for when the user selects or deselects the Radio.
Uses event data gradio.SelectData to carry `value` referring to the label of the Radio, and `selected` to refer to the state of the Radio. See EventData documentation on how to use this event data. 
`Radio.change(fn, ···)`| Triggered when the value of the Radio changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. 
`Radio.input(fn, ···)`| This listener is triggered when the user changes the value of the Radio. 

Event Parameters

Parameters ▼

fn: Callable | None | Literal['decorator']

default `= "decorator"`

the function to call when this event is triggered. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.

inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.

outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None

default `= None`

List of gradio.components to use as outputs.
If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. 
If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "ed.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. 
Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str |", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "submissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. 
Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis functi", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "ntical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/radio", "source_page_title": "Gradio - Radio Docs"}, {"text": "A Gradio Interface includes a \u2018Flag\u2019 button that appears underneath the\noutput. By default, clicking on the Flag button sends the input and output\ndata back to the machine where the gradio demo is running, and saves it to a\nCSV log file. But this default behavior can be changed. To set what happens\nwhen the Flag button is clicked, you pass an instance of a subclass of\n_FlaggingCallback_ to the _flagging_callback_ parameter in the _Interface_\nconstructor. 
You can use one of the _FlaggingCallback_ subclasses that are\nlisted below, or you can create your own, which lets you do whatever you want\nwith the data that is being flagged.\n\nSimpleCSVLogger\n\n \n \n gradio.SimpleCSVLogger(\u00b7\u00b7\u00b7)\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/flagging", "source_page_title": "Gradio - Flagging Docs"}, {"text": "A simplified implementation of the FlaggingCallback abstract class provided\nfor illustrative purposes. Each flagged sample (both the input and output\ndata) is logged to a CSV file on the machine running the gradio app.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/flagging", "source_page_title": "Gradio - Flagging Docs"}, {"text": "import gradio as gr\n def image_classifier(inp):\n return {'cat': 0.3, 'dog': 0.7}\n demo = gr.Interface(fn=image_classifier, inputs=\"image\", outputs=\"label\",\n flagging_callback=SimpleCSVLogger())\n\nCSVLogger\n\n \n \n gradio.CSVLogger(\u00b7\u00b7\u00b7)\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/flagging", "source_page_title": "Gradio - Flagging Docs"}, {"text": "The default implementation of the FlaggingCallback abstract class in\ngradio>=5.0. Each flagged sample (both the input and output data) is logged to\na CSV file with headers on the machine running the gradio app. Unlike\nClassicCSVLogger, this implementation is concurrent-safe and it creates a new\ndataset file every time the headers of the CSV (derived from the labels of the\ncomponents) change. It also only creates columns for \"username\" and \"flag\" if\nthe flag_option and username are provided, respectively. 
\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/flagging", "source_page_title": "Gradio - Flagging Docs"}, {"text": "import gradio as gr\n def image_classifier(inp):\n return {'cat': 0.3, 'dog': 0.7}\n demo = gr.Interface(fn=image_classifier, inputs=\"image\", outputs=\"label\",\n flagging_callback=CSVLogger())\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/flagging", "source_page_title": "Gradio - Flagging Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n simplify_file_data: bool\n\ndefault `= True`\n\nIf True, the file data will be simplified before being written to the CSV\nfile. If CSVLogger is being used to cache examples, this is set to False to\npreserve the original FileData class\n\n\n \n \n verbose: bool\n\ndefault `= True`\n\nIf True, prints messages to the console about the dataset file creation\n\n\n \n \n dataset_file_name: str | None\n\ndefault `= None`\n\nThe name of the dataset file to be created (should end in \".csv\"). If None,\nthe dataset file will be named \"dataset1.csv\" or the next available number.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/flagging", "source_page_title": "Gradio - Flagging Docs"}, {"text": "The render decorator allows Gradio Blocks apps to have dynamic layouts, so\nthat the components and event listeners in your app can change depending on\ncustom logic. Attaching a @gr.render decorator to a function will cause the\nfunction to be re-run whenever the inputs are changed (or specified triggers\nare activated). The function contains the components and event listeners that\nwill update based on the inputs. \nThe basic usage of @gr.render is as follows: \n1\\. Create a function and attach the @gr.render decorator to it. \n2\\. Add the input components to the `inputs=` argument of @gr.render, and\ncreate a corresponding argument in your function for each component. \n3\\. 
Add all components inside the function that you want to update based on\nthe inputs. Any event listeners that use these components should also be\ninside this function. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/render", "source_page_title": "Gradio - Render Docs"}, {"text": "import gradio as gr\n \n with gr.Blocks() as demo:\n input_text = gr.Textbox()\n \n @gr.render(inputs=input_text)\n def show_split(text):\n if len(text) == 0:\n gr.Markdown(\"No Input Provided\")\n else:\n for letter in text:\n with gr.Row():\n text = gr.Textbox(letter)\n btn = gr.Button(\"Clear\")\n btn.click(lambda: gr.Textbox(value=\"\"), None, text)\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/render", "source_page_title": "Gradio - Render Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n inputs: list[Component] | Component | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n triggers: list[EventListenerCallable] | EventListenerCallable | None\n\ndefault `= None`\n\nList of triggers to listen to, e.g. [btn.click, number.change]. If None, will\nlisten to changes to any inputs.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= \"always_last\"`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. 
If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= None`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/render", "source_page_title": "Gradio - Render Docs"}, {"text": "Creates a chatbot that displays user-submitted messages and responses.\nSupports a subset of Markdown including bold, italics, code, tables. Also\nsupports audio/video/image files, which are displayed in the Chatbot, and\nother kinds of files which are displayed as links. This component is usually\nused as an output component. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "The data format accepted by the Chatbot is dictated by the `type` parameter.\nThis parameter can take two values, `'tuples'` and `'messages'`. 
The\n`'tuples'` type is deprecated and will be removed in a future version of\nGradio.\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "If the `type` is `'messages'`, then the data sent to/from the chatbot will be\na list of dictionaries with `role` and `content` keys. This format is\ncompliant with the format expected by most LLM APIs (HuggingChat, OpenAI,\nClaude). The `role` key is either `'user'` or `'assistant'` and the `content`\nkey can be one of the following should be a string (rendered as markdown/html)\nor a Gradio component (useful for displaying files).\n\nAs an example:\n\n \n \n import gradio as gr\n \n history = [\n {\"role\": \"assistant\", \"content\": \"I am happy to provide you that report and plot.\"},\n {\"role\": \"assistant\", \"content\": gr.Plot(value=make_plot_from_file('quaterly_sales.txt'))}\n ]\n \n with gr.Blocks() as demo:\n gr.Chatbot(history)\n \n demo.launch()\n\nFor convenience, you can use the `ChatMessage` dataclass so that your text\neditor can give you autocomplete hints and typechecks.\n\n \n \n import gradio as gr\n \n history = [\n gr.ChatMessage(role=\"assistant\", content=\"How can I help you?\"),\n gr.ChatMessage(role=\"user\", content=\"Can you make me a plot of quarterly sales?\"),\n gr.ChatMessage(role=\"assistant\", content=\"I am happy to provide you that report and plot.\")\n ]\n \n with gr.Blocks() as demo:\n gr.Chatbot(history)\n \n demo.launch()\n\n", "heading1": "Message format", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: list[MessageDict | Message] | Callable | None\n\ndefault `= None`\n\nDefault list of messages to show in chatbot, where each message is of the\nformat {\"role\": \"user\", \"content\": \"Help me.\"}. Role can be one of \"user\",\n\"assistant\", or \"system\". 
Content should be either text, or media passed as a\nGradio component, e.g. {\"content\": gr.Image(\"lion.jpg\")}. If a function is\nprovided, the function will be called each time the app loads to set the\ninitial value of this component.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component. Appears above the component and is also used as\nthe header if there are a table of examples for this component. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nIf True, will place the component in a container - providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "ide\nas B. Should be an integer. 
scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n autoscroll: bool\n\ndefault `= True`\n\nIf True, will automatically scroll to the bottom of the textbox when the value\nchanges, unless the user scrolls up. If False, will not scroll to the bottom\nof the textbox when the value changes.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not render be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. 
Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they hav", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "parameters from this component's constructor. Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n height: int | str | None\n\ndefault `= 400`\n\nThe height of the component, specified in pixels if a number is passed, or in\nCSS units if a string is passed. If messages exceed the height, the component\nwill scroll.\n\n\n \n \n resizable: bool\n\ndefault `= False`\n\nIf True, the user of the Gradio app can resize the chatbot by dragging the\nbottom right corner.\n\n\n \n \n max_height: int | str | None\n\ndefault `= None`\n\nThe maximum height of the component, specified in pixels if a number is\npassed, or in CSS units if a string is passed. If messages exceed the height,\nthe component will scroll. If messages are shorter than the height, the\ncomponent will shrink to fit the content. Will not have any effect if `height`\nis set and is smaller than `max_height`.\n\n\n \n \n min_height: int | str | None\n\ndefault `= None`\n\nThe minimum height of the component, specified in pixels if a number is\npassed, or in CSS units if a string is passed. If messages exceed the height,\nthe component will expand to fit the content. Will not have any effect if\n`height` is set and is larger than `min_height`.\n\n\n \n \n editable: Literal['user', 'all'] | None\n\ndefault `= None`\n\nAllows user to edit messages in the chatbot. If set to \"user\", allows editing\nof user messages. 
If set to \"all\", allows editing of assistant messages as\nwell.\n\n\n \n \n latex_delimiters: list[dict[str, str | bool]] | None\n\ndefault `= None`\n\nA list of dicts of the form {\"left\": open delimiter (str), \"right\": close\ndelimiter (str), \"display\": whether to display in newline (bool)} that will be\nused to render LaTeX expressions. If not provided, `latex_delimiters` is set\nto `[{ \"", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "pen delimiter (str), \"right\": close\ndelimiter (str), \"display\": whether to display in newline (bool)} that will be\nused to render LaTeX expressions. If not provided, `latex_delimiters` is set\nto `[{ \"left\": \"$$\", \"right\": \"$$\", \"display\": True }]`, so only expressions\nenclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass\nin an empty list to disable LaTeX rendering. For more information, see the\n[KaTeX documentation](https://katex.org/docs/autorender.html).\n\n\n \n \n rtl: bool\n\ndefault `= False`\n\nIf True, sets the direction of the rendered text to right-to-left. Default is\nFalse, which renders text left-to-right.\n\n\n \n \n buttons: list[Literal['share', 'copy', 'copy_all'] | Button] | None\n\ndefault `= None`\n\nA list of buttons to show in the top right corner of the component. Valid\noptions are \"share\", \"copy\", \"copy_all\", or a gr.Button() instance. The\n\"share\" button allows the user to share outputs to Hugging Face Spaces\nDiscussions. The \"copy\" button makes a copy button appear next to each\nindividual chatbot message. The \"copy_all\" button appears at the component\nlevel and allows the user to copy all chatbot messages. 
Custom gr.Button()\ninstances will appear in the toolbar with their configured icon and/or label,\nand clicking them will trigger any .click() events registered on the button.\nBy default, \"share\" and \"copy_all\" buttons are shown.\n\n\n \n \n watermark: str | None\n\ndefault `= None`\n\nIf provided, this text will be appended to the end of messages copied from the\nchatbot, after a blank line. Useful for indicating that the message is\ngenerated by an AI model.\n\n\n \n \n avatar_images: tuple[str | Path | None, str | Path | None] | None\n\ndefault `= None`\n\nTuple of two avatar image paths or URLs for user and bot (in that order). Pass\nNone for either the user or bot image to skip. Must be within the working\ndirectory of the Gradio app or an external URL.\n\n\n \n \n sanitize_html: bool\n\ndefault `= True`\n\nIf False, w", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "rder). Pass\nNone for either the user or bot image to skip. Must be within the working\ndirectory of the Gradio app or an external URL.\n\n\n \n \n sanitize_html: bool\n\ndefault `= True`\n\nIf False, will disable HTML sanitization for chatbot messages. This is not\nrecommended, as it can lead to security vulnerabilities.\n\n\n \n \n render_markdown: bool\n\ndefault `= True`\n\nIf False, will disable Markdown rendering for chatbot messages.\n\n\n \n \n feedback_options: list[str] | tuple[str, ...] | None\n\ndefault `= ('Like', 'Dislike')`\n\nA list of strings representing the feedback options that will be displayed to\nthe user. The exact case-sensitive strings \"Like\" and \"Dislike\" will render as\nthumb icons, but any other choices will appear under a separate flag icon.\n\n\n \n \n feedback_value: list[str | None] | None\n\ndefault `= None`\n\nA list of strings representing the feedback state for entire chat. Only works\nwhen type=\"messages\". 
Each entry in the list corresponds to that assistant\nmessage, in order, and the value is the feedback given (e.g. \"Like\",\n\"Dislike\", or any custom feedback option) or None if no feedback was given for\nthat message.\n\n\n \n \n line_breaks: bool\n\ndefault `= True`\n\nIf True (default), will enable Github-flavored Markdown line breaks in chatbot\nmessages. If False, single new lines will be ignored. Only applies if\n`render_markdown` is True.\n\n\n \n \n layout: Literal['panel', 'bubble'] | None\n\ndefault `= None`\n\nIf \"panel\", will display the chatbot in a llm style layout. If \"bubble\", will\ndisplay the chatbot with message bubbles, with the user and bot messages on\nalterating sides. Will default to \"bubble\".\n\n\n \n \n placeholder: str | None\n\ndefault `= None`\n\na placeholder message to display in the chatbot when it is empty. Centered\nvertically and horizontally in the Chatbot. Supports Markdown and HTML. If\nNone, no placeholder is displayed.\n\n\n \n \n examples: list[ExampleMessage] | None\n\ndefault `= None`\n\nA list of ex", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "ered\nvertically and horizontally in the Chatbot. Supports Markdown and HTML. If\nNone, no placeholder is displayed.\n\n\n \n \n examples: list[ExampleMessage] | None\n\ndefault `= None`\n\nA list of example messages to display in the chatbot before any user/assistant\nmessages are shown. Each example should be a dictionary with an optional\n\"text\" key representing the message that should be populated in the Chatbot\nwhen clicked, an optional \"files\" key, whose value should be a list of files\nto populate in the Chatbot, an optional \"icon\" key, whose value should be a\nfilepath or URL to an image to display in the example box, and an optional\n\"display_text\" key, whose value should be the text to display in the example\nbox. 
If \"display_text\" is not provided, the value of \"text\" will be displayed.\n\n\n \n \n allow_file_downloads: \n\ndefault `= True`\n\nIf True, will show a download button for chatbot messages that contain media.\nDefaults to True.\n\n\n \n \n group_consecutive_messages: bool\n\ndefault `= True`\n\nIf True, will display consecutive messages from the same role in the same\nbubble. If False, will display each message in a separate bubble. Defaults to\nTrue.\n\n\n \n \n allow_tags: list[str] | bool\n\ndefault `= True`\n\nIf a list of tags is provided, these tags will be preserved in the output\nchatbot messages, even if `sanitize_html` is `True`. For example, if this list\nis [\"thinking\"], the tags `` and `` will not be removed.\nIf True, all custom tags (non-standard HTML tags) will be preserved. If False,\nno tags will be preserved. Default value is 'True'.\n\n\n \n \n reasoning_tags: list[tuple[str, str]] | None\n\ndefault `= None`\n\nIf provided, a list of tuples of (open_tag, close_tag) strings. Any text\nbetween these tags will be extracted and displayed in a separate collapsible\nmessage with metadata={\"title\": \"Reasoning\"}. For example, [(\"\",\n\"\")] will extract content between and \",\n\"\")] will extract content between and tags.\nEach thinking block will be displayed as a separate collapsible message before\nthe main response. If None (default), no automatic extraction is performed.\n\n\n \n \n like_user_message: bool\n\ndefault `= False`\n\nIf True, will show like/dislike buttons for user messages as well. 
Defaults to\nFalse.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.Chatbot`| \"chatbot\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "**Displaying Thoughts/Tool Usage**\n\nWhen `type` is `messages`, you can provide additional metadata regarding any\ntools used to generate the response. This is useful for displaying the thought\nprocess of LLM agents. For example,\n\n \n \n def generate_response(history):\n history.append(\n ChatMessage(role=\"assistant\",\n content=\"The weather API says it is 20 degrees Celcius in New York.\",\n metadata={\"title\": \"\ud83d\udee0\ufe0f Used tool Weather API\"})\n )\n return history\n\nWould be displayed as following:\n\n![Gradio chatbot tool display](https://github.com/user-\nattachments/assets/c1514bc9-bc29-4af1-8c3f-cd4a7c2b217f)\n\nYou can also specify metadata with a plain python dictionary,\n\n \n \n def generate_response(history):\n history.append(\n dict(role=\"assistant\",\n content=\"The weather API says it is 20 degrees Celcius in New York.\",\n metadata={\"title\": \"\ud83d\udee0\ufe0f Used tool Weather API\"})\n )\n return history\n\n**Using Gradio Components Inside`gr.Chatbot`**\n\nThe `Chatbot` component supports using many of the core Gradio components\n(such as `gr.Image`, `gr.Plot`, `gr.Audio`, and `gr.HTML`) inside of the\nchatbot. Simply include one of these components in your list of tuples. 
Here\u2019s\nan example:\n\n \n \n import gradio as gr\n \n def load():\n return [\n (\"Here's an audio\", gr.Audio(\"https://github.com/gradio-app/gradio/raw/main/gradio/media_assets/audio/audio_sample.wav\")),\n (\"Here's an video\", gr.Video(\"https://github.com/gradio-app/gradio/raw/main/gradio/media_assets/videos/world.mp4\"))\n ]\n \n with gr.Blocks() as demo:\n chatbot = gr.Chatbot()\n button = gr.Button(\"Load audio and video\")\n button.click(load, None, chatbot)\n \n demo.launch()\n\n", "heading1": "Examples", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "chatbot_simplechatbot_streamingchatbot_with_toolschatbot_core_components\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Chatbot component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Chatbot.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the Chatbot changes\neither because of user input (e.g. a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n`Chatbot.select(fn, \u00b7\u00b7\u00b7)`| Event listener for when the user selects or\ndeselects the Chatbot. Uses event data gradio.SelectData to carry `value`\nreferring to the label of the Chatbot, and `selected` to refer to state of the\nChatbot. 
See EventData documentation on how to use this event data \n`Chatbot.like(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nlikes/dislikes from within the Chatbot. This event has EventData of type\ngradio.LikeData that carries information, accessible through LikeData.index\nand LikeData.value. See EventData documentation on how to use this event data. \n`Chatbot.retry(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clicks the\nretry button in the chatbot message. \n`Chatbot.undo(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clicks the\nundo button in the chatbot message. \n`Chatbot.example_select(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nclicks on an example from within the Chatbot. This event has SelectData of\ntype gradio.SelectData that carries information, accessible through\nSelectData.index and SelectData.value. See SelectData documentation on how to\nuse this event data. \n`Chatbot.option_select(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "s information, accessible through\nSelectData.index and SelectData.value. See SelectData documentation on how to\nuse this event data. \n`Chatbot.option_select(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user\nclicks on an option from within the Chatbot. This event has SelectData of type\ngradio.SelectData that carries information, accessible through\nSelectData.index and SelectData.value. See SelectData documentation on how to\nuse this event data. \n`Chatbot.clear(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user clears the\nChatbot using the clear button for the component. \n`Chatbot.copy(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user copies\ncontent from the Chatbot. 
Uses event data gradio.CopyData to carry information\nabout the copied content. See EventData documentation on how to use this event\ndata \n`Chatbot.edit(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user edits the\nChatbot (e.g. image) using the built-in editor. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a strin", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "e as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. 
Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*re", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. 
Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "fault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). 
If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each inpu", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": " with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "", "heading1": "Helper Classes", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "gradio.ChatMessage(\u00b7\u00b7\u00b7)\n\nDescription\n\nA dataclass that represents a message in the Chatbot component (with\ntype=\"messages\"). The only required field is `content`. The value of\n`gr.Chatbot` is a list of these dataclasses.\n\nParameters \u25bc\n\n\n \n \n content: MessageContent | list[MessageContent]\n\nThe content of the message. Can be a string, a file dict, a gradio component,\nor a list of these types to group these messages together.\n\n\n \n \n role: Literal['user', 'assistant', 'system']\n\ndefault `= \"assistant\"`\n\nThe role of the message, which determines the alignment of the message in the\nchatbot. Can be \"user\", \"assistant\", or \"system\". 
Defaults to \"assistant\".\n\n\n \n \n metadata: MetadataDict\n\ndefault `= _HAS_DEFAULT_FACTORY_CLASS()`\n\nThe metadata of the message, which is used to display intermediate thoughts /\ntool usage. Should be a dictionary with the following keys: \"title\" (required\nto display the thought), and optionally: \"id\" and \"parent_id\" (to nest\nthoughts), \"duration\" (to display the duration of the thought), \"status\" (to\ndisplay the status of the thought).\n\n\n \n \n options: list[OptionDict]\n\ndefault `= _HAS_DEFAULT_FACTORY_CLASS()`\n\nThe options of the message. A list of Option objects, which are dictionaries\nwith the following keys: \"label\" (the text to display in the option), and\noptionally \"value\" (the value to return when the option is selected if\ndifferent from the label).\n\n", "heading1": "ChatMessage", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "A typed dictionary to represent metadata for a message in the Chatbot\ncomponent. An instance of this dictionary is used for the `metadata` field in\na ChatMessage when the chat message should be displayed as a thought.\n\nKeys \u25bc\n\n\n \n \n title: str\n\nThe title of the 'thought' message. Only required field.\n\n\n \n \n id: int | str\n\nThe ID of the message. Only used for nested thoughts. Nested thoughts can be\nnested by setting the parent_id to the id of the parent thought.\n\n\n \n \n parent_id: int | str\n\nThe ID of the parent message. Only used for nested thoughts.\n\n\n \n \n log: str\n\nA string message to display next to the thought title in a subdued font.\n\n\n \n \n duration: float\n\nThe duration of the message in seconds. Appears next to the thought title in a\nsubdued font inside a parentheses.\n\n\n \n \n status: Literal['pending', 'done']\n\nif set to `'pending'`, a spinner appears next to the thought title and the\naccordion is initialized open. 
If `status` is `'done'`, the thought accordion\nis initialized closed. If `status` is not provided, the thought accordion is\ninitialized open and no spinner is displayed.\n\n", "heading1": "MetadataDict", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "A typed dictionary to represent an option in a ChatMessage. A list of these\ndictionaries is used for the `options` field in a ChatMessage.\n\nKeys \u25bc\n\n\n \n \n value: str\n\nThe value to return when the option is selected.\n\n\n \n \n label: str\n\nThe text to display in the option, if different from the value.\n\n", "heading1": "OptionDict", "source_page_url": "https://gradio.app/docs/gradio/chatbot", "source_page_title": "Gradio - Chatbot Docs"}, {"text": "Special component that ticks at regular intervals when active. It is not\nvisible, and only used to trigger events at a regular interval through the\n`tick` event listener.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "**As input component** : The interval of the timer as a float.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: float | None\n )\n \t...\n\n \n\n**As output component** : The interval of the timer as a float or None.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> float | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: float\n\ndefault `= 1`\n\nInterval in seconds between each tick.\n\n\n \n \n active: bool\n\ndefault `= True`\n\nWhether the timer is active.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not be rendered in the Blocks context. 
Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.Timer`| \"timer\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Timer component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Timer.tick(fn, \u00b7\u00b7\u00b7)`| This listener is triggered at regular intervals defined\nby the Timer. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. 
If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "oint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"hidden\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. 
If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, w", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "atch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. 
Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurren", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "usly. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. 
Can be "public"\n(shown in API docs and callable by clients), "private" (hidden from API docs\nand not callable by clients), or "undocumented" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to "private".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/timer", "source_page_title": "Gradio - Timer Docs"}, {"text": "Used to display arbitrary JSON output prettily. As this component does not\naccept user input, it is rarely used as an input component. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "**As input component** : Passes the JSON value as a `dict` or `list`\ndepending on the value.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: dict | list | None\n )\n \t...\n\n \n\n**As output component** : Expects a valid JSON `str` -- or a `list` or\n`dict` that can be serialized to a JSON string. 
The `list` or `dict` value can\ncontain numpy arrays.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> dict | list | str | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: str | dict | list | Callable | None\n\ndefault `= None`\n\nDefault value as a valid JSON `str` -- or a `list` or `dict` that can be\nserialized to a JSON string. If a function is provided, the function will be\ncalled each time the app loads to set the initial value of this component.\n\n\n \n \n label: str | I18nData | None\n\ndefault `= None`\n\nthe label for this component. Appears above the component and is also used as\nthe header if there are a table of examples for this component. If None and\nused in a `gr.Interface`, the label will be the name of the parameter this\ncomponent is assigned to.\n\n\n \n \n every: Timer | float | None\n\ndefault `= None`\n\nContinously calls `value` to recalculate it if `value` is a function (has no\neffect otherwise). Can provide a Timer whose tick resets `value`, or a float\nthat provides the regular interval for the reset Timer.\n\n\n \n \n inputs: Component | list[Component] | set[Component] | None\n\ndefault `= None`\n\nComponents that are used as inputs to calculate `value` if `value` is a\nfunction (has no effect otherwise). `value` is recalculated any time the\ninputs change.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display label.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nIf True, will place the component in a container - providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. 
scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "rap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If "hidden", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nin a gr.render, Components with the same key across re-renders are treated as\nthe same component, not a new component. Properties set in 'preserved_by_key'\nare not reset across a re-render.\n\n\n \n \n preserved_by_key: list[str] | str | None\n\ndefault `= \"value\"`\n\nA list of parameters from this component's constructor. 
Inside a gr.render()\nfunction, if a component is re-rendered with the same key, these (and only\nthese) parameters will be preserved in the UI (if they have been changed by\nthe user or an event listener) instead of re-rendered based on the values\nprovided during constructor.\n\n\n \n \n open: bool\n\ndefault `= False`\n\nIf True, all JSON nodes will be expanded when rendered. By default, node\nlevels deeper than 3 are collapsed.\n\n\n \n \n show_indices: bool\n\ndefault `= False`\n\nWhether to show numerical indices when displaying the elements of a list\nwithin the JSON object.\n\n\n \n ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "\nlevels deeper than 3 are collapsed.\n\n\n \n \n show_indices: bool\n\ndefault `= False`\n\nWhether to show numerical indices when displaying the elements of a list\nwithin the JSON object.\n\n\n \n \n height: int | str | None\n\ndefault `= None`\n\nHeight of the JSON component in pixels if a number is passed, or in CSS units\nif a string is passed. Overflow will be scrollable. If None, the height will\nbe automatically adjusted to fit the content.\n\n\n \n \n max_height: int | str | None\n\ndefault `= 500`\n\n\n \n \n min_height: int | str | None\n\ndefault `= None`\n\n\n \n \n buttons: list[Literal['copy'] | Button] | None\n\ndefault `= None`\n\nA list of buttons to show for the component. Valid options are \"copy\" or a\ngr.Button() instance. The \"copy\" button allows users to copy the JSON to the\nclipboard. Custom gr.Button() instances will appear in the toolbar with their\nconfigured icon and/or label, and clicking them will trigger any .click()\nevents registered on the button. 
By default, the copy button is shown.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.JSON`| \"json\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "zip_to_jsonblocks_xray\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe JSON component supports the following event listeners. Each event listener\ntakes the same parameters, which are listed in the Event Parameters table\nbelow.\n\nListener| Description \n---|--- \n`JSON.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the JSON changes either\nbecause of user input (e.g. a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. 
Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the A", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "ppears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n ", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "batch_size`). 
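A handler satisfying the `batch=True` contract above might look like this sketch (the function name is illustrative):

```python
# Hedged sketch of a batch=True handler: it receives a list of values for
# its single input component (up to max_batch_size long) and must return a
# tuple of lists -- one list per output component, even with a single output.
def batched_double(numbers):
    return ([n * 2 for n in numbers],)
```

The returned tuple has one entry per output component, and each inner list must be the same length as the input lists.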
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. 
Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimul", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "t of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). 
If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/json", "source_page_title": "Gradio - Json Docs"}, {"text": "This function allows you to pass custom warning messages to the user. You\ncan do so simply by writing `gr.Warning('message here')` in your function, and\nwhen that line is executed the custom message will appear in a modal on the\ndemo. The modal is yellow by default and has the heading: \"Warning.\" Queue\nmust be enabled for this behavior; otherwise, the warning will be printed to\nthe console using the `warnings` library.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/warning", "source_page_title": "Gradio - Warning Docs"}, {"text": "import gradio as gr\n def hello_world():\n gr.Warning('This is a warning message.')\n return \"hello world\"\n with gr.Blocks() as demo:\n md = gr.Markdown()\n demo.load(hello_world, inputs=None, outputs=[md])\n demo.queue().launch()\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/warning", "source_page_title": "Gradio - Warning Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n message: str\n\ndefault `= \"Warning issued.\"`\n\nThe warning message to be displayed to the user. 
Can be HTML, which will be\nrendered in the modal.\n\n\n \n \n duration: float | None\n\ndefault `= 10`\n\nThe duration in seconds that the warning message should be displayed for. If\nNone or 0, the message will be displayed indefinitely until the user closes\nit.\n\n\n \n \n visible: bool\n\ndefault `= True`\n\nWhether the warning message should be displayed in the UI.\n\n\n \n \n title: str\n\ndefault `= \"Warning\"`\n\nThe title to be displayed to the user at the top of the modal.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/warning", "source_page_title": "Gradio - Warning Docs"}, {"text": "blocks_chained_events\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/warning", "source_page_title": "Gradio - Warning Docs"}, {"text": "The gr.DeletedFileData class is a subclass of gr.EventData that\nspecifically carries information about the `.delete()` event. When\ngr.DeletedFileData is added as a type hint to an argument of an event listener\nmethod, a gr.DeletedFileData object will automatically be passed as the value\nof that argument. The attributes of this object contain information about the\nevent that triggered the listener.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/deletedfiledata", "source_page_title": "Gradio - Deletedfiledata Docs"}, {"text": "import gradio as gr\n \n def test(delete_data: gr.DeletedFileData):\n return delete_data.file.path\n \n with gr.Blocks() as demo:\n files = gr.File(file_count=\"multiple\")\n deleted_file = gr.File()\n files.delete(test, None, deleted_file)\n \n demo.launch()\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/deletedfiledata", "source_page_title": "Gradio - Deletedfiledata Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n file: FileData\n\nThe file that was deleted, as a FileData object. 
The str path to the file can\nbe retrieved with the .path attribute.\n\n", "heading1": "Attributes", "source_page_url": "https://gradio.app/docs/gradio/deletedfiledata", "source_page_title": "Gradio - Deletedfiledata Docs"}, {"text": "file_component_events\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/deletedfiledata", "source_page_title": "Gradio - Deletedfiledata Docs"}, {"text": "Creates a Dialogue component for displaying or collecting multi-speaker\nconversations. This component can be used as input to allow users to enter\ndialogue involving multiple speakers, or as output to display diarized speech,\nsuch as the result of a transcription or speaker identification model. Each\nmessage can be associated with a specific speaker, making it suitable for use\ncases like conversations, interviews, or meetings. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "**As input component** : Returns the dialogue as a string or list of\ndictionaries.\n\nYour function should accept one of these types:\n\n \n \n def predict(\n \tvalue: str | list[dict[str, str]] | None\n )\n \t...\n\n \n\n**As output component** : Expects a string or a list of dictionaries of\ndialogue lines, where each dictionary contains 'speaker' and 'text' keys, or a\nstring.\n\nYour function should return one of these types:\n\n \n \n def predict(\u00b7\u00b7\u00b7) -> str | list[dict[str, str]] | None\n \t...\t\n \treturn value\n\n", "heading1": "Behavior", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n value: list[dict[str, str]] | Callable | None\n\ndefault `= None`\n\nValue of the dialogue. It is a list of dictionaries, each containing a\n'speaker' key and a 'text' key. 
If a function is provided, the function will\nbe called each time the app loads to set the initial value of this component.\n\n\n \n \n type: Literal['list', 'text']\n\ndefault `= \"text\"`\n\nThe type of the component, either \"list\" for a multi-speaker dialogue\nconsisting of dictionaries with 'speaker' and 'text' keys or \"text\" for a\nsingle text input. Defaults to \"text\".\n\n\n \n \n speakers: list[str] | None\n\ndefault `= None`\n\nThe different speakers allowed in the dialogue. If `None` or an empty list, no\nspeakers will be displayed. Instead, the component will be a standard textarea\nthat optionally supports `tags` autocompletion.\n\n\n \n \n formatter: Callable | None\n\ndefault `= None`\n\nA function that formats the dialogue line dictionary, e.g. {\"speaker\":\n\"Speaker 1\", \"text\": \"Hello, how are you?\"} into a string, e.g. \"Speaker 1:\nHello, how are you?\". This function is run on user input and the resulting\nstring is passed into the prediction function.\n\n\n \n \n unformatter: Callable | None\n\ndefault `= None`\n\nA function that parses a formatted dialogue string back into a dialogue line\ndictionary. Should take a single string line and return a dictionary with\n'speaker' and 'text' keys. If not provided, the default unformatter will\nattempt to parse the default formatter pattern.\n\n\n \n \n tags: list[str] | None\n\ndefault `= None`\n\nThe different tags allowed in the dialogue. Tags are displayed in an\nautocomplete menu below the input textbox when the user starts typing `:`. Use\nthe exact tag name expected by the AI model or inference function.\n\n\n \n \n separator: str\n\ndefault `= \" \"`\n\nThe separator between the different dialogue lines used to join the formatted\ndialogue lines into a single string. It should be unambiguous. 
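A matched `formatter`/`unformatter` pair as described above might look like this sketch (the `[Speaker] text` pattern is an illustrative choice, not a Gradio default):

```python
# Hedged sketch of a matched formatter/unformatter pair for gr.Dialogue.
def formatter(line):
    # {"speaker": "Speaker 1", "text": "Hello"} -> "[Speaker 1] Hello"
    return f"[{line['speaker']}] {line['text']}"

def unformatter(line):
    # "[Speaker 1] Hello" -> {"speaker": "Speaker 1", "text": "Hello"}
    speaker, _, text = line.partition("] ")
    return {"speaker": speaker.lstrip("["), "text": text}
```

These could then be passed as `gr.Dialogue(formatter=formatter, unformatter=unformatter)`; a round trip through the pair should be lossless.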
For example, a\nnewline character", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "tor: str\n\ndefault `= \" \"`\n\nThe separator between the different dialogue lines used to join the formatted\ndialogue lines into a single string. It should be unambiguous. For example, a\nnewline character or tab character.\n\n\n \n \n color_map: dict[str, str] | None\n\ndefault `= None`\n\nA dictionary mapping speaker names to colors. The colors may be specified as\nhex codes or by their names. For example: {\"Speaker 1\": \"red\", \"Speaker 2\":\n\"FFEE22\"}. If not provided, default colors will be assigned to speakers. This\nis only used if `interactive` is False.\n\n\n \n \n label: str | None\n\ndefault `= \"Dialogue\"`\n\nthe label for this component, displayed above the component if `show_label` is\n`True` and is also used as the header if there is a table of examples for\nthis component. If None and used in a `gr.Interface`, the label will be the\nname of the parameter this component corresponds to.\n\n\n \n \n info: str | None\n\ndefault `= \"Type colon (:) in the dialogue line to see the available tags\"`\n\n\n \n \n placeholder: str | None\n\ndefault `= None`\n\nplaceholder hint to provide behind textarea.\n\n\n \n \n show_label: bool | None\n\ndefault `= None`\n\nif True, will display the label. If False, the copy button is hidden as well\nas the label.\n\n\n \n \n container: bool\n\ndefault `= True`\n\nif True, will place the component in a container - providing some extra\npadding around the border.\n\n\n \n \n scale: int | None\n\ndefault `= None`\n\nrelative size compared to adjacent Components. For example if Components A and\nB are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide\nas B. Should be an integer. 
scale applies in Rows, and to top-level Components\nin Blocks where fill_height=True.\n\n\n \n \n min_width: int\n\ndefault `= 160`\n\nminimum pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respec", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "um pixel width, will wrap if not sufficient screen space to satisfy this\nvalue. If a certain scale value results in this Component being narrower than\nmin_width, the min_width parameter will be respected first.\n\n\n \n \n interactive: bool | None\n\ndefault `= None`\n\nif True, will be rendered as an editable textbox; if False, editing will be\ndisabled. If not provided, this is inferred based on whether the component is\nused as an input or output.\n\n\n \n \n visible: bool | Literal['hidden']\n\ndefault `= True`\n\nIf False, component will be hidden. If \"hidden\", component will be visually\nhidden and not take up space in the layout but still exist in the DOM\n\n\n \n \n elem_id: str | None\n\ndefault `= None`\n\nAn optional string that is assigned as the id of this component in the HTML\nDOM. Can be used for targeting CSS styles.\n\n\n \n \n autofocus: bool\n\ndefault `= False`\n\nIf True, will focus on the textbox when the page loads. Use this carefully, as\nit can cause usability issues for sighted and non-sighted users.\n\n\n \n \n autoscroll: bool\n\ndefault `= True`\n\nIf True, will automatically scroll to the bottom of the textbox when the value\nchanges, unless the user scrolls up. If False, will not scroll to the bottom\nof the textbox when the value changes.\n\n\n \n \n elem_classes: list[str] | str | None\n\ndefault `= None`\n\nAn optional list of strings that are assigned as the classes of this component\nin the HTML DOM. 
Can be used for targeting CSS styles.\n\n\n \n \n render: bool\n\ndefault `= True`\n\nIf False, component will not be rendered in the Blocks context. Should\nbe used if the intention is to assign event listeners now but render the\ncomponent later.\n\n\n \n \n key: int | str | None\n\ndefault `= None`\n\nif assigned, will be used to assume identity across a re-render. Components\nthat have the same key across a re-render will have their value preserved.\n\n\n \n \n max_lines: int | None\n\ndefault `= None`\n\nmaximum number of lines allo", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "identity across a re-render. Components\nthat have the same key across a re-render will have their value preserved.\n\n\n \n \n max_lines: int | None\n\ndefault `= None`\n\nmaximum number of lines allowed in the dialogue.\n\n\n \n \n buttons: list[Literal['copy'] | Button] | None\n\ndefault `= None`\n\nA list of buttons to show for the component. Valid options are \"copy\" or a\ngr.Button() instance. The \"copy\" button allows the user to copy the text in\nthe textbox. Custom gr.Button() instances will appear in the toolbar with\ntheir configured icon and/or label, and clicking them will trigger any\n.click() events registered on the button. By default, no buttons are shown.\n\n\n \n \n submit_btn: str | bool | None\n\ndefault `= False`\n\nIf False, will not show a submit button. If True, will show a submit button\nwith an icon. If a string, will use that string as the submit button text.\n\n\n \n \n ui_mode: Literal['dialogue', 'text', 'both']\n\ndefault `= \"both\"`\n\nDetermines the user interface mode of the component. Can be \"dialogue\"\n(displays dialogue lines), \"text\" (displays a single text input), or \"both\"\n(displays both dialogue lines and a text input). 
Defaults to \"both\".\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "Class| Interface String Shortcut| Initialization \n---|---|--- \n`gradio.Dialogue`| \"dialogue\"| Uses default values \n \n", "heading1": "Shortcuts", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "dia_dialogue_demo\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Dialogue component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener| Description \n---|--- \n`Dialogue.change(fn, \u00b7\u00b7\u00b7)`| Triggered when the value of the Dialogue changes\neither because of user input (e.g. a user types in a textbox) OR because of a\nfunction update (e.g. an image receives a value from the output of an event\ntrigger). See `.input()` for a listener that is only triggered by user input. \n`Dialogue.input(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user changes\nthe value of the Dialogue. \n`Dialogue.submit(fn, \u00b7\u00b7\u00b7)`| This listener is triggered when the user presses\nthe Enter key while the Dialogue is focused. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n fn: Callable | None | Literal['decorator']\n\ndefault `= \"decorator\"`\n\nthe function to call when this event is triggered. Often a machine learning\nmodel's prediction function. 
Each parameter of the function corresponds to one\ninput component, and the function should return a single value or a tuple of\nvalues, with each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as inputs. If the function takes no inputs,\nthis should be an empty list.\n\n\n \n \n outputs: Component | BlockContext | list[Component | BlockContext] | Set[Component | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndef", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "onent | BlockContext] | None\n\ndefault `= None`\n\nList of gradio.components to use as outputs. If the function returns no\noutputs, this should be an empty list.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the endpoint appears in the API docs. Can be a string or None. If\nset to a string, the endpoint will be exposed in the API docs with the given\nname. If None (default), the name of the function will be used as the API\nendpoint.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n scroll_to_output: bool\n\ndefault `= False`\n\nIf True, will scroll to output component on completion\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= \"full\"`\n\nhow to show the progress animation while event is running: \"full\" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, \"minimal\" only shows the runtime display, \"hidden\"\nshows no progress animation at all\n\n\n \n \n show_progress_on: Component | list[Component] | None\n\ndefault `= None`\n\nComponent or list of components to show the progress animation on. If None,\nwill show the progress animation on all of the output components.\n\n\n \n \n queue: bool\n\ndefault `= True`\n\nIf True, will place the request on the queue, if the queue has been enabled.\nIf False, will not put this event on the queue, even if the queue has been\nenabled. If None, will use the queue setting of the gradio app.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\no", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": " \n \n batch: bool\n\ndefault `= False`\n\nIf True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. The lists should be\nof equal length (and be up to length `max_batch_size`). 
The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nMaximum number of inputs to batch together if this is called from the queue\n(only relevant if batch=True)\n\n\n \n \n preprocess: bool\n\ndefault `= True`\n\nIf False, will not run preprocessing of component data before running 'fn'\n(e.g. leaving it as a base64 string if this method is called with the `Image`\ncomponent).\n\n\n \n \n postprocess: bool\n\ndefault `= True`\n\nIf False, will not run postprocessing of component data before returning 'fn'\noutput to the browser.\n\n\n \n \n cancels: dict[str, Any] | list[dict[str, Any]] | None\n\ndefault `= None`\n\nA list of other events to cancel when this listener is triggered. For example,\nsetting cancels=[click_event] will cancel the click_event, where click_event\nis the return value of another components .click method. Functions that have\nnot yet run (or generators that are iterating) will be cancelled, but\nfunctions that are currently running will be allowed to finish.\n\n\n \n \n trigger_mode: Literal['once', 'multiple', 'always_last'] | None\n\ndefault `= None`\n\nIf \"once\" (default for all events except `.change()`) would not allow any\nsubmissions while an event is pending. If set to \"multiple\", unlimited\nsubmissions are allowed while pending, and \"always_last\" (default for\n`.change()` and `.key_up()` events) would allow a second submission after the\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. 
Input arguments for js\nmethod are values of 'inpu", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "e\npending event is complete.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nOptional frontend js method to run before running 'fn'. Input arguments for js\nmethod are values of 'inputs' and 'outputs', return should be a list of values\nfor output components.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= \"default\"`\n\nIf set, this is the maximum number of this event that can be running\nsimultaneously. Can be set to None to mean no concurrency_limit (any number of\nthis event can be running simultaneously). Set to \"default\" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`Blocks.queue()`, which itself is 1 by default).\n\n\n \n \n concurrency_id: str | None\n\ndefault `= None`\n\nIf set, this is the id of the concurrency group. Events with the same\nconcurrency_id will be limited by the lowest set concurrency_limit.\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= \"public\"`\n\ncontrols the visibility and accessibility of this endpoint. Can be \"public\"\n(shown in API docs and callable by clients), \"private\" (hidden from API docs\nand not callable by clients), or \"undocumented\" (hidden from API docs but\ncallable by clients and via gr.load). If fn is None, api_visibility will\nautomatically be set to \"private\".\n\n\n \n \n time_limit: int | None\n\ndefault `= None`\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\n\n \n \n key: int | str | tuple[int | str, ...] | None\n\ndefault `= None`\n\nA unique key for this event listener to be used in @gr.render(). 
If set, this\nvalue identifies an event as identical across re-renders when the key is\nidentical.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\nOptional validation function to run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the sam", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "run before the main function. If provided,\nthis function will be executed first with queue=False, and only if it\ncompletes successfully will the main function be called. The validator\nreceives the same inputs as the main function and should return a\n`gr.validate()` for each input value.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/gradio/dialogue", "source_page_title": "Gradio - Dialogue Docs"}, {"text": "Interface is Gradio's main high-level class, and allows you to create a\nweb-based GUI / demo around a machine learning model (or any Python function)\nin a few lines of code. You must specify three parameters: (1) the function to\ncreate a GUI for (2) the desired input components and (3) the desired output\ncomponents. Additional parameters can be used to control the appearance and\nbehavior of the demo. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "import gradio as gr\n \n def image_classifier(inp):\n return {'cat': 0.3, 'dog': 0.7}\n \n demo = gr.Interface(fn=image_classifier, inputs=\"image\", outputs=\"label\")\n demo.launch()\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n fn: Callable\n\nthe function to wrap an interface around. 
Often a machine learning model's\nprediction function. Each parameter of the function corresponds to one input\ncomponent, and the function should return a single value or a tuple of values,\nwith each element in the tuple corresponding to one output component.\n\n\n \n \n inputs: str | Component | list[str | Component] | None\n\na single Gradio component, or list of Gradio components. Components can either\nbe passed as instantiated objects, or referred to by their string shortcuts.\nThe number of input components should match the number of parameters in fn. If\nset to None, then only the output components will be displayed.\n\n\n \n \n outputs: str | Component | list[str | Component] | None\n\na single Gradio component, or list of Gradio components. Components can either\nbe passed as instantiated objects, or referred to by their string shortcuts.\nThe number of output components should match the number of values returned by\nfn. If set to None, then only the input components will be displayed.\n\n\n \n \n examples: list[Any] | list[list[Any]] | str | None\n\ndefault `= None`\n\nsample inputs for the function; if provided, appear below the UI components\nand can be clicked to populate the interface. Should be nested list, in which\nthe outer list consists of samples and each inner list consists of an input\ncorresponding to each input component. A string path to a directory of\nexamples can also be provided, but it should be within the directory with the\npython file running the gradio app. If there are multiple input components and\na directory is provided, a log.csv file must be present in the directory to\nlink corresponding inputs.\n\n\n \n \n cache_examples: bool | None\n\ndefault `= None`\n\nIf True, caches examples in the server for fast runtime in examples. If\n\"lazy\", then examples are cached (for all users of the app) after their first\nuse (by any user of the app). 
If None, will use ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "\n\nIf True, caches examples in the server for fast runtime in examples. If\n\"lazy\", then examples are cached (for all users of the app) after their first\nuse (by any user of the app). If None, will use the GRADIO_CACHE_EXAMPLES\nenvironment variable, which should be either \"true\" or \"false\". In HuggingFace\nSpaces, this parameter defaults to True (as long as `fn` and `outputs` are\nalso provided). Note that examples are cached separately from Gradio's queue()\nso certain features, such as gr.Progress(), gr.Info(), gr.Warning(), etc. will\nnot be displayed in Gradio's UI for cached examples.\n\n\n \n \n cache_mode: Literal['eager', 'lazy'] | None\n\ndefault `= None`\n\nif \"lazy\", examples are cached after their first use. If \"eager\", all examples\nare cached at app launch. If None, will use the GRADIO_CACHE_MODE environment\nvariable if defined, or default to \"eager\". In HuggingFace Spaces, this\nparameter defaults to \"eager\" except for ZeroGPU Spaces, in which case it\ndefaults to \"lazy\".\n\n\n \n \n examples_per_page: int\n\ndefault `= 10`\n\nif examples are provided, how many to display per page.\n\n\n \n \n example_labels: list[str] | None\n\ndefault `= None`\n\na list of labels for each example. If provided, the length of this list should\nbe the same as the number of examples, and these labels will be used in the UI\ninstead of rendering the example values.\n\n\n \n \n preload_example: int | Literal[False]\n\ndefault `= 0`\n\nIf an integer is provided (and examples are being cached eagerly and none of\nthe input components have a developer-provided `value`), the example at that\nindex in the examples list will be preloaded when the Gradio app is first\nloaded. 
If False, no example will be preloaded.\n\n\n \n \n live: bool\n\ndefault `= False`\n\nwhether the interface should automatically rerun if any of the inputs change.\n\n\n \n \n title: str | I18nData | None\n\ndefault `= None`\n\na title for the interface; if provided, appears above the input and output\ncomponents in larg", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "tically rerun if any of the inputs change.\n\n\n \n \n title: str | I18nData | None\n\ndefault `= None`\n\na title for the interface; if provided, appears above the input and output\ncomponents in large font. Also used as the tab title when opened in a browser\nwindow.\n\n\n \n \n description: str | None\n\ndefault `= None`\n\na description for the interface; if provided, appears above the input and\noutput components and beneath the title in regular font. Accepts Markdown and\nHTML content.\n\n\n \n \n article: str | None\n\ndefault `= None`\n\nan expanded article explaining the interface; if provided, appears below the\ninput and output components in regular font. Accepts Markdown and HTML\ncontent. If it is an HTTP(S) link to a downloadable remote file, the content\nof this file is displayed.\n\n\n \n \n flagging_mode: Literal['never'] | Literal['auto'] | Literal['manual'] | None\n\ndefault `= None`\n\none of \"never\", \"auto\", or \"manual\". If \"never\" or \"auto\", users will not see\na button to flag an input and output. If \"manual\", users will see a button to\nflag. If \"auto\", every input the user submits will be automatically flagged,\nalong with the generated output. If \"manual\", both the input and outputs are\nflagged when the user clicks flag button. 
This parameter can be set with the\nenvironment variable GRADIO_FLAGGING_MODE; otherwise defaults to "manual".\n\n\n \n \n flagging_options: list[str] | list[tuple[str, str]] | None\n\ndefault `= None`\n\nif provided, allows the user to select from the list of options when flagging.\nOnly applies if flagging_mode is "manual". Can either be a list of tuples of\nthe form (label, value), where label is the string that will be displayed on\nthe button and value is the string that will be stored in the flagging CSV; or\nit can be a list of strings ["X", "Y"], in which case the values will be the\nlist of strings and the labels will be ["Flag as X", "Flag as Y"], etc.\n\n\n \n \n flagging_dir: str\n\ndefault `= ".gradio/flagged"`\n\npath to the di", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "s ["X", "Y"], in which case the values will be the\nlist of strings and the labels will be ["Flag as X", "Flag as Y"], etc.\n\n\n \n \n flagging_dir: str\n\ndefault `= ".gradio/flagged"`\n\npath to the directory where flagged data is stored. If the directory does not\nexist, it will be created.\n\n\n \n \n flagging_callback: FlaggingCallback | None\n\ndefault `= None`\n\neither None or an instance of a subclass of FlaggingCallback which will be\ncalled when a sample is flagged. If set to None, an instance of\ngradio.flagging.CSVLogger will be created and logs will be saved to a local\nCSV file in flagging_dir. Defaults to None.\n\n\n \n \n analytics_enabled: bool | None\n\ndefault `= None`\n\nwhether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED\nenvironment variable if defined, or default to True.\n\n\n \n \n batch: bool\n\ndefault `= False`\n\nif True, then the function should process a batch of inputs, meaning that it\nshould accept a list of input values for each parameter. 
The lists should be\nof equal length (and each of length up to `max_batch_size`). The function is then\n*required* to return a tuple of lists (even if there is only 1 output\ncomponent), with each list in the tuple corresponding to one output component.\n\n\n \n \n max_batch_size: int\n\ndefault `= 4`\n\nthe maximum number of inputs to batch together if this is called from the\nqueue (only relevant if batch=True).\n\n\n \n \n api_visibility: Literal['public', 'private', 'undocumented']\n\ndefault `= "public"`\n\nControls the visibility of the prediction endpoint. Can be "public" (shown in\nAPI docs and callable), "private" (hidden from API docs and not callable), or\n"undocumented" (hidden from API docs but callable).\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\ndefines how the prediction endpoint appears in the API docs. Can be a string\nor None. If set to a string, the endpoint will be exposed in the API docs with\nthe given name. If None, the name of the function will be used.\n\n\n ", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "ion endpoint appears in the API docs. Can be a string\nor None. If set to a string, the endpoint will be exposed in the API docs with\nthe given name. If None, the name of the function will be used.\n\n\n \n \n api_description: str | None | Literal[False]\n\ndefault `= None`\n\nDescription of the API endpoint. Can be a string, None, or False. If set to a\nstring, the endpoint will be exposed in the API docs with the given\ndescription. If None, the function's docstring will be used as the API\nendpoint description. 
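A minimal sketch of a function satisfying the `batch=True` contract described above (the function name and logic are illustrative, not taken from the docs): it accepts one list per input parameter and returns a tuple of equal-length lists, one per output component.

```python
# Illustrative batched function for use with gr.Interface(..., batch=True).
# It receives a list of values for its single input parameter and returns a
# tuple containing one list, since this Interface has one output component.
def reverse_batch(texts):
    return ([t[::-1] for t in texts],)
```

It could then be wired up as `gr.Interface(reverse_batch, "text", "text", batch=True, max_batch_size=4)`.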
If False, then no description will be displayed in the\nAPI docs.\n\n\n \n \n allow_duplication: bool\n\ndefault `= False`\n\nif True, a 'Duplicate Spaces' button will be shown on Hugging Face Spaces.\n\n\n \n \n concurrency_limit: int | None | Literal['default']\n\ndefault `= "default"`\n\nif set, this is the maximum number of instances of this event that can run\nsimultaneously. Can be set to None for no concurrency limit (any number of\ninstances of this event can run simultaneously). Set to "default" to use the default\nconcurrency limit (defined by the `default_concurrency_limit` parameter in\n`.queue()`, which itself is 1 by default).\n\n\n \n \n additional_inputs: str | Component | list[str | Component] | None\n\ndefault `= None`\n\na single Gradio component, or list of Gradio components. Components can either\nbe passed as instantiated objects, or referred to by their string shortcuts.\nThese components will be rendered in an accordion below the main input\ncomponents. By default, no additional input components will be displayed.\n\n\n \n \n additional_inputs_accordion: str | Accordion | None\n\ndefault `= None`\n\nif a string is provided, this is the label of the `gr.Accordion` to use to\ncontain additional inputs. A `gr.Accordion` object can be provided as well to\nconfigure other properties of the container holding the additional inputs.\nDefaults to a `gr.Accordion(label="Additional Inputs", open=False)`. This\nparameter is only used if `additional_inputs` is provide", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "gure other properties of the container holding the additional inputs.\nDefaults to a `gr.Accordion(label="Additional Inputs", open=False)`. This\nparameter is only used if `additional_inputs` is provided.\n\n\n \n \n submit_btn: str | Button\n\ndefault `= "Submit"`\n\nthe button to use for submitting inputs. 
Defaults to a `gr.Button("Submit",\nvariant="primary")`. This parameter does not apply if the Interface is\noutput-only, in which case the submit button always displays "Generate". Can be set\nto a string (which becomes the button label) or a `gr.Button` object (which\nallows for more customization).\n\n\n \n \n stop_btn: str | Button\n\ndefault `= "Stop"`\n\nthe button to use for stopping the interface. Defaults to a `gr.Button("Stop",\nvariant="stop", visible=False)`. Can be set to a string (which becomes the\nbutton label) or a `gr.Button` object (which allows for more customization).\n\n\n \n \n clear_btn: str | Button | None\n\ndefault `= "Clear"`\n\nthe button to use for clearing the inputs. Defaults to a `gr.Button("Clear",\nvariant="secondary")`. Can be set to a string (which becomes the button label)\nor a `gr.Button` object (which allows for more customization). Can be set to\nNone, which hides the button.\n\n\n \n \n delete_cache: tuple[int, int] | None\n\ndefault `= None`\n\na tuple corresponding to [frequency, age], both expressed in seconds.\nEvery `frequency` seconds, the temporary files created by this Blocks instance\nwill be deleted if more than `age` seconds have passed since the file was\ncreated. For example, setting this to (86400, 86400) will delete temporary\nfiles every day. 
The cache will be deleted entirely when the server restarts.\nIf None, no cache deletion will occur.\n\n\n \n \n show_progress: Literal['full', 'minimal', 'hidden']\n\ndefault `= "full"`\n\nhow to show the progress animation while the event is running: "full" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, "minimal" o", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "`= "full"`\n\nhow to show the progress animation while the event is running: "full" shows a\nspinner which covers the output component area as well as a runtime display in\nthe upper right corner, "minimal" only shows the runtime display, "hidden"\nshows no progress animation at all\n\n\n \n \n fill_width: bool\n\ndefault `= False`\n\nwhether to horizontally expand to fill container fully. If False, centers and\nconstrains app to a maximum width.\n\n\n \n \n time_limit: int | None\n\ndefault `= 30`\n\nThe time limit for the stream to run. Default is 30 seconds. Parameter only\nused for streaming images or audio if the interface is live and the input\ncomponents are set to "streaming=True".\n\n\n \n \n stream_every: float\n\ndefault `= 0.5`\n\nThe latency (in seconds) at which stream chunks are sent to the backend.\nDefaults to 0.5 seconds. 
Parameter only used for streaming images or audio if\nthe interface is live and the input components are set to "streaming=True".\n\n\n \n \n deep_link: str | DeepLinkButton | bool | None\n\ndefault `= None`\n\na string or `gr.DeepLinkButton` object that creates a unique URL you can use\nto share your app and all components **as they currently are** with others.\nAutomatically enabled on Hugging Face Spaces unless explicitly set to False.\n\n\n \n \n validator: Callable | None\n\ndefault `= None`\n\na function that takes in the inputs and can optionally return a gr.validate()\nobject for each input.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "hello_world, hello_world_2, hello_world_3\n\n", "heading1": "Demos", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "", "heading1": "Methods", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": 
"
gradio.Interface.launch(\u00b7\u00b7\u00b7)\n\nDescription\n", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "
Launches a simple web server that serves the demo. Can also be used to create\na public link used by anyone to access the demo from their browser by setting\nshare=True.\n\nExample Usage\n", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "
import gradio as gr\ndef reverse(text):\n    return text[::-1]\ndemo = gr.Interface(reverse, "text", "text")\ndemo.launch(share=True, auth=("username", "password"))\n\nParameters \u25bc\n\n\n \n \n inline: bool | None\n\ndefault `= None`\n\nwhether to display in the gradio app inline in an iframe. Defaults to True in\npython notebooks; False otherwise.\n\n\n \n \n inbrowser: bool\n\ndefault `= False`\n\nwhether to automatically launch the gradio app in a new tab on the de", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "pp inline in an iframe. Defaults to True in\npython notebooks; False otherwise.\n\n\n \n \n inbrowser: bool\n\ndefault `= False`\n\nwhether to automatically launch the gradio app in a new tab on the default\nbrowser.\n\n\n \n \n share: bool | None\n\ndefault `= None`\n\nwhether to create a publicly shareable link for the gradio app. Creates an SSH\ntunnel to make your UI accessible from anywhere. If not provided, it is set to\nFalse by default every time, except when running in Google Colab. When\nlocalhost is not accessible (e.g. Google Colab), setting share=False is not\nsupported. Can be set by environment variable GRADIO_SHARE=True.\n\n\n \n \n debug: bool\n\ndefault `= False`\n\nif True, blocks the main thread from running. If running in Google Colab, this\nis needed to print the errors in the cell output.\n\n\n \n \n max_threads: int\n\ndefault `= 40`\n\nthe maximum number of total threads that the Gradio app can generate in\nparallel. 
The default is inherited from the starlette library (currently 40).\n\n\n \n \n auth: Callable[[str, str], bool] | tuple[str, str] | list[tuple[str, str]] | None\n\ndefault `= None`\n\nIf provided, username and password (or list of username-password tuples)\nrequired to access the app. Can also provide a function that takes a username and\npassword and returns True for a valid login.\n\n\n \n \n auth_message: str | None\n\ndefault `= None`\n\nIf provided, an HTML message shown on the login page.\n\n\n \n \n prevent_thread_lock: bool\n\ndefault `= False`\n\nBy default, the gradio app blocks the main thread while the server is running.\nIf set to True, the gradio app will not block and the gradio server will\nterminate as soon as the script finishes.\n\n\n \n \n show_error: bool\n\ndefault `= False`\n\nIf True, any errors in the gradio app will be displayed in an alert modal and\nprinted in the browser console log. They will also be displayed in the alert\nmodal of downstream apps that gr.load() this app.\n\n\n \n \n server_name: str | None\n\ndefault `= No", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "an alert modal and\nprinted in the browser console log. They will also be displayed in the alert\nmodal of downstream apps that gr.load() this app.\n\n\n \n \n server_name: str | None\n\ndefault `= None`\n\nto make the app accessible on the local network, set this to "0.0.0.0". Can be set by\nenvironment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".\n\n\n \n \n server_port: int | None\n\ndefault `= None`\n\nwill start the gradio app on this port (if available). Can be set by environment\nvariable GRADIO_SERVER_PORT. 
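The callable form of `auth` described above can be sketched as follows (the credential store and function name here are hypothetical, for illustration only):

```python
# Hypothetical credential check for demo.launch(auth=check_login): it is
# called with the submitted username and password, and returns True only
# for a valid login.
VALID_USERS = {"alice": "wonderland", "bob": "builder"}

def check_login(username, password):
    return VALID_USERS.get(username) == password
```

Real deployments would check against a proper user store rather than a hard-coded dict.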
If None, will search for an available port\nstarting at 7860.\n\n\n \n \n height: int\n\ndefault `= 500`\n\nThe height in pixels of the iframe element containing the gradio app (used if\ninline=True)\n\n\n \n \n width: int | str\n\ndefault `= \"100%\"`\n\nThe width in pixels of the iframe element containing the gradio app (used if\ninline=True)\n\n\n \n \n favicon_path: str | Path | None\n\ndefault `= None`\n\nIf a path to a file (.png, .gif, or .ico) is provided, it will be used as the\nfavicon for the web page.\n\n\n \n \n ssl_keyfile: str | None\n\ndefault `= None`\n\nIf a path to a file is provided, will use this as the private key file to\ncreate a local server running on https.\n\n\n \n \n ssl_certfile: str | None\n\ndefault `= None`\n\nIf a path to a file is provided, will use this as the signed certificate for\nhttps. Needs to be provided if ssl_keyfile is provided.\n\n\n \n \n ssl_keyfile_password: str | None\n\ndefault `= None`\n\nIf a password is provided, will use this with the ssl certificate for https.\n\n\n \n \n ssl_verify: bool\n\ndefault `= True`\n\nIf False, skips certificate validation which allows self-signed certificates\nto be used.\n\n\n \n \n quiet: bool\n\ndefault `= False`\n\nIf True, suppresses most print statements.\n\n\n \n \n footer_links: list[Literal['api', 'gradio', 'settings'] | dict[str, str]] | None\n\ndefault `= None`\n\nThe links to display in the footer of the app. Accepts a list, where each\nelement of the list must be one of", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "inks: list[Literal['api', 'gradio', 'settings'] | dict[str, str]] | None\n\ndefault `= None`\n\nThe links to display in the footer of the app. Accepts a list, where each\nelement of the list must be one of \"api\", \"gradio\", or \"settings\"\ncorresponding to the API docs, \"built with Gradio\", and settings pages\nrespectively. 
If None, all three links will be shown in the footer. An empty\nlist means that no footer is shown.\n\n\n \n \n allowed_paths: list[str] | None\n\ndefault `= None`\n\nList of complete filepaths or parent directories that gradio is allowed to\nserve. Must be absolute paths. Warning: if you provide directories, any files\nin these directories or their subdirectories are accessible to all users of\nyour app. Can be set by comma separated environment variable\nGRADIO_ALLOWED_PATHS. These files are generally assumed to be secure and will\nbe displayed in the browser when possible.\n\n\n \n \n blocked_paths: list[str] | None\n\ndefault `= None`\n\nList of complete filepaths or parent directories that gradio is not allowed to\nserve (i.e. users of your app are not allowed to access). Must be absolute\npaths. Warning: takes precedence over `allowed_paths` and all other\ndirectories exposed by Gradio by default. Can be set by comma separated\nenvironment variable GRADIO_BLOCKED_PATHS.\n\n\n \n \n root_path: str | None\n\ndefault `= None`\n\nThe root path (or \"mount point\") of the application, if it's not served from\nthe root (\"/\") of the domain. Often used when the application is behind a\nreverse proxy that forwards requests to the application. For example, if the\napplication is served at \"https://example.com/myapp\", the `root_path` should\nbe set to \"/myapp\". A full URL beginning with http:// or https:// can be\nprovided, which will be used as the root path in its entirety. Can be set by\nenvironment variable GRADIO_ROOT_PATH. Defaults to \"\".\n\n\n \n \n app_kwargs: dict[str, Any] | None\n\ndefault `= None`\n\nAdditional keyword arguments to pass to the underlying FastAPI app", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "be set by\nenvironment variable GRADIO_ROOT_PATH. 
Defaults to \"\".\n\n\n \n \n app_kwargs: dict[str, Any] | None\n\ndefault `= None`\n\nAdditional keyword arguments to pass to the underlying FastAPI app as a\ndictionary of parameter keys and argument values. For example, `{\"docs_url\":\n\"/docs\"}`\n\n\n \n \n state_session_capacity: int\n\ndefault `= 10000`\n\nThe maximum number of sessions whose information to store in memory. If the\nnumber of sessions exceeds this number, the oldest sessions will be removed.\nReduce capacity to reduce memory usage when using gradio.State or returning\nupdated components from functions. Defaults to 10000.\n\n\n \n \n share_server_address: str | None\n\ndefault `= None`\n\nUse this to specify a custom FRP server and port for sharing Gradio apps (only\napplies if share=True). If not provided, will use the default FRP server at\nhttps://gradio.live. See https://github.com/huggingface/frp for more\ninformation.\n\n\n \n \n share_server_protocol: Literal['http', 'https'] | None\n\ndefault `= None`\n\nUse this to specify the protocol to use for the share links. Defaults to\n\"https\", unless a custom share_server_address is provided, in which case it\ndefaults to \"http\". If you are using a custom share_server_address and want to\nuse https, you must set this to \"https\".\n\n\n \n \n share_server_tls_certificate: str | None\n\ndefault `= None`\n\nThe path to a TLS certificate file to use when connecting to a custom share\nserver. This parameter is not used with the default FRP server at\nhttps://gradio.live. Otherwise, you must provide a valid TLS certificate file\n(e.g. 
a \"cert.pem\") relative to the current working directory, or the\nconnection will not use TLS encryption, which is insecure.\n\n\n \n \n auth_dependency: Callable[[fastapi.Request], str | None] | None\n\ndefault `= None`\n\nA function that takes a FastAPI request and returns a string user ID or None.\nIf the function returns None for a specific request, that user is not\nauthorized to access the", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "] | None\n\ndefault `= None`\n\nA function that takes a FastAPI request and returns a string user ID or None.\nIf the function returns None for a specific request, that user is not\nauthorized to access the app (they will see a 401 Unauthorized response). To\nbe used with external authentication systems like OAuth. Cannot be used with\n`auth`.\n\n\n \n \n max_file_size: str | int | None\n\ndefault `= None`\n\nThe maximum file size in bytes that can be uploaded. Can be a string of the\nform \"\", where value is any positive integer and unit is one of\n\"b\", \"kb\", \"mb\", \"gb\", \"tb\". If None, no limit is set.\n\n\n \n \n enable_monitoring: bool | None\n\ndefault `= None`\n\nEnables traffic monitoring of the app through the /monitoring endpoint. By\ndefault is None, which enables this endpoint. If explicitly True, will also\nprint the monitoring URL to the console. If False, will disable monitoring\naltogether.\n\n\n \n \n strict_cors: bool\n\ndefault `= True`\n\nIf True, prevents external domains from making requests to a Gradio server\nrunning on localhost. If False, allows requests to localhost that originate\nfrom localhost but also, crucially, from \"null\". 
This parameter should\nnormally be True to prevent CSRF attacks but may need to be False when\nembedding a *locally-running Gradio app* using web components.\n\n\n \n \n node_server_name: str | None\n\ndefault `= None`\n\n\n \n \n node_port: int | None\n\ndefault `= None`\n\n\n \n \n ssr_mode: bool | None\n\ndefault `= None`\n\nIf True, the Gradio app will be rendered using server-side rendering mode,\nwhich is typically more performant and provides better SEO, but this requires\nNode 20+ to be installed on the system. If False, the app will be rendered\nusing client-side rendering mode. If None, will use GRADIO_SSR_MODE\nenvironment variable or default to False.\n\n\n \n \n pwa: bool | None\n\ndefault `= None`\n\nIf True, the Gradio app will be set up as an installable PWA (Progressive Web\nApp). If set to None (default ", "heading1": "launch", "source_page_url": "https://gradio.app/docs/gradio/interface", "source_page_title": "Gradio - Interface Docs"}, {"text": "vironment variable or default to False.\n\n\n \n \n pwa: bool | None\n\ndefault `= None`\n\nIf True, the Gradio app will be set up as an installable PWA (Progressive Web\nApp). If set to None (default behavior), then the PWA feature will be enabled\nif this Gradio app is launched on Spaces, but not otherwise.\n\n\n \n \n mcp_server: bool | None\n\ndefault `= None`\n\nIf True, the Gradio app will be set up as an MCP server and documented\nfunctions will be added as MCP tools. If None (default behavior), then the\nGRADIO_MCP_SERVER environment variable will be used to determine if the MCP\nserver should be enabled.\n\n\n \n \n i18n: I18n | None\n\ndefault `= None`\n\nAn I18n instance containing custom translations, which are used to translate\nstrings in our components (e.g. 
the labels of components or Markdown strings).\nThis feature can only be used to translate static text in the frontend, not\nvalues in the backend.\n\n\n \n \n theme: Theme | str | None\n\ndefault `= None`\n\nA Theme object or a string representing a theme. If a string, will look for a\nbuilt-in theme with that name (e.g. "soft" or "default"), or will attempt to\nload a theme from the Hugging Face Hub (e.g. "gradio/monochrome"). If None,\nwill use the Default theme.\n\n\n \n \n css: str | None\n\ndefault `= None`\n\nCustom css as a code string. This css will be included in the demo webpage.\n\n\n \n \n css_paths: str | Path | list[str | Path] | None\n\ndefault `= None`\n\nCustom css as a pathlib.Path to a css file or a list of such paths. These css\nfiles will be read, concatenated, and included in the demo webpage. If the\n`css` parameter is also set, the css from `css` will be included first.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nCustom js as a code string. The js code will automatically be executed when\nthe page loads. For more flexibility, use the head parameter to insert js\ninside \n```\n\n3. That's it!\n\nYour website now has a chat widget that connects to your Gradio app! Users can click the chat button to open the widget and start interacting with your app.\n\nCustomization\n\nYou can customize the appearance of the widget by modifying the CSS. 
Some ideas:\n- Change the colors to match your website's theme\n- Adjust the size and position of the widget\n- Add animations for opening/closing\n- Modify the message styling\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are hap", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", "source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", "source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "Chatbots are a popular application of large language models (LLMs). Using Gradio, you can easily build a chat application and share that with your users, or try it yourself using an intuitive UI.\n\nThis tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a _few lines of Python_. 
It can be easily adapted to support multimodal chatbots, or chatbots that require further customization.\n\n**Prerequisites**: please make sure you are using the latest version of Gradio:\n\n```bash\n$ pip install --upgrade gradio\n```\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you have a chat server serving an OpenAI-API compatible endpoint (such as Ollama), you can spin up a ChatInterface in a single line of Python. First, also run `pip install openai`. Then, with your own URL, model, and optional token:\n\n```python\nimport gradio as gr\n\ngr.load_chat(\"http://localhost:11434/v1/\", model=\"llama3.2\", token=\"***\").launch()\n```\n\nRead about `gr.load_chat` in [the docs](https://www.gradio.app/docs/gradio/load_chat). If you have your own model, keep reading to see how to create an application around any chat model in Python!\n\n", "heading1": "Note for OpenAI-API compatible endpoints", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To create a chat application with `gr.ChatInterface()`, the first thing you should do is define your **chat function**. In the simplest case, your chat function should accept two arguments: `message` and `history` (the arguments can be named anything, but must be in this order).\n\n- `message`: a `str` representing the user's most recent message.\n- `history`: a list of openai-style dictionaries with `role` and `content` keys, representing the previous conversation history. 
May also include additional keys representing message metadata.\n\nThe `history` would look like this:\n\n```python\n[\n {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"What is the capital of France?\"}]},\n {\"role\": \"assistant\", \"content\": [{\"type\": \"text\", \"text\": \"Paris\"}]}\n]\n```\n\nwhile the next `message` would be:\n\n```py\n\"And what is its largest city?\"\n```\n\nYour chat function simply needs to return: \n\n* a `str` value, which is the chatbot's response based on the chat `history` and most recent `message`, for example, in this case:\n\n```\nParis is also the largest city.\n```\n\nLet's take a look at a few example chat functions:\n\n**Example: a chatbot that randomly responds with yes or no**\n\nLet's write a chat function that responds `Yes` or `No` randomly.\n\nHere's our chat function:\n\n```python\nimport random\n\ndef random_response(message, history):\n return random.choice([\"Yes\", \"No\"])\n```\n\nNow, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:\n\n```python\nimport gradio as gr\n\ngr.ChatInterface(\n fn=random_response, \n).launch()\n```\n\nThat's it! Here's our running demo, try it out:\n\n$demo_chatinterface_random_response\n\n**Example: a chatbot that alternates between agreeing and disagreeing**\n\nOf course, the previous example was very simplistic, it didn't take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.\n\n```python\nimport gradio as gr\n\ndef alternatingl", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "t take user input or the previous history into account! 
Here's another simple example showing how to incorporate a user's input as well as the history.\n\n```python\nimport gradio as gr\n\ndef alternatingly_agree(message, history):\n if len([h for h in history if h['role'] == \"assistant\"]) % 2 == 0:\n return f\"Yes, I do think that: {message}\"\n else:\n return \"I don't think so\"\n\ngr.ChatInterface(\n fn=alternatingly_agree, \n).launch()\n```\n\nWe'll look at more realistic examples of chat functions in our next Guide, which shows [examples of using `gr.ChatInterface` with popular LLMs](../guides/chatinterface-examples). \n\n", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!\n\n```python\nimport time\nimport gradio as gr\n\ndef slow_echo(message, history):\n for i in range(len(message)):\n time.sleep(0.3)\n yield \"You typed: \" + message[: i+1]\n\ngr.ChatInterface(\n fn=slow_echo, \n).launch()\n```\n\nWhile the response is streaming, the \"Submit\" button turns into a \"Stop\" button that can be used to stop the generator function.\n\nTip: Even though you are yielding the latest message at each iteration, Gradio only sends the \"diff\" of each message from the server to the frontend, which reduces latency and data consumption over your network.\n\n", "heading1": "Streaming chatbots", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you're familiar with Gradio's `gr.Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. 
For example, you can:\n\n- add a title and description above your chatbot using `title` and `description` arguments.\n- add a theme or custom css using the `theme` and `css` arguments respectively when constructing the `ChatInterface`.\n- add `examples` and even enable `cache_examples`, which make it easier for users to try out your Chatbot.\n- customize the chatbot (e.g. to change the height or add a placeholder) or textbox (e.g. to add a max number of characters or add a placeholder).\n\n**Adding examples**\n\nYou can add preset examples to your `gr.ChatInterface` with the `examples` parameter, which takes a list of string examples. Any examples will appear as \"buttons\" within the Chatbot before any messages are sent. If you'd like to include images or other files as part of your examples, you can do so by using this dictionary format for each example instead of a string: `{\"text\": \"What's in this image?\", \"files\": [\"cheetah.jpg\"]}`. Each file will be a separate message that is added to your Chatbot history.\n\nYou can change the displayed text for each example by using the `example_labels` argument. You can add icons to each example as well using the `example_icons` argument. Both of these arguments take a list of strings, which should be the same length as the `examples` list.\n\nIf you'd like to cache the examples so that they are pre-computed and the results appear instantly, set `cache_examples=True`.\n\n**Customizing the chatbot or textbox component**\n\nIf you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox components. 
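As a quick illustration of the examples format described above, here is a hypothetical helper (not part of Gradio's API) that normalizes plain-string examples into the dictionary form used for examples with files:

```python
def normalize_example(example):
    """Normalize a ChatInterface example into the dict format described above.

    Plain strings become {"text": ..., "files": []}; dicts pass through.
    (Hypothetical helper for illustration, not part of Gradio's API.)
    """
    if isinstance(example, str):
        return {"text": example, "files": []}
    return {"text": example.get("text", ""), "files": example.get("files", [])}

examples = ["Hello", {"text": "What's in this image?", "files": ["cheetah.jpg"]}]
normalized = [normalize_example(e) for e in examples]
```

Either form can be passed in the same `examples` list; files listed under `"files"` become separate messages in the Chatbot history.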
Here's an example of how to apply the parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n if message.endswith(\"?\"):\n return \"Yes\"\n else:\n ", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "le of how to apply the parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n if message.endswith(\"?\"):\n return \"Yes\"\n else:\n return \"Ask me anything!\"\n\ngr.ChatInterface(\n yes_man,\n chatbot=gr.Chatbot(height=300),\n textbox=gr.Textbox(placeholder=\"Ask me a yes or no question\", container=False, scale=7),\n title=\"Yes Man\",\n description=\"Ask Yes Man any question\",\n theme=\"ocean\",\n examples=[\"Hello\", \"Am I cool?\", \"Are tomatoes vegetables?\"],\n cache_examples=True,\n).launch()\n```\n\nHere's another example that adds a \"placeholder\" for your chat interface, which appears before the user has started chatting. The `placeholder` argument of `gr.Chatbot` accepts Markdown or HTML:\n\n```python\ngr.ChatInterface(\n yes_man,\n chatbot=gr.Chatbot(placeholder=\"Your Personal Yes-Man
Ask Me Anything\"),\n...\n```\n\nThe placeholder appears vertically and horizontally centered in the chatbot.\n\n", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add multimodal capabilities to your chat interface. For example, you may want users to be able to upload images or files to your chatbot and ask questions about them. You can make your chatbot \"multimodal\" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.\n\nWhen `multimodal=True`, the signature of your chat function changes slightly: the first parameter of your function (what we referred to as `message` above) should accept a dictionary consisting of the submitted text and uploaded files that looks like this: \n\n```py\n{\n \"text\": \"user input\", \n \"files\": [\n \"uploaded_file_1_path.ext\",\n \"uploaded_file_2_path.ext\", \n ...\n ]\n}\n```\n\nThe second parameter of your chat function, `history`, will be in the same openai-style dictionary format as before. However, if the history contains uploaded files, the `content` key will be a list of dictionaries, each with a \"type\" key whose value is \"file\" and the file itself represented as a dictionary. All the files will be grouped in a single message in the history. So after uploading two files and asking a question, your history might look like this:\n\n```python\n[\n {\"role\": \"user\", \"content\": [{\"type\": \"file\", \"file\": {\"path\": \"cat1.png\"}},\n {\"type\": \"file\", \"file\": {\"path\": \"cat2.png\"}},\n {\"type\": \"text\", \"text\": \"What's the difference between these two images?\"}]}\n]\n```\n\nThe return type of your chat function does *not change* when setting `multimodal=True` (i.e. in the simplest case, you should still return a string value). We discuss more complex cases, e.g. 
returning files [below](returning-complex-responses).\n\nIf you are customizing a multimodal chat interface, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to set up and customize a multimodal chat interface:\n \n\n```python\nimport gradio as gr\n\ndef count_images(message, history):\n num_images = len(message[\"files\"])\n total_images = 0\n for message in history:\n for content in message[\"content\"]:\n if content[\"type\"] == \"file\":\n total_images += 1\n return f\"You just uploaded {num_images} images, total uploaded: {total_images+num_images}\"\n\ndemo = gr.ChatInterface(\n fn=count_images, \n examples=[\n {\"text\": \"No files\", \"files\": []}\n ], \n multimodal=True,\n textbox=gr.MultimodalTextbox(file_count=\"multiple\", file_types=[\"image\"], sources=[\"upload\", \"microphone\"])\n)\n\ndemo.launch()\n```\n\n", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add additional inputs to your chat function and expose them to your users through the chat UI. For example, you could add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. 
The `gr.ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.\n\nThe `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `\"textbox\"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot within a `gr.Accordion()`. \n\nHere's a complete example:\n\n$code_chatinterface_system_prompt\n\nIf the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.\n\n```python\nimport gradio as gr\nimport time\n\ndef echo(message, history, system_prompt, tokens):\n response = f\"System prompt: {system_prompt}\\n Message: {message}.\"\n for i in range(min(len(response), int(tokens))):\n time.sleep(0.05)\n yield response[: i+1]\n\nwith gr.Blocks() as demo:\n system_prompt = gr.Textbox(\"You are helpful AI.\", label=\"System Prompt\")\n slider = gr.Slider(10, 100, render=False)\n\n gr.ChatInterface(\n echo, additional_inputs=[system_prompt, slider],\n )\n\ndemo.launch()\n```\n\n**Examples with additional inputs**\n\nYou can also add example values for your additional inputs. Pass in a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. 
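Concretely, for the `echo` function above (which takes a system prompt and a token slider as two additional inputs), a well-formed `examples` list could look like this sketch (the values are illustrative):

```python
# Each inner list: [message, system_prompt, tokens], i.e. 1 + 2 elements,
# matching the two additional inputs of the `echo` example above.
num_additional_inputs = 2
examples = [
    ["Hello!", "You are helpful AI.", 50],
    ["Tell me a joke.", "You are a comedian.", 80],
]

# Every row must be 1 + len(additional_inputs) long.
assert all(len(row) == 1 + num_additional_inputs for row in examples)
```

This list would then be passed as `examples=examples` to `gr.ChatInterface` alongside `additional_inputs`.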
The first element in the inner list should be the example v", "heading1": "Additional Inputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "s to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should be the example value for the chat message, and each subsequent element should be an example value for one of the additional inputs, in order. When additional inputs are provided, examples are rendered in a table underneath the chat interface.\n\nIf you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).\n\n
If you need to render them after your `gr.ChatInterface`, you can set `render=False` when they are first defined and then `.render()` them in the appropriate section of your `gr.Blocks()` as we do in the example above.\n\n", "heading1": "Additional Outputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "We mentioned earlier that in the simplest case, your chat function should return a `str` response, which will be rendered as Markdown in the chatbot. However, you can also return more complex responses as we discuss below:\n\n\n**Returning files or Gradio components**\n\nCurrently, the following Gradio components can be displayed inside the chat interface:\n* `gr.Image`\n* `gr.Plot`\n* `gr.Audio`\n* `gr.HTML`\n* `gr.Video`\n* `gr.Gallery`\n* `gr.File`\n\nSimply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example that returns an audio file:\n\n```py\nimport gradio as gr\n\ndef music(message, history):\n if message.strip():\n return gr.Audio(\"https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav\")\n else:\n return \"Please provide the name of an artist\"\n\ngr.ChatInterface(\n music,\n textbox=gr.Textbox(placeholder=\"Which artist's music do you want to listen to?\", scale=7),\n).launch()\n```\n\nSimilarly, you could return image files with `gr.Image`, video files with `gr.Video`, or arbitrary files with the `gr.File` component.\n\n**Returning Multiple Messages**\n\nYou can return multiple assistant messages from your chat function simply by returning a `list` of messages, each of which is a valid chat type. 
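A minimal sketch of a chat function that returns two assistant messages in one turn (the function body is illustrative):

```python
def respond(message, history):
    # Returning a list produces multiple assistant messages in a single turn;
    # each element must itself be a valid chat return type (here, strings).
    return [f"Echo: {message}", "Anything else I can help with?"]
```

Each element of the returned list is rendered as its own assistant message in the chatbot.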
This lets you, for example, send a message along with files, as in the following example:\n\n$code_chatinterface_echo_multimodal\n\n\n**Displaying intermediate thoughts or tool usage**\n\nThe `gr.ChatInterface` class supports displaying intermediate thoughts or tool usage directly in the chatbot.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/nested-thought.png)\n\n To do this, you will need to return a `gr.ChatMessage` object from your chat function. Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:\n \n ```py\nMessageContent = Union[str, FileDataDict, FileData, Component]\n\n@dataclass\nclass ChatMessage:\n content: Me
Here's an example showing the usage:\n\n$code_chatinterface_thoughts\n\nYou can even show nested thoughts, which is useful for agent demos in which one tool may call other tools. To display nested thoughts, include \"id\" and \"parent_id\" keys in the \"metadata\" dictionary. Read our [dedicated guide on displaying intermediate thoughts and tool usage](/guides/agents-and-tool-usage) for more realistic examples.\n\n**Providing preset responses**\n\nWhen returning an assistant message, you may want to provide preset options that a user can choose in response. To do this, you will again return a `gr.ChatMessage` instance from your chat function. This time, make sure to set the `options` key specifying the preset responses.\n\nAs shown in the schema for `gr.ChatMessage` above, the value corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an opt", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": " corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an optional `label` (if provided, this is the text displayed as the preset response instead of the `value`). \n\nThis example illustrates how to use preset responses:\n\n$code_chatinterface_options\n\n", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may wish to modify the value of the chatbot with your own events, other than those prebuilt in the `gr.ChatInterface`. 
For example, you could create a dropdown that prefills the chat history with certain conversations or add a separate button to clear the conversation history. The `gr.ChatInterface` supports these events, but you need to use the `gr.ChatInterface.chatbot_value` as the input or output component in such events. In this example, we use a `gr.Radio` component to prefill the chatbot with certain conversations:\n\n$code_chatinterface_prefill\n\n", "heading1": "Modifying the Chatbot Value Directly", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Once you've built your Gradio chat interface and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, then you can query it with a simple API. The API route will be the name of the function you pass to the ChatInterface. So if `gr.ChatInterface(respond)`, then the API route is `/respond`. The endpoint just expects the user's message and will return the response, internally keeping track of the message history.\n\n![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f)\n\nTo use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client). Or, you can deploy your Chat Interface to other platforms, such as a:\n\n* Slack bot [[tutorial]](../guides/creating-a-slack-bot-from-a-gradio-app)\n* Website widget [[tutorial]](../guides/creating-a-website-widget-from-a-gradio-chatbot)\n\n", "heading1": "Using Your Chatbot via API", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You can enable persistent chat history for your ChatInterface, allowing users to maintain multiple conversations and easily switch between them. 
When enabled, conversations are stored locally and privately in the user's browser using local storage. So if you deploy a ChatInterface e.g. on [Hugging Face Spaces](https://hf.space), each user will have their own separate chat history that won't interfere with other users' conversations. This means multiple users can interact with the same ChatInterface simultaneously while maintaining their own private conversation histories.\n\nTo enable this feature, simply set `gr.ChatInterface(save_history=True)` (as shown in the example in the next section). Users will then see their previous conversations in a side panel and can continue any previous chat or start a new one.\n\n", "heading1": "Chat History", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To gather feedback on your chat model, set `gr.ChatInterface(flagging_mode=\"manual\")` and users will be able to thumbs-up or thumbs-down assistant responses. Each flagged response, along with the entire chat history, will get saved in a CSV file in the app working directory (this can be configured via the `flagging_dir` parameter). \n\nYou can also change the feedback options via `flagging_options` parameter. The default options are \"Like\" and \"Dislike\", which appear as the thumbs-up and thumbs-down icons. Any other options appear under a dedicated flag icon. This example shows a ChatInterface that has both chat history (mentioned in the previous section) and user feedback enabled:\n\n$code_chatinterface_streaming_echo\n\nNote that in this example, we set several flagging options: \"Like\", \"Spam\", \"Inappropriate\", \"Other\". Because the case-sensitive string \"Like\" is one of the flagging options, the user will see a thumbs-up icon next to each assistant message. 
The three other flagging options will appear in a dropdown under the flag icon.\n\n", "heading1": "Collecting User Feedback", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Now that you've learned about the `gr.ChatInterface` class and how it can be used to create chatbot UIs quickly, we recommend reading one of the following:\n\n* [Our next Guide](../guides/chatinterface-examples) shows examples of how to use `gr.ChatInterface` with popular LLM libraries.\n* If you'd like to build very custom chat applications from scratch, you can build them using the low-level Blocks API, as [discussed in this Guide](../guides/creating-a-custom-chatbot-with-blocks).\n* Once you've deployed your Gradio Chat Interface, it's easy to use in other applications because of the built-in API. Here's a tutorial on [how to deploy a Gradio chat interface as a Discord bot](../guides/creating-a-discord-bot-from-a-gradio-app).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "An MCP (Model Context Protocol) server is a standardized way to expose tools so that they can be used by LLMs. A tool can provide an LLM functionality that it does not have natively, such as the ability to generate images or calculate the prime factors of a number. \n\n", "heading1": "What is an MCP Server?", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "LLMs are famously not great at counting the number of letters in a word (e.g. the number of \"r\"-s in \"strawberry\"). But what if we equip them with a tool to help? 
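The counting logic itself is plain Python; a minimal sketch of the function that the `$code_letter_counter` demo wraps (the exact demo may differ):

```python
def letter_counter(word: str, letter: str) -> int:
    """Count how many times `letter` appears in `word` (case-insensitive).

    A sketch of the counting logic only; the full demo also wraps this
    in a Gradio interface so it can be exposed as an MCP tool.
    """
    return word.lower().count(letter.lower())
```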
Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:\n\n$code_letter_counter\n\nNotice that we have: (1) included a detailed docstring for our function, and (2) set `mcp_server=True` in `.launch()`. This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:\n\n1. Start the regular Gradio web interface\n2. Start the MCP server\n3. Print the MCP server URL in the console\n\nThe MCP server will be accessible at:\n```\nhttp://your-server:port/gradio_api/mcp/\n```\n\nGradio automatically converts the `letter_counter` function into an MCP tool that can be used by LLMs. The docstring of the function and the type hints of arguments will be used to generate the description of the tool and its parameters. The name of the function will be used as the name of your tool. Any initial values you provide to your input components (e.g. \"strawberry\" and \"r\" in the `gr.Textbox` components above) will be used as the default values if your LLM doesn't specify a value for that particular input parameter.\n\nNow, all you need to do is add this URL endpoint to your MCP Client (e.g. Claude Desktop, Cursor, or Cline), which typically means pasting this config in the settings:\n\n```\n{\n \"mcpServers\": {\n \"gradio\": {\n \"url\": \"http://your-server:port/gradio_api/mcp/\"\n }\n }\n}\n```\n\n(By the way, you can find the exact config to copy-paste by going to the \"View API\" link in the footer of your Gradio app, and then clicking on \"MCP\").\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)\n\n", "heading1": "Example: Counting Letters in a Word", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "1. 
**Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema or go to the \"View API\" link in the footer of your Gradio app, and then click on \"MCP\".\n\n\n2. **Environment variable support**. There are two ways to enable the MCP server functionality:\n\n* Using the `mcp_server` parameter, as shown above:\n ```python\n demo.launch(mcp_server=True)\n ```\n\n* Using environment variables:\n ```bash\n export GRADIO_MCP_SERVER=True\n ```\n\n3. **File Handling**: The Gradio MCP server automatically handles file data conversions, including:\n - Processing image files and returning them in the correct format\n - Managing temporary file storage\n\n By default, the Gradio MCP server accepts input images and files as full URLs (\"http://...\" or \"https://...\"). For convenience, an additional STDIO-based MCP server is also generated, which can be used to upload files to any remote Gradio app and which returns a URL that can be used for subsequent tool calls.\n\n4. **Hosted MCP Servers on \ud83e\udd17 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools. Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:\n\n```\n{\n \"mcpServers\": {\n \"gradio\": {\n \"url\": \"https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/\"\n }\n }\n}\n```\n\n\n\n\n", "heading1": "Key features of the Gradio <> MCP Integration", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "If there's an existing Space that you'd like to use as an MCP server, you'll need to do three things:\n\n1. 
First, [duplicate the Space](https://huggingface.co/docs/hub/en/spaces-more-ways-to-create#duplicating-a-space) if it is not your own Space. This will allow you to make changes to the app. If the Space requires a GPU, set the hardware of the duplicated Space to be the same as the original Space. You can make it either a public Space or a private Space, since it is possible to use either as an MCP server, as described below.\n2. Then, add docstrings to the functions that you'd like the LLM to be able to call as a tool. The docstring should be in the same format as the example code above.\n3. Finally, add `mcp_server=True` in `.launch()`.\n\nThat's it!\n\n", "heading1": "Converting an Existing Space", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "You can use either a public Space or a private Space as an MCP server. If you'd like to use a private Space as an MCP server (or a ZeroGPU Space with your own quota), then you will need to provide your [Hugging Face token](https://huggingface.co/settings/tokens) when you make your request. To do this, simply add it as a header in your config like this:\n\n```\n{\n \"mcpServers\": {\n \"gradio\": {\n \"url\": \"https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/\",\n \"headers\": {\n \"Authorization\": \"Bearer <YOUR-HF-TOKEN>\"\n }\n }\n }\n}\n```\n\n", "heading1": "Private Spaces", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users. \n\nGradio allows you to access the underlying `starlette.Request` that has made the tool call, which means that you can access headers, originating IP address, or any other information that is part of the network request. 
## Authentication and Credentials

You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users.

Gradio allows you to access the underlying `starlette.Request` that made the tool call, which means that you can access headers, the originating IP address, or any other information that is part of the network request. To do this, simply add a parameter of type `gr.Request` to your function, and Gradio will automatically inject the request object as that parameter.

Here's an example:

```py
import gradio as gr

def echo_headers(x, request: gr.Request):
    return str(dict(request.headers))

gr.Interface(echo_headers, "textbox", "textbox").launch(mcp_server=True)
```

This MCP server simply ignores the user's input and echoes back all of the headers from the user's request. One can build more complex apps using the same idea. See the [docs on `gr.Request`](https://www.gradio.app/main/docs/gradio/request) for more information (note that only the core Starlette attributes of the `gr.Request` object will be present; attributes such as Gradio's `.session_hash` will not be present).

### Using the `gr.Header` class

A common pattern in MCP server development is to use authentication headers to call services on behalf of your users. Instead of using a `gr.Request` object as in the example above, you can use a `gr.Header` argument. Gradio will automatically extract that header from the incoming request (if it exists) and pass it to your function.

In the example below, the `X-API-Token` header is extracted from the incoming request and passed in as the `x_api_token` argument to `make_api_request_on_behalf_of_user`.

The benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server!
See the image below:

```python
import gradio as gr

def make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):
    """Make a request to everyone's favorite API.

    Args:
        prompt: The prompt to send to the API.
    Returns:
        The response from the API.
    Raises:
        AssertionError: If the API token is not valid.
    """
    return "Hello from the API" if not x_api_token else "Hello from the API with token!"


demo = gr.Interface(
    make_api_request_on_behalf_of_user,
    [
        gr.Textbox(label="Prompt"),
    ],
    gr.Textbox(label="Response"),
)

demo.launch(mcp_server=True)
```

![MCP Header Connection Page](https://github.com/user-attachments/assets/e264eedf-a91a-476b-880d-5be0d5934134)

### Sending Progress Updates

The Gradio MCP server automatically sends progress updates to your MCP client based on the queue in the Gradio application. If you'd like to send custom progress updates, you can do so using the same mechanism you would use to display progress updates in the UI of your Gradio app: the `gr.Progress` class!

Here's an example of how to do this:

$code_mcp_progress

[Here are the docs](https://www.gradio.app/docs/gradio/progress) for the `gr.Progress` class, which can also automatically track `tqdm` calls.

Note: by default, progress notifications are enabled for all MCP tools, even if the corresponding Gradio functions do not include a `gr.Progress`. However, this can add some overhead to the MCP tool (typically ~500 ms).
To disable progress notifications, you can set `queue=False` in your Gradio event handler to skip the overhead of subscribing to the queue's progress updates.

## Modifying Tool Descriptions

Gradio automatically sets the tool name based on the name of your function, and the description from the docstring of your function. But you may want to change how the description appears to your LLM. You can do this using the `api_description` parameter in `Interface`, `ChatInterface`, or any event listener. This parameter takes three kinds of values:

* `None` (default): the tool description is automatically created from the docstring of the function (or from its parent's docstring if the function does not have a docstring but inherits from a method that does).
* `False`: no tool description appears to the LLM.
* `str`: an arbitrary string to use as the tool description.

In addition to modifying the tool descriptions, you can also toggle which tools appear to the LLM. You can do this by setting the `show_api` parameter, which is `True` by default. Setting it to `False` hides the endpoint from the API docs and from the MCP server.
If you expose multiple tools, users of your app will also be able to toggle which tools they'd like to add to their MCP server by checking boxes in the "view MCP or API" panel.

Here's an example that shows the `api_description` and `show_api` parameters in action:

$code_mcp_tools

## MCP Resources and Prompts

In addition to tools (which execute functions and are the default for any function exposed through the Gradio MCP integration), MCP supports two other important primitives: **resources** (for exposing data) and **prompts** (for defining reusable templates). Gradio provides decorators to easily create MCP servers with all three capabilities.

### Creating MCP Resources

Use the `@gr.mcp.resource` decorator on any function to expose data through your Gradio app. Resources can be static (always available at a fixed URI) or templated (with parameters in the URI).

$code_mcp_resources_and_prompts

In this example:
- The `get_greeting` function is exposed as a resource with a URI template `greeting://{name}`
- When an MCP client requests `greeting://Alice`, it receives "Hello, Alice!"
- Resources can also return images and other types of files or binary data. To return non-text data, specify the `mime_type` parameter in `@gr.mcp.resource()` and return a Base64 string from your function.
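To make the Base64 return value concrete, here is a small standalone sketch of encoding binary data the way a non-text resource would return it. The `as_base64_resource` helper is hypothetical, not part of Gradio; the PNG signature bytes stand in for real image data:

```python
import base64

def as_base64_resource(data: bytes) -> str:
    """Encode binary payload as a Base64 string, as a resource declared
    with a non-text mime_type would return it (hypothetical helper)."""
    return base64.b64encode(data).decode("ascii")

png_magic = b"\x89PNG\r\n\x1a\n"  # the 8-byte PNG file signature, as stand-in data
encoded = as_base64_resource(png_magic)
```

The MCP client decodes the string back into bytes using the declared `mime_type` to interpret them.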
### Creating MCP Prompts

Prompts help standardize how users interact with your tools. They're especially useful for complex workflows that require specific formatting or multiple steps.

The `greet_user` function in the example above is decorated with `@gr.mcp.prompt()`, which:
- Makes it available as a prompt template in MCP clients
- Accepts parameters (`name` and `style`) to customize the output
- Returns a structured prompt that guides the LLM's behavior

## Adding MCP-Only Functions

So far, all of our MCP tools, resources, and prompts have corresponded to event listeners in the UI. This works well for functions that directly update the UI, but may not work if you wish to expose a "pure logic" function that should return raw data (e.g. a JSON object) without directly causing a UI update.

To expose such an MCP tool, you can create a pure Gradio API endpoint using `gr.api` (see the [full docs here](https://www.gradio.app/main/docs/gradio/api)). Here's an example of creating an MCP tool that slices a list:

$code_mcp_tool_only

Note that if you use this approach, your function signature must be fully typed, including the return value, as the signature is used to determine the typing information for the MCP tool.
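As a sketch of what such a fully typed function could look like (a hypothetical stand-in for the `$code_mcp_tool_only` demo, not its actual code), note that every parameter and the return value carry annotations:

```python
def slice_list(items: list[int], start: int, end: int) -> list[int]:
    """Return the sublist items[start:end].

    Args:
        items (list[int]): The list to slice.
        start (int): Start index (inclusive).
        end (int): End index (exclusive).
    """
    return items[start:end]
```

Without the return annotation, Gradio would not be able to derive complete typing information for the MCP tool.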
## Gradio with FastMCP

In some cases, you may decide not to use Gradio's built-in integration and instead manually create a FastMCP server that calls a Gradio app. This approach is useful when you want to:

- Store state / identify users between calls instead of treating every tool call completely independently
- Start the Gradio app's MCP server only when a tool is called (if you are running multiple Gradio apps locally and want to save memory / GPU)

This is very doable thanks to the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) and the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)'s `FastMCP` class. Here's an example of creating a custom MCP server that connects to various Gradio apps hosted on [Hugging Face Spaces](https://huggingface.co/spaces) using the `stdio` protocol:

```python
from mcp.server.fastmcp import FastMCP
from gradio_client import Client

mcp = FastMCP("gradio-spaces")

# Cache Gradio clients so each Space is only connected to once
clients = {}

def get_client(space_id: str) -> Client:
    """Get or create a Gradio client for the specified Space."""
    if space_id not in clients:
        clients[space_id] = Client(space_id)
    return clients[space_id]


@mcp.tool()
async def generate_image(prompt: str, space_id: str = "ysharma/SanaSprint") -> str:
    """Generate an image using SanaSprint.

    Args:
        prompt: Text prompt describing the image to generate
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        prompt=prompt,
        model_size="1.6B",
        seed=0,
        randomize_seed=True,
        width=1024,
        height=1024,
        guidance_scale=4.5,
        num_inference_steps=2,
        api_name="/infer"
    )
    return result

@mcp.tool()
async def run_dia_tts(prompt: str, space_id: str = "ysharma/Dia-1.6B") -> str:
    """Text-to-speech synthesis.

    Args:
        prompt: Text prompt describing the conversation between speakers S1, S2
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        text_input=f"""{prompt}""",
        audio_prompt_input=None,
        max_new_tokens=3072,
        cfg_scale=3,
        temperature=1.3,
        top_p=0.95,
        cfg_filter_top_k=30,
        speed_factor=0.94,
        api_name="/generate_audio"
    )
    return result


if __name__ == "__main__":
    import sys
    import io
    # Force UTF-8 output so the stdio transport doesn't choke on non-ASCII text
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')

    mcp.run(transport='stdio')
```

This server exposes two tools:
1. `run_dia_tts` - Generates a spoken conversation from a transcript of the form `[S1]first-sentence. [S2]second-sentence. [S1]...`
2. `generate_image` - Generates images using a fast text-to-image model

To use this MCP server with Claude Desktop (as the MCP client):

1. Save the code to a file (e.g., `gradio_mcp_server.py`)
2. Install the required dependencies: `pip install mcp gradio-client`
3. Configure Claude Desktop to use your server by editing the configuration file at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "gradio-spaces": {
      "command": "python",
      "args": [
        "/absolute/path/to/gradio_mcp_server.py"
      ]
    }
  }
}
```
4. Restart Claude Desktop

Now, when you ask Claude to generate an image or synthesize speech, it can use your Gradio-powered tools to accomplish these tasks.

## Troubleshooting your MCP Servers

The MCP protocol is still in its infancy and you might see issues connecting to an MCP server that you've built. We generally recommend using the [MCP Inspector Tool](https://github.com/modelcontextprotocol/inspector) to try connecting to and debugging your MCP server.

Here are some things that may help:

**1. Ensure that you've provided type hints and valid docstrings for your functions**

As mentioned earlier, Gradio reads the docstrings of your functions and the type hints of input arguments to generate the tool and parameter descriptions. A valid function and docstring looks like this (note the "Args:" block with indented parameter names underneath):

```py
def image_orientation(image: Image.Image) -> str:
    """
    Returns whether image is portrait or landscape.

    Args:
        image (Image.Image): The image to check.
    """
    return "Portrait" if image.height > image.width else "Landscape"
```

Note: You can preview the schema that is created for your MCP server by visiting the `http://your-server:port/gradio_api/mcp/schema` URL.

**2. Try accepting input arguments as `str`**

Some MCP clients do not recognize parameters that are numeric or other complex types, but all of the MCP clients that we've tested accept `str` input parameters.
When in doubt, change your input parameter to `str` and then cast it to a specific type inside the function, as in this example:

```py
def prime_factors(n: str):
    """
    Compute the prime factorization of a positive integer.

    Args:
        n (str): The integer to factorize. Must be greater than 1.
    """
    n_int = int(n)
    if n_int <= 1:
        raise ValueError("Input must be an integer greater than 1.")

    factors = []
    while n_int % 2 == 0:
        factors.append(2)
        n_int //= 2

    divisor = 3
    while divisor * divisor <= n_int:
        while n_int % divisor == 0:
            factors.append(divisor)
            n_int //= divisor
        divisor += 2

    if n_int > 1:
        factors.append(n_int)

    return factors
```

**3. Ensure that your MCP client supports streamable HTTP**

Some MCP clients do not yet support streamable-HTTP-based MCP servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install [Node.js](https://nodejs.org/en/download/). Then, add the following to your MCP client config:

```json
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://your-server:port/gradio_api/mcp/"
      ]
    }
  }
}
```

**4. Restart your MCP client and MCP server**

Some MCP clients require you to restart them every time you update the MCP configuration. Other times, if the connection between the MCP client and server breaks, you might need to restart the MCP server.
If all else fails, try restarting both your MCP client and MCP server!

## What is MCP?

The Model Context Protocol (MCP) standardizes how applications provide context to LLMs. It allows Claude to interact with external tools, like image generators, file systems, or APIs.

## Prerequisites

- Python 3.10+
- An Anthropic API key
- Basic understanding of Python programming

## Setup

First, install the required packages:

```bash
pip install gradio anthropic mcp
```

Create a `.env` file in your project directory and add your Anthropic API key:

```
ANTHROPIC_API_KEY=your_api_key_here
```
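The client code later in this guide calls `load_dotenv()` to pick this file up. If you're curious what that does, here is a minimal stdlib-only sketch of the same idea (simplified; python-dotenv handles quoting, interpolation, and more). The `DEMO_API_KEY` name is a throwaway used only for the demonstration:

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    """Minimal sketch of what python-dotenv's load_dotenv does: read
    KEY=VALUE lines into os.environ without overwriting existing values."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Demonstrate with a throwaway file and a throwaway variable name
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# comment lines are skipped\nDEMO_API_KEY=sk-demo-123\n")
load_env_file(f.name)
```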
## Part 1: Building the MCP Server

The server provides tools that Claude can use. In this example, we'll create a server that generates images through [a HuggingFace Space](https://huggingface.co/spaces/ysharma/SanaSprint).

Create a file named `gradio_mcp_server.py`:

```python
from mcp.server.fastmcp import FastMCP
import json
import sys
import io
from gradio_client import Client

# Force UTF-8 so the stdio transport handles non-ASCII text
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')

mcp = FastMCP("huggingface_spaces_image_display")

@mcp.tool()
async def generate_image(prompt: str, width: int = 512, height: int = 512) -> str:
    """Generate an image using the SanaSprint model.

    Args:
        prompt: Text prompt describing the image to generate
        width: Image width (default: 512)
        height: Image height (default: 512)
    """
    client = Client("https://ysharma-sanasprint.hf.space/")

    try:
        result = client.predict(
            prompt,
            "0.6B",
            0,
            True,
            width,
            height,
            4.0,
            2,
            api_name="/infer"
        )

        if isinstance(result, list) and len(result) >= 1:
            image_data = result[0]
            if isinstance(image_data, dict) and "url" in image_data:
                return json.dumps({
                    "type": "image",
                    "url": image_data["url"],
                    "message": f"Generated image for prompt: {prompt}"
                })

        return json.dumps({
            "type": "error",
            "message": "Failed to generate image"
        })

    except Exception as e:
        return json.dumps({
            "type": "error",
            "message": f"Error generating image: {str(e)}"
        })

if __name__ == "__main__":
    mcp.run(transport='stdio')
```
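Because the tool returns its result as a JSON string, the client on the other side must parse that text to find the image URL. A small sketch of the round trip (the URL is a placeholder, not a real generated image):

```python
import json

# What the tool returns on success: a JSON string, not a Python dict.
tool_result = json.dumps({
    "type": "image",
    "url": "https://example.com/generated.png",  # placeholder URL
    "message": "Generated image for prompt: a tabby cat",
})

# What a client does with it: parse, then branch on the "type" field.
payload = json.loads(tool_result)
is_image = payload.get("type") == "image" and "url" in payload
```

The error branches return the same shape with `"type": "error"`, so a client can handle both cases with one parse.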
What this server does:

1. It creates an MCP server that exposes a `generate_image` tool
2. The tool connects to the SanaSprint model hosted on HuggingFace Spaces
3. It handles the asynchronous nature of image generation by polling for results
4. When an image is ready, it returns the URL in a structured JSON format

## Part 2: Building the MCP Client with Gradio

Now let's create a Gradio chat interface as the MCP client that connects Claude to our MCP server.

Create a file named `app.py`:

```python
import asyncio
import os
import json
from typing import List, Dict, Any, Union
from contextlib import AsyncExitStack

import gradio as gr
from gradio.components.chatbot import ChatMessage
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

class MCPClientWrapper:
    def __init__(self):
        self.session = None
        self.exit_stack = None
        self.anthropic = Anthropic()
        self.tools = []

    def connect(self, server_path: str) -> str:
        return loop.run_until_complete(self._connect(server_path))

    async def _connect(self, server_path: str) -> str:
        if self.exit_stack:
            await self.exit_stack.aclose()

        self.exit_stack = AsyncExitStack()

        is_python = server_path.endswith('.py')
        command = "python" if is_python else "node"

        server_params = StdioServerParameters(
            command=command,
            args=[server_path],
            env={"PYTHONIOENCODING": "utf-8", "PYTHONUNBUFFERED": "1"}
        )

        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport

        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
        await self.session.initialize()

        response = await self.session.list_tools()
        self.tools = [{
            "name": tool.name,
            "description": tool.description,
            "input_schema": tool.inputSchema
        } for tool in response.tools]

        tool_names = [tool["name"] for tool in self.tools]
        return f"Connected to MCP server. Available tools: {', '.join(tool_names)}"

    def process_message(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]) -> tuple:
        if not self.session:
            return history + [
                {"role": "user", "content": message},
                {"role": "assistant", "content": "Please connect to an MCP server first."}
            ], gr.Textbox(value="")

        new_messages = loop.run_until_complete(self._process_query(message, history))
        return history + [{"role": "user", "content": message}] + new_messages, gr.Textbox(value="")

    async def _process_query(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]):
        claude_messages = []
        for msg in history:
            if isinstance(msg, ChatMessage):
                role, content = msg.role, msg.content
            else:
                role, content = msg.get("role"), msg.get("content")

            if role in ["user", "assistant", "system"]:
                claude_messages.append({"role": role, "content": content})

        claude_messages.append({"role": "user", "content": message})

        response = self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1000,
            messages=claude_messages,
            tools=self.tools
        )

        result_messages = []

        for content in response.content:
            if content.type == 'text':
                result_messages.append({
                    "role": "assistant",
                    "content": content.text
                })

            elif content.type == 'tool_use':
                tool_name = content.name
                tool_args = content.input

                result_messages.append({
                    "role": "assistant",
                    "content": f"I'll use the {tool_name} tool to help answer your question.",
                    "metadata": {
                        "title": f"Using tool: {tool_name}",
                        "log": f"Parameters: {json.dumps(tool_args, ensure_ascii=True)}",
                        "status": "pending",
                        "id": f"tool_call_{tool_name}"
                    }
                })

                result_messages.append({
                    "role": "assistant",
                    "content": "```json\n" + json.dumps(tool_args, indent=2, ensure_ascii=True) + "\n```",
                    "metadata": {
                        "parent_id": f"tool_call_{tool_name}",
                        "id": f"params_{tool_name}",
                        "title": "Tool Parameters"
                    }
                })

                result = await self.session.call_tool(tool_name, tool_args)

                if result_messages and "metadata" in result_messages[-2]:
                    result_messages[-2]["metadata"]["status"] = "done"

                result_messages.append({
                    "role": "assistant",
                    "content": "Here are the results from the tool:",
                    "metadata": {
                        "title": f"Tool Result for {tool_name}",
                        "status": "done",
                        "id": f"result_{tool_name}"
                    }
                })

                result_content = result.content
                if isinstance(result_content, list):
                    result_content = "\n".join(str(item) for item in result_content)

                try:
                    result_json = json.loads(result_content)
                    if isinstance(result_json, dict) and "type" in result_json:
                        if result_json["type"] == "image" and "url" in result_json:
                            result_messages.append({
                                "role": "assistant",
                                "content": {"path": result_json["url"], "alt_text": result_json.get("message", "Generated image")},
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"image_{tool_name}",
                                    "title": "Generated Image"
                                }
                            })
                        else:
                            result_messages.append({
                                "role": "assistant",
                                "content": "```\n" + result_content + "\n```",
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"raw_result_{tool_name}",
                                    "title": "Raw Output"
                                }
                            })
                except (json.JSONDecodeError, TypeError):
                    result_messages.append({
                        "role": "assistant",
                        "content": "```\n" + result_content + "\n```",
                        "metadata": {
                            "parent_id": f"result_{tool_name}",
                            "id": f"raw_result_{tool_name}",
                            "title": "Raw Output"
                        }
                    })

                claude_messages.append({"role": "user", "content": f"Tool result for {tool_name}: {result_content}"})
                next_response = self.anthropic.messages.create(
                    model="claude-3-5-sonnet-20241022",
                    max_tokens=1000,
                    messages=claude_messages,
                )

                if next_response.content and next_response.content[0].type == 'text':
                    result_messages.append({
                        "role": "assistant",
                        "content": next_response.content[0].text
                    })

        return result_messages

client = MCPClientWrapper()

def gradio_interface():
    with gr.Blocks(title="MCP Image Generation Client") as demo:
        gr.Markdown("MCP Image Generation Assistant")
        gr.Markdown("Connect to your MCP server and chat with the assistant")

        with gr.Row(equal_height=True):
            with gr.Column(scale=4):
                server_path = gr.Textbox(
                    label="Server Script Path",
                    placeholder="Enter path to server script (e.g., gradio_mcp_server.py)",
                    value="gradio_mcp_server.py"
                )
            with gr.Column(scale=1):
                connect_btn = gr.Button("Connect")

        status = gr.Textbox(label="Connection Status", interactive=False)

        chatbot = gr.Chatbot(
            value=[],
            height=500,
            show_copy_button=True,
            avatar_images=("👤", "🤖")
        )
        with gr.Row(equal_height=True):
            msg = gr.Textbox(
                label="Your Question",
                placeholder="Describe the image you want (e.g., A mountain landscape at sunset)",
                scale=4
            )
            clear_btn = gr.Button("Clear Chat", scale=1)

        connect_btn.click(client.connect, inputs=server_path, outputs=status)
        msg.submit(client.process_message, [msg, chatbot], [chatbot, msg])
        clear_btn.click(lambda: [], None, chatbot)

    return demo

if __name__ == "__main__":
    if not os.getenv("ANTHROPIC_API_KEY"):
        print("Warning: ANTHROPIC_API_KEY not found in environment. Please set it in your .env file.")

    interface = gradio_interface()
    interface.launch(debug=True)
```

What this MCP client does:

- Creates a friendly Gradio chat interface for user interaction
- Connects to the MCP server you specify
- Handles conversation history and message formatting
- Makes calls to the Claude API with tool definitions
- Processes tool-usage requests from Claude
- Displays images and other tool outputs in the chat
- Sends tool results back to Claude for interpretation
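The history-conversion step inside `_process_query` can be summarized with a small standalone sketch (simplified: the real method also unpacks `ChatMessage` objects, and `to_claude_messages` is a hypothetical name for the inline loop):

```python
def to_claude_messages(history: list[dict], new_message: str) -> list[dict]:
    """Keep only the roles Claude accepts, then append the new user turn."""
    allowed = {"user", "assistant", "system"}
    messages = [
        {"role": m["role"], "content": m["content"]}
        for m in history
        if m.get("role") in allowed
    ]
    messages.append({"role": "user", "content": new_message})
    return messages

msgs = to_claude_messages(
    [{"role": "user", "content": "hi"},
     {"role": "assistant", "content": "hello"}],
    "Generate a cat picture",
)
```

Filtering on role matters because Gradio chat history can contain tool-metadata messages that the Anthropic Messages API would reject.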
## Running the Application

To run your MCP application:

- Start a terminal window and run the MCP client:
  ```bash
  python app.py
  ```
- Open the Gradio interface at the URL shown (typically http://127.0.0.1:7860)
- In the Gradio interface, you'll see a field for the MCP server path. It should default to `gradio_mcp_server.py`.
- Click "Connect" to establish the connection to the MCP server.
- You should see a message indicating the server connection was successful.

## Example Usage

Now you can chat with Claude, and it will be able to generate images based on your descriptions.

Try prompts like:
- "Can you generate an image of a mountain landscape at sunset?"
- "Create an image of a cool tabby cat"
- "Generate a picture of a panda wearing sunglasses"

Claude will recognize these as image generation requests and automatically use the `generate_image` tool from your MCP server.

## How it Works

Here's the high-level flow of what happens during a chat session:

1. Your prompt enters the Gradio interface
2. The client forwards your prompt to Claude
3. Claude analyzes the prompt and decides to use the `generate_image` tool
4. The client sends the tool call to the MCP server
5. The server calls the external image generation API
6. The image URL is returned to the client
7. The client sends the image URL back to Claude
8. Claude provides a response that references the generated image
9. The Gradio chat interface displays both Claude's response and the image

## Next Steps

Now that you have a working MCP system, here are some ideas to extend it:

- Add more tools to your server
- Improve error handling
- Add private Hugging Face Spaces with authentication for secure tool access
- Create custom tools that connect to your own APIs or services
- Implement streaming responses for better user experience

## Conclusion

Congratulations! You've successfully built an MCP client and server that allow Claude to generate images based on text prompts. This is just the beginning of what you can do with Gradio and MCP: you can build complex AI applications that use Claude, or any other powerful LLM, to interact with virtually any external tool or service.

Read our other guide on using [Gradio apps as MCP Servers](./building-mcp-server-with-gradio).

## Using the File Upload MCP Server

As of version 5.36.0, Gradio comes with a built-in MCP server that can upload files to a running Gradio application. In the `View API` page of the server, you should see a code snippet for connecting to the upload server if any of the tools require file inputs.

The command to start the MCP server takes two arguments:

- The URL (or Hugging Face Space ID) of the Gradio application to upload the files to. In this case, `http://127.0.0.1:7860`.
- The local directory on your computer from which the server is allowed to upload files (``).
For security, please make this directory as narrow as possible to prevent unintended file uploads.\n\nAs stated in the image, you need to install [uv](https://docs.astral.sh/uv/getting-started/installation/) (a python package manager that can run python scripts) before connecting from your MCP client. \n\nIf you have gradio installed locally and you don't want to install uv, you can replace the `uvx` command with the path to gradio binary. It should look like this:\n\n```json\n\"upload-files\": {\n \"command\": \"\",\n \"args\": [\n \"upload-mcp\",\n \"http://localhost:7860/\",\n \"/Users/freddyboulton/Pictures\"\n ]\n}\n```\n\nAfter connecting to the upload server, your LLM agent will know when to upload files for you automatically!\n\n\n\n", "heading1": "Using the File Upload MCP Server", "source_page_url": "https://gradio.app/guides/file-upload-mcp", "source_page_title": "Mcp - File Upload Mcp Guide"}, {"text": "In this guide, we've covered how you can connect to the Upload File MCP Server so that your agent can upload files before using Gradio MCP servers. Remember to set the `` as small as possible to prevent unintended file uploads!\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/file-upload-mcp", "source_page_title": "Mcp - File Upload Mcp Guide"}, {"text": "If you're using LLMs in your workflow, adding this server will augment them with just the right context on gradio - which makes your experience a lot faster and smoother. \n\n\n\nThe server is running on Spaces and was launched entirely using Gradio, you can see all the code [here](https://huggingface.co/spaces/gradio/docs-mcp). For more on building an mcp server with gradio, see the [previous guide](./building-an-mcp-client-with-gradio). \n\n", "heading1": "Why an MCP Server?", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "For clients that support streamable HTTP (e.g. 
Cursor, Windsurf, Cline), simply add the following configuration to your MCP config:\n\n```json\n{\n \"mcpServers\": {\n \"gradio\": {\n \"url\": \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/\"\n }\n }\n}\n```\n\nWe've included step-by-step instructions for Cursor below, but you can consult the docs for Windsurf [here](https://docs.windsurf.com/windsurf/mcp), and Cline [here](https://docs.cline.bot/mcp-servers/configuring-mcp-servers) which are similar to set up. \n\n\n\nCursor \n\n1. Make sure you're using the latest version of Cursor, and go to Cursor > Settings > Cursor Settings > MCP \n2. Click on '+ Add new global MCP server' \n3. Copy paste this json into the file that opens and then save it. \n```json\n{\n \"mcpServers\": {\n \"gradio\": {\n \"url\": \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/\"\n }\n }\n}\n```\n4. That's it! You should see the tools load and the status go green in the settings page. You may have to click the refresh icon or wait a few seconds. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/cursor-mcp.png)\n\nClaude Desktop\n\n1. Since Claude Desktop only supports stdio, you will need to [install Node.js](https://nodejs.org/en/download/) to get this to work. \n2. Make sure you're using the latest version of Claude Desktop, and go to Claude > Settings > Developer > Edit Config \n3. Open the file with your favorite editor and copy paste this json, then save the file. \n```json\n{\n \"mcpServers\": {\n \"gradio\": {\n \"command\": \"npx\",\n \"args\": [\n \"mcp-remote\",\n \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/\"\n ]\n }\n }\n}\n```\n4. Quit and re-open Claude Desktop, and you should be good to go. You should see it loaded in the Search and Tools icon or on the developer settings page. 
\n \n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/claude-desktop-mcp.gif)\n\n", "heading1": "Installing in the Clients", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "There are currently only two tools in the server: `gradio_docs_mcp_load_gradio_docs` and `gradio_docs_mcp_search_gradio_docs`. \n\n1. `gradio_docs_mcp_load_gradio_docs`: This tool takes no arguments and will load an /llms.txt style summary of Gradio's latest, full documentation. Very useful context the LLM can parse before answering questions or generating code. \n\n2. `gradio_docs_mcp_search_gradio_docs`: This tool takes a query as an argument and will run embedding search on Gradio's docs, guides, and demos to return the most useful context for the LLM to parse.", "heading1": "Tools", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "**Prerequisite**: Gradio requires [Python 3.10 or higher](https://www.python.org/downloads/).\n\n\nWe recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:\n\n```bash\npip install --upgrade gradio\n```\n\n\nTip: It is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems are provided here. \n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. Let's write your first Gradio app:\n\n\n$code_hello_world_4\n\n\nTip: We shorten the imported name from gradio to gr. This is a widely adopted convention for better readability of code. \n\nNow, run your code. 
If you've written the Python code in a file named `app.py`, then you would run `python app.py` from the terminal.\n\nThe demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook.\n\n$demo_hello_world_4\n\nType your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right.\n\nTip: When developing locally, you can run your Gradio app in hot reload mode, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in gradio before the name of the file instead of python. In the example above, you would type: `gradio app.py` in your terminal. You can also enable vibe mode by using the --vibe flag, e.g. gradio --vibe app.py, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. Learn more in the Hot Reloading Guide.\n\n\n**Understanding the `Interface` Class**\n\nYou'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. The `Interface` class is designed to create demos for machine learning models which accept one or more inputs, and return one or more outputs. \n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. The num", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "turn one or more outputs. \n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. 
The number of components should match the number of arguments in your function.\n- `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function.\n\nThe `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.\n\nThe `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/introduction) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications. \n\nTip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`\"textbox\"`) or an instance of the class (`gr.Textbox()`).\n\nIf your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos.\n\nWe'll dive deeper into the `gr.Interface` on our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).\n\n", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "What good is a beautiful demo if you can't share it? Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. 
Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. Let's revisit our example demo, but change the last line as follows:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(share=True)  # Share your demo with just 1 extra parameter \ud83d\ude80\n```\n\nWhen you run this code, a public URL will be generated for your demo in a matter of seconds, something like:\n\n\ud83d\udc49   `https://a23dsf231adb.gradio.live`\n\nNow, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.\n\nTo learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).\n\n\n", "heading1": "Sharing Your Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?\n\nCustom Demos with `gr.Blocks`\n\nGradio offers a low-level approach for designing web apps with more customizable layouts and data flows with the `gr.Blocks` class. Blocks supports things like controlling where components appear on the page, handling multiple data flows and more complex interactions (e.g. outputs can serve as inputs to other functions), and updating properties/visibility of components based on user interaction \u2014 still all in Python. \n\nYou can build very custom and complex applications using `gr.Blocks()`. For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. 
We dive deeper into the `gr.Blocks` on our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).\n\nChatbots with `gr.ChatInterface`\n\nGradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).\n\nThe Gradio Python & JavaScript Ecosystem\n\nThat's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript. Here are other related parts of the Gradio ecosystem:\n\n* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": ".app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.\n* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications \u2014 for free!\n\n", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "Keep learning about Gradio sequentially using the Gradio Guides, which include 
explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class).\n\nOr, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can also build Gradio applications without writing any code. Simply type `gradio sketch` into your terminal to open up an editor that lets you define and modify Gradio components, adjust their layouts, add events, all through a web editor. Or [use this hosted version of Gradio Sketch, running on Hugging Face Spaces](https://huggingface.co/spaces/aliabid94/Sketch).", "heading1": "Gradio Sketch", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "The documentation will be generated when running `gradio cc build`. You can pass the `--no-generate-docs` argument to turn off this behaviour.\n\nThere is also a standalone `docs` command that allows for greater customisation. 
If you are running this command manually, it should be run _after_ the `version` in your `pyproject.toml` has been bumped but before building the component.\n\nAll arguments are optional.\n\n```bash\ngradio cc docs\n path The directory of the custom component.\n --demo-dir Path to the demo directory.\n --demo-name Name of the demo file\n --space-url URL of the Hugging Face Space to link to\n --generate-space create a documentation space.\n --no-generate-space do not create a documentation space\n --readme-path Path to the README.md file.\n --generate-readme create a README.md file\n --no-generate-readme do not create a README.md file\n --suppress-demo-check suppress validation checks and warnings\n```\n\n", "heading1": "How do I use it?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "The `gradio cc docs` command will generate an interactive Gradio app and a static README file with various features. 
You can see an example here:\n\n- [Gradio app deployed on Hugging Face Spaces]()\n- [README.md rendered by GitHub]()\n\nThe README.md and space both have the following features:\n\n- A description.\n- Installation instructions.\n- A fully functioning code snippet.\n- Optional links to PyPi, GitHub, and Hugging Face Spaces.\n- API documentation including:\n - An argument table for component initialisation showing types, defaults, and descriptions.\n - A description of how the component affects the user's predict function.\n - A table of events and their descriptions.\n - Any additional interfaces or classes that may be used during initialisation or in the pre- or post-processors.\n\nAdditionally, the Gradio app includes:\n\n- A live demo.\n- A richer, interactive version of the parameter tables.\n- Nicer styling!\n\n", "heading1": "What gets generated?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "The documentation generator uses existing standards to extract the necessary information, namely Type Hints and Docstrings. There are no Gradio-specific APIs for documentation, so following best practices will generally yield the best results.\n\nIf you already use type hints and docstrings in your component source code, you don't need to do much to benefit from this feature, but there are some details that you should be aware of.\n\nPython version\n\nTo get the best documentation experience, you need to use Python `3.10` or greater when generating documentation. This is because some introspection features used to generate the documentation were only added in `3.10`.\n\nType hints\n\nPython type hints are used extensively to provide helpful information for users. \n\n
\n What are type hints?\n\n\nIf you need to become more familiar with type hints in Python, they are a simple way to express what Python types are expected for arguments and return values of functions and methods. They provide a helpful in-editor experience, aid in maintenance, and integrate with various other tools. These types can be simple primitives, like `list`, `str`, and `bool`; they could be more compound types like `list[str]`, `str | None` or `tuple[str, float | int]`; or they can be more complex types using utility classes like [`TypedDict`](https://peps.python.org/pep-0589/#abstract).\n\n[Read more about type hints in Python.](https://realpython.com/lessons/type-hinting/)\n\n\n
\n\nWhat do I need to add hints to?\n\nYou do not need to add type hints to every part of your code. For the documentation to work correctly, you will need to add type hints to the following component methods:\n\n- `__init__` parameters should be typed.\n- `postprocess` parameters and return value should be typed.\n- `preprocess` parameters and return value should be typed.\n\nIf you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you ma", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "be typed.\n- `preprocess` parameters and return value should be typed.\n\nIf you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you make.\n\n`__init__`\n\nHere, you only need to type the parameters. If you have cloned a template with `gradio cc create`, these should already be in place. 
You will only need to add new hints for anything you have added or changed:\n\n```py\ndef __init__(\n    self,\n    value: str | None = None,\n    *,\n    sources: Literal[\"upload\", \"microphone\"] = \"upload\",\n    every: Timer | float | None = None,\n    ...\n):\n    ...\n```\n\n`preprocess` and `postprocess`\n\nThe `preprocess` and `postprocess` methods determine the value passed to the user function and the value that needs to be returned.\n\nEven if the design of your component is primarily as an input or an output, it is worth adding type hints to both the input parameters and the return values because Gradio has no way of limiting how components can be used.\n\nIn this case, we specifically care about:\n\n- The return type of `preprocess`.\n- The input type of `postprocess`.\n\n```py\ndef preprocess(\n    self, payload: FileData | None  # input is optional\n) -> tuple[int, str] | str | None:\n\n# user function input is the preprocess return \u25b2\n# user function output is the postprocess input \u25bc\n\ndef postprocess(\n    self, value: tuple[int, str] | None\n) -> FileData | bytes | None:  # return is optional\n    ...\n```\n\nDocstrings\n\nDocstrings are also used extensively to extract more meaningful, human-readable descriptions of certain parts of the API.\n\n
\n What are docstrings?\n\n\nIf you need to become more familiar with docstrings in Python, they are a way to annotate parts of your code with human-readable descriptions and explanations. They offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement i", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement is where they appear. Docstrings should be \"a string literal that occurs as the first statement in a module, function, class, or method definition\".\n\n[Read more about Python docstrings.](https://peps.python.org/pep-0257/#what-is-a-docstring)\n\n
\n\nWhile docstrings don't have any syntax requirements, we need a particular structure for documentation purposes.\n\nAs with type hints, the specific information we care about is as follows:\n\n- `__init__` parameter docstrings.\n- `preprocess` return docstrings.\n- `postprocess` input parameter docstrings.\n\nEverything else is optional.\n\nDocstrings should always take this format to be picked up by the documentation generator:\n\nClasses\n\n```py\n\"\"\"\nA description of the class.\n\nThis can span multiple lines and can _contain_ *markdown*.\n\"\"\"\n```\n\nMethods and functions \n\nMarkdown in these descriptions will not be converted into formatted text.\n\n```py\n\"\"\"\nParameters:\n param_one: A description for this parameter.\n param_two: A description for this parameter.\nReturns:\n A description for this return value.\n\"\"\"\n```\n\nEvents\n\nIn custom components, events are expressed as a list stored on the `events` field of the component class. While we do not need types for events, we _do_ need a human-readable description so users can understand the behaviour of the event.\n\nTo facilitate this, we must create the event in a specific way.\n\nThere are two ways to add events to a custom component.\n\nBuilt-in events\n\nGradio comes with a variety of built-in events that may be enough for your component. 
If you are using built-in events, you do not need to do anything as they already have descriptions we can extract:\n\n```py\nfrom gradio.events import Events\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n Events.up", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "do not need to do anything as they already have descriptions we can extract:\n\n```py\nfrom gradio.events import Events\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n Events.upload,\n ]\n```\n\nCustom events\n\nYou can define a custom event if the built-in events are unsuitable for your use case. This is a straightforward process, but you must create the event in this way for docstrings to work correctly:\n\n```py\nfrom gradio.events import Events, EventListener\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n EventListener(\n \"bingbong\",\n doc=\"This listener is triggered when the user does a bingbong.\"\n )\n ]\n```\n\nDemo\n\nThe `demo/app.py`, often used for developing the component, generates the live demo and code snippet. The only strict rule here is that the `demo.launch()` command must be contained within a `__name__ == \"__main__\"` conditional as below:\n\n```py\nif __name__ == \"__main__\":\n demo.launch()\n```\n\nThe documentation generator will scan for such a clause and error if absent. 
If you are _not_ launching the demo inside the `demo/app.py`, then you can pass `--suppress-demo-check` to turn off this check.\n\nDemo recommendations\n\nAlthough there are no additional rules, there are some best practices you should bear in mind to get the best experience from the documentation generator.\n\nThese are only guidelines, and every situation is unique, but they are sound principles to remember.\n\nKeep the demo compact\n\nCompact demos look better and make it easier for users to understand what the demo does. Try to remove as many extraneous UI elements as possible to focus the users' attention on the core use case. \n\nSometimes, it might make sense to have a `demo/app.py` just for the docs and an additional, more complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.\n\n", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "ore complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.\n\nKeep the code concise\n\nThe 'getting started' snippet utilises the demo code, which should be as short as possible to keep users engaged and avoid confusion.\n\nIt isn't the job of the sample snippet to demonstrate the whole API; this snippet should be the shortest path to success for a new user. It should be easy to type or copy-paste and easy to understand. Explanatory comments should be brief and to the point.\n\nAvoid external dependencies\n\nAs mentioned above, users should be able to copy-paste a snippet and have a fully working app. 
Try to avoid third-party library dependencies to facilitate this.\n\nYou should carefully consider any examples; avoiding examples that require additional files or that make assumptions about the environment is generally a good idea.\n\nEnsure the `demo` directory is self-contained\n\nOnly the `demo` directory will be uploaded to Hugging Face spaces in certain instances, as the component will be installed via PyPi if possible. It is essential that this directory is self-contained and any files needed for the correct running of the demo are present.\n\nAdditional URLs\n\nThe documentation generator will generate a few buttons, providing helpful information and links to users. They are obtained automatically in some cases, but some need to be explicitly included in the `pyproject.yaml`. \n\n- PyPi Version and link - This is generated automatically.\n- GitHub Repository - This is populated via the `pyproject.toml`'s `project.urls.repository`.\n- Hugging Face Space - This is populated via the `pyproject.toml`'s `project.urls.space`.\n\nAn example `pyproject.toml` urls section might look like this:\n\n```toml\n[project.urls]\nrepository = \"https://github.com/user/repo-name\"\nspace = \"https://huggingface.co/spaces/user/space-name\"\n```", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "pyproject.toml` urls section might look like this:\n\n```toml\n[project.urls]\nrepository = \"https://github.com/user/repo-name\"\nspace = \"https://huggingface.co/spaces/user/space-name\"\n```", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "Every component in Gradio comes in a `static` variant, and most come in an `interactive` version as well.\nThe `static` version is used 
when a component is displaying a value, and the user can **NOT** change that value by interacting with it. \nThe `interactive` version is used when the user is able to change the value by interacting with the Gradio UI.\n\nLet's see some examples:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Textbox(value=\"Hello\", interactive=True)\n gr.Textbox(value=\"Hello\", interactive=False)\n\ndemo.launch()\n\n```\nThis will display two textboxes.\nThe only difference: you'll be able to edit the value of the Gradio component on top, and you won't be able to edit the variant on the bottom (i.e. the textbox will be disabled).\n\nPerhaps a more interesting example is with the `Image` component:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Image(interactive=True)\n gr.Image(interactive=False)\n\ndemo.launch()\n```\n\nThe interactive version of the component is much more complex -- you can upload images or snap a picture from your webcam -- while the static version can only be used to display images.\n\nNot every component has a distinct interactive version. For example, the `gr.AnnotatedImage` only appears as a static version since there's no way to interactively change the value of the annotations or the image.\n\nWhat you need to remember\n\n* Gradio will use the interactive version (if available) of a component if that component is used as the **input** to any event; otherwise, the static version will be used.\n\n* When you design custom components, you **must** accept the boolean interactive keyword in the constructor of your Python class. In the frontend, you **may** accept the `interactive` property, a `bool` which represents whether the component should be static or interactive. 
If you do not use this property in the frontend, the component will appear the same in interactive or static mode.\n\n", "heading1": "Interactive vs Static", "source_page_url": "https://gradio.app/guides/key-component-concepts", "source_page_title": "Custom Components - Key Component Concepts Guide"}, {"text": "The most important attribute of a component is its `value`.\nEvery component has a `value`.\nThe value is typically set by the user in the frontend (if the component is interactive) or displayed to the user (if it is static). \nIt is also this value that is sent to the backend function when a user triggers an event, or returned by the user's function e.g. at the end of a prediction.\n\nSo this value is passed around quite a bit, but sometimes the format of the value needs to change between the frontend and backend. \nTake a look at this example:\n\n```python\nimport numpy as np\nimport gradio as gr\n\ndef sepia(input_img):\n    sepia_filter = np.array([\n        [0.393, 0.769, 0.189],\n        [0.349, 0.686, 0.168],\n        [0.272, 0.534, 0.131]\n    ])\n    sepia_img = input_img.dot(sepia_filter.T)\n    sepia_img /= sepia_img.max()\n    return sepia_img\n\ndemo = gr.Interface(sepia, gr.Image(width=200, height=200), \"image\")\ndemo.launch()\n```\n\nThis will create a Gradio app which has an `Image` component as the input and the output. \nIn the frontend, the Image component will actually **upload** the file to the server and send the **filepath** but this is converted to a `numpy` array before it is sent to a user's function. \nConversely, when the user returns a `numpy` array from their function, the numpy array is converted to a file so that it can be sent to the frontend and displayed by the `Image` component.\n\nTip: By default, the `Image` component sends numpy arrays to the python function because it is a common choice for machine learning engineers, though the Image component also supports other formats using the `type` parameter. 
Read the `Image` docs [here](https://www.gradio.app/docs/image) to learn more.\n\nEach component does two conversions:\n\n1. `preprocess`: Converts the `value` from the format sent by the frontend to the format expected by the python function. This usually involves going from a web-friendly **JSON** structure to a **python-native** data structure, like a `n", "heading1": "The value and how it is preprocessed/postprocessed", "source_page_url": "https://gradio.app/guides/key-component-concepts", "source_page_title": "Custom Components - Key Component Concepts Guide"}, {"text": " from the format sent by the frontend to the format expected by the python function. This usually involves going from a web-friendly **JSON** structure to a **python-native** data structure, like a `numpy` array or `PIL` image. The `Audio`, `Image` components are good examples of `preprocess` methods.\n\n2. `postprocess`: Converts the value returned by the python function to the format expected by the frontend. This usually involves going from a **python-native** data-structure, like a `PIL` image to a **JSON** structure.\n\nWhat you need to remember\n\n* Every component must implement `preprocess` and `postprocess` methods. In the rare event that no conversion needs to happen, simply return the value as-is. `Textbox` and `Number` are examples of this. \n\n* As a component author, **YOU** control the format of the data displayed in the frontend as well as the format of the data someone using your component will receive. 
Think of an ergonomic data-structure a **python** developer will find intuitive, and control the conversion from a **Web-friendly JSON** data structure (and vice-versa) with `preprocess` and `postprocess`.\n\n", "heading1": "The value and how it is preprocessed/postprocessed", "source_page_url": "https://gradio.app/guides/key-component-concepts", "source_page_title": "Custom Components - Key Component Concepts Guide"}, {"text": "Gradio apps support providing example inputs -- and these are very useful in helping users get started using your Gradio app. \nIn `gr.Interface`, you can provide examples using the `examples` keyword, and in `Blocks`, you can provide examples using the special `gr.Examples` component.\n\nAt the bottom of this screenshot, we show a miniature example image of a cheetah that, when clicked, will populate the same image in the input Image component:\n\n![img](https://user-images.githubusercontent.com/1778297/277548211-a3cb2133-2ffc-4cdf-9a83-3e8363b57ea6.png)\n\n\nTo enable the example view, you must have the following two files at the top level of the `frontend` directory:\n\n* `Example.svelte`: this corresponds to the "example version" of your component\n* `Index.svelte`: this corresponds to the "regular version"\n\nIn the backend, you typically don't need to do anything. The user-provided example `value` is processed using the same `.postprocess()` method described earlier. If you'd like to process the data differently (for example, if the `.postprocess()` method is computationally expensive), then you can write your own `.process_example()` method for your custom component, which will be used instead. 
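For instance, here is a minimal, dependency-free sketch (the `FancyImage` class and its file-name summary are purely illustrative, not real Gradio API) of a `process_example` that sidesteps an expensive `postprocess` when rendering the example view:

```python
# Hypothetical component sketch (not a real Gradio subclass): postprocess()
# stands in for an expensive conversion, while process_example() returns a
# cheap summary that is used only for the examples view.
class FancyImage:
    def postprocess(self, value):
        # imagine costly resizing/encoding happening here
        return {"path": value, "rendered": True}

    def process_example(self, value):
        # cheap alternative: show just the file name in the example view
        return value.split("/")[-1]

comp = FancyImage()
print(comp.process_example("examples/cheetah.png"))  # -> cheetah.png
```

In a real component the same idea applies: define `.process_example()` alongside `.postprocess()`, and the cheaper method is used when rendering examples.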
\n\nThe `Example.svelte` file and `process_example()` method will be covered in greater depth in the dedicated [frontend](./frontend) and [backend](./backend) guides respectively.\n\nWhat you need to remember\n\n* If you expect your component to be used as input, it is important to define an "Example" view.\n* If you don't, Gradio will use a default one but it won't be as informative as it can be!\n\n", "heading1": "The \"Example Version\" of a Component", "source_page_url": "https://gradio.app/guides/key-component-concepts", "source_page_title": "Custom Components - Key Component Concepts Guide"}, {"text": "Now that you know the most important pieces to remember about Gradio components, you can start to design and build your own!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/key-component-concepts", "source_page_title": "Custom Components - Key Component Concepts Guide"}, {"text": "You will need to have:\n\n* Python 3.10+ (install here)\n* pip 21.3+ (`python -m pip install --upgrade pip`)\n* Node.js 20+ (install here)\n* npm 9+ (install here)\n* Gradio 5+ (`pip install --upgrade gradio`)\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/custom-components-in-five-minutes", "source_page_title": "Custom Components - Custom Components In Five Minutes Guide"}, {"text": "The Custom Components workflow consists of 4 steps: create, dev, build, and publish.\n\n1. create: creates a template for you to start developing a custom component.\n2. dev: launches a development server with a sample app & hot reloading, allowing you to easily develop your custom component.\n3. build: builds a python package containing your custom component's Python and JavaScript code -- this makes things official!\n4. publish: uploads your package to [PyPi](https://pypi.org/) and/or a sample app to [HuggingFace Spaces](https://hf.co/spaces).\n\nEach of these steps is done via the Custom Component CLI. 
You can invoke it with `gradio cc` or `gradio component`.\n\nTip: Run `gradio cc --help` to get a help menu of all available commands. There are some commands that are not covered in this guide. You can also append `--help` to any command name to bring up a help page for that command, e.g. `gradio cc create --help`.\n\n", "heading1": "The Workflow", "source_page_url": "https://gradio.app/guides/custom-components-in-five-minutes", "source_page_title": "Custom Components - Custom Components In Five Minutes Guide"}, {"text": "Bootstrap a new template by running the following in any working directory:\n\n```bash\ngradio cc create MyComponent --template SimpleTextbox\n```\n\nInstead of `MyComponent`, give your component any name.\n\nInstead of `SimpleTextbox`, you can use any Gradio component as a template. `SimpleTextbox` is actually a special component: a stripped-down version of the `Textbox` component, which makes it particularly useful when creating your first custom component.\nSome other templates that are good if you are starting out: `SimpleDropdown`, `SimpleImage`, or `File`.\n\nTip: Run `gradio cc show` to get a list of available component templates.\n\nThe `create` command will:\n\n1. Create a directory with your component's name in lowercase with the following structure:\n```directory\n- backend/ <- The python code for your custom component\n- frontend/ <- The javascript code for your custom component\n- demo/ <- A sample app using your custom component. Modify this to develop your component!\n- pyproject.toml <- Used to build the package and specify package metadata.\n```\n\n2. Install the component in development mode.\n\nEach of the directories will have the code you need to get started developing!\n\n", "heading1": "1. 
create", "source_page_url": "https://gradio.app/guides/custom-components-in-five-minutes", "source_page_title": "Custom Components - Custom Components In Five Minutes Guide"}, {"text": "Once you have created your new component, you can start a development server by entering the directory and running\n\n```bash\ngradio cc dev\n```\n\nYou'll see several lines that are printed to the console.\nThe most important one is the one that says:\n\n> Frontend Server (Go here): http://localhost:7861/\n\nThe port number might be different for you.\nClick on that link to launch the demo app in hot reload mode.\nNow, you can start making changes to the backend and frontend, and you'll see the results reflected live in the sample app!\nWe'll go through a real example in a later guide.\n\nTip: You don't have to run dev mode from your custom component directory. The first argument to `dev` mode is the path to the directory. By default it uses the current directory.\n\n", "heading1": "2. dev", "source_page_url": "https://gradio.app/guides/custom-components-in-five-minutes", "source_page_title": "Custom Components - Custom Components In Five Minutes Guide"}, {"text": "Once you are satisfied with your custom component's implementation, you can `build` it to use it outside of the development server.\n\nFrom your component directory, run:\n\n```bash\ngradio cc build\n```\n\nThis will create a `tar.gz` and `.whl` file in a `dist/` subdirectory.\nIf you or anyone installs that `.whl` file (`pip install `), they will be able to use your custom component in any gradio app!\n\nThe `build` command will also generate documentation for your custom component. This takes the form of an interactive space and a static `README.md`. You can disable this by passing `--no-generate-docs`. You can read more about the documentation generator in [the dedicated guide](https://gradio.app/guides/documenting-custom-components).\n\n", "heading1": "3. 
build", "source_page_url": "https://gradio.app/guides/custom-components-in-five-minutes", "source_page_title": "Custom Components - Custom Components In Five Minutes Guide"}, {"text": "Right now, your package is only available as a `.whl` file on your computer.\nYou can share that file with the world with the `publish` command!\n\nSimply run the following command from your component directory:\n\n```bash\ngradio cc publish\n```\n\nThis will guide you through the following process:\n\n1. Upload your distribution files to PyPI. This makes it easier to upload the demo to Hugging Face Spaces; otherwise, your package must be hosted at a publicly available URL. If you decide to upload to PyPI, you will need a PyPI username and password. You can get one [here](https://pypi.org/account/register/).\n2. Upload a demo of your component to Hugging Face Spaces. This is also optional.\n\n\nHere is an example of what publishing looks like:\n\n\n\n\n", "heading1": "4. publish", "source_page_url": "https://gradio.app/guides/custom-components-in-five-minutes", "source_page_title": "Custom Components - Custom Components In Five Minutes Guide"}, {"text": "Now that you know the high-level workflow of creating custom components, you can go in depth in the next guides!\nAfter reading the guides, check out this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub so you can learn from others' code.\n\nTip: If you want to start off from someone else's custom component, see this [guide](./frequently-asked-questions#do-i-always-need-to-start-my-component-from-scratch).\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/custom-components-in-five-minutes", "source_page_title": "Custom Components - Custom Components In Five Minutes Guide"}, {"text": "Before using Custom Components, make sure you have Python 3.10+, Node.js v18+, npm 9+, and Gradio 4.0+ (preferably Gradio 5.0+) installed.\n\n", 
"heading1": "What do I need to install before using Custom Components?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "Custom components built with Gradio 5.0 should be compatible with Gradio 4.0. If you built your custom component in Gradio 4.0, you will have to rebuild your component to be compatible with Gradio 5.0. Simply follow these steps:\n1. Update the `@gradio/preview` package. `cd` into the `frontend` directory and run `npm update`.\n2. Modify the `dependencies` key in `pyproject.toml` to pin the maximum allowed Gradio version at version 5, e.g. `dependencies = [\"gradio>=4.0,<6.0\"]`.\n3. Run the build and publish commands.\n\n", "heading1": "Are custom components compatible between Gradio 4.0 and 5.0?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "Run `gradio cc show` to see the list of built-in templates.\nYou can also start off from others' custom components!\nSimply `git clone` their repository and make your modifications.\n\n", "heading1": "What templates can I use to create my custom component?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "When you run `gradio cc dev`, a development server will load and run a Gradio app of your choosing.\nThis is like when you run `python .py`; however, the `gradio` command will hot reload so you can instantly see your changes. \n\n", "heading1": "What is the development server?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "**1. Check your terminal and browser console**\n\nMake sure there are no syntax errors or other obvious problems in your code. 
Exceptions triggered from python will be displayed in the terminal. Exceptions from javascript will be displayed in the browser console and/or the terminal.\n\n**2. Are you developing on Windows?**\n\nChrome on Windows will block the local compiled svelte files for security reasons. We recommend developing your custom component in the Windows Subsystem for Linux (WSL) while the team looks at this issue.\n\n**3. Inspect the window.__GRADIO_CC__ variable**\n\nIn the browser console, print the `window.__GRADIO_CC__` variable (just type it into the console). If it is an empty object, that means that the CLI could not find your custom component source code. Typically, this happens when the custom component is installed in a different virtual environment than the one used to run the dev command. Please use the `--python-path` and `--gradio-path` CLI arguments to specify the path of the python and gradio executables for the environment your component is installed in. For example, if you are using a virtualenv located at `/Users/mary/venv`, pass in `/Users/mary/venv/bin/python` and `/Users/mary/venv/bin/gradio` respectively.\n\nIf the `window.__GRADIO_CC__` variable is not empty (see below for an example), then the dev server should be working correctly. \n\n![](https://gradio-builds.s3.amazonaws.com/demo-files/gradio_CC_DEV.png)\n\n**4. Make sure you are using a virtual environment**\n\nIt is highly recommended you use a virtual environment to prevent conflicts with other python dependencies installed in your system.\n\n\n", "heading1": "The development server didn't work for me", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "No! You can start off from an existing gradio component as a template; see the [five minute guide](./custom-components-in-five-minutes).\nYou can also start from an existing custom component if you'd like to tweak it further. 
Once you find the source code of a custom component you like, clone the code to your computer and run `gradio cc install`. Then you can run the development server to make changes. If you run into any issues, contact the author of the component by opening an issue in their repository. The [gallery](https://www.gradio.app/custom-components/gallery) is a good place to look for published components. For example, to start from the [PDF component](https://www.gradio.app/custom-components/gallery?id=freddyaboulton%2Fgradio_pdf), clone the space with `git clone https://huggingface.co/spaces/freddyaboulton/gradio_pdf`, `cd` into the `src` directory, and run `gradio cc install`.\n\n\n", "heading1": "Do I always need to start my component from scratch?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "You can develop and build your custom component without hosting or connecting to HuggingFace.\nIf you would like to share your component with the gradio community, it is recommended to publish your package to PyPI and host a demo on HuggingFace so that anyone can install it or try it out.\n\n", "heading1": "Do I need to host my custom component on HuggingFace Spaces?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "You must implement the `preprocess`, `postprocess`, `example_payload`, and `example_value` methods. If your component does not use a data model, you must also define the `api_info`, `flag`, and `read_from_flag` methods. 
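As a rough, dependency-free skeleton of those four mandatory methods (names are illustrative only; a real component subclasses `gradio.components.base.Component`, as the backend guide shows):

```python
# Plain-Python skeleton of the four mandatory methods for a text-like
# component where no conversion is needed in either direction.
class MyTextbox:
    def preprocess(self, payload):
        # frontend JSON payload -> value handed to the user's function
        return payload

    def postprocess(self, value):
        # value returned by the user's function -> frontend JSON payload
        return value

    def example_payload(self):
        # an example of what the frontend would send
        return "Hello!"

    def example_value(self):
        # an example of what a user's function might return
        return "Hello!"

c = MyTextbox()
print(c.postprocess(c.preprocess(c.example_payload())))  # -> Hello!
```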
Read more in the [backend guide](./backend).\n\n", "heading1": "What methods are mandatory for implementing a custom component in Gradio?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "A `data_model` defines the expected data format for your component, simplifying the component development process and self-documenting your code. It streamlines API usage and example caching.\n\n", "heading1": "What is the purpose of a `data_model` in Gradio custom components?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "Utilizing `FileData` is crucial for components that expect file uploads. It ensures secure file handling, automatic caching, and streamlined client library functionality.\n\n", "heading1": "Why is it important to use `FileData` for components dealing with file uploads?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "You can define event triggers in the `EVENTS` class attribute by listing the desired event names, which automatically adds corresponding methods to your component.\n\n", "heading1": "How can I add event triggers to my custom Gradio component?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "Yes, it is possible to create custom components without a `data_model`, but you are going to have to manually implement `api_info`, `flag`, and `read_from_flag` methods.\n\n", "heading1": "Can I implement a custom Gradio component without defining a `data_model`?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions 
Guide"}, {"text": "We have prepared this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub that you can use to get started!\n\n", "heading1": "Are there sample custom components I can learn from?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "We're working on creating a gallery to make it really easy to discover new custom components.\nIn the meantime, you can search for HuggingFace Spaces that are tagged as a `gradio-custom-component` [here](https://huggingface.co/search/full-text?q=gradio-custom-component&type=space)", "heading1": "How can I find custom components created by the Gradio community?", "source_page_url": "https://gradio.app/guides/frequently-asked-questions", "source_page_title": "Custom Components - Frequently Asked Questions Guide"}, {"text": "For this demo we will be tweaking the existing Gradio `Chatbot` component to display text and media files in the same message.\nLet's create a new custom component directory by templating off of the `Chatbot` component source code.\n\n```bash\ngradio cc create MultimodalChatbot --template Chatbot\n```\n\nAnd we're ready to go!\n\nTip: Make sure to modify the `Author` key in the `pyproject.toml` file.\n\n", "heading1": "Part 1 - Creating our project", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "Open up the `multimodalchatbot.py` file in your favorite code editor and let's get started modifying the backend of our component.\n\nThe first thing we will do is create the `data_model` of our component.\nThe `data_model` is the data format that your python component will receive and send to the javascript client running the UI.\nYou can read more about the `data_model` in the [backend 
guide](./backend).\n\nFor our component, each chatbot message will consist of two keys: a `text` key that displays the text message and an optional list of media files that can be displayed underneath the text.\n\nImport the `FileData` and `GradioModel` classes from `gradio.data_classes` and modify the existing `ChatbotData` class to look like the following:\n\n```python\nclass FileMessage(GradioModel):\n file: FileData\n alt_text: Optional[str] = None\n\n\nclass MultimodalMessage(GradioModel):\n text: Optional[str] = None\n files: Optional[List[FileMessage]] = None\n\n\nclass ChatbotData(GradioRootModel):\n root: List[Tuple[Optional[MultimodalMessage], Optional[MultimodalMessage]]]\n\n\nclass MultimodalChatbot(Component):\n ...\n data_model = ChatbotData\n```\n\n\nTip: The `data_model`s are implemented using `Pydantic V2`. Read the documentation [here](https://docs.pydantic.dev/latest/).\n\nWe've done the hardest part already!\n\n", "heading1": "Part 2a - The backend data_model", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "For the `preprocess` method, we will keep it simple and pass a list of `MultimodalMessage`s to the python functions that use this component as input. 
\nThis will let users of our component access the chatbot data with `.text` and `.files` attributes.\nThis is a design choice that you can modify in your implementation!\nWe can return the list of messages with the `root` property of the `ChatbotData` like so:\n\n```python\ndef preprocess(\n self,\n payload: ChatbotData | None,\n) -> List[MultimodalMessage] | None:\n if payload is None:\n return payload\n return payload.root\n```\n\n\nTip: Learn about the reasoning behind the `preprocess` and `postprocess` methods in the [key concepts guide](./key-component-concepts).\n\nIn the `postprocess` method we will coerce each message returned by the python function to a `MultimodalMessage` instance. \nWe will also clean up any indentation in the `text` field so that it can be properly displayed as markdown in the frontend.\n\nWe can leave the `postprocess` method as is and modify the `_postprocess_chat_messages` helper:\n\n```python\ndef _postprocess_chat_messages(\n self, chat_message: MultimodalMessage | dict | None\n) -> MultimodalMessage | None:\n if chat_message is None:\n return None\n if isinstance(chat_message, dict):\n chat_message = MultimodalMessage(**chat_message)\n chat_message.text = inspect.cleandoc(chat_message.text or \"\")\n for file_ in chat_message.files or []:\n file_.file.mime_type = client_utils.get_mimetype(file_.file.path)\n return chat_message\n```\n\nBefore we wrap up with the backend code, let's modify the `example_value` and `example_payload` methods to return a valid dictionary representation of the `ChatbotData`:\n\n```python\ndef example_value(self) -> Any:\n return [[{\"text\": \"Hello!\", \"files\": []}, None]]\n\ndef example_payload(self) -> Any:\n return [[{\"text\": \"Hello!\", \"files\": []}, None]]\n```\n\nCongrats - the backend is complete!\n\n", "heading1": "Part 2b - The pre and postprocess methods", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, 
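(Aside: a dependency-free sketch of what a user's function receives after the `preprocess` above. Plain dataclasses stand in for the `GradioModel` classes; nothing here imports gradio or pydantic, and `bot_fn` is purely illustrative.)

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Stand-ins for the GradioModel classes, mirroring their fields.
@dataclass
class FileMessage:
    file: str                      # stands in for FileData
    alt_text: Optional[str] = None

@dataclass
class MultimodalMessage:
    text: Optional[str] = None
    files: List[FileMessage] = field(default_factory=list)

def bot_fn(history: List[Tuple[MultimodalMessage, Optional[MultimodalMessage]]]) -> str:
    # history is shaped like the list returned by preprocess()
    user_msg, _ = history[-1]
    return f"You said {user_msg.text!r} and sent {len(user_msg.files)} file(s)."

history = [(MultimodalMessage(text="Hi", files=[FileMessage(file="cat.png")]), None)]
print(bot_fn(history))  # -> You said 'Hi' and sent 1 file(s).
```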
{"text": "The frontend for the `Chatbot` component is divided into two parts - the `Index.svelte` file and the `shared/Chatbot.svelte` file.\nThe `Index.svelte` file applies some processing to the data received from the server and then delegates the rendering of the conversation to the `shared/Chatbot.svelte` file.\nFirst we will modify the `Index.svelte` file to apply processing to the new data type the backend will return.\n\nLet's begin by porting our custom types from our python `data_model` to typescript.\nOpen `frontend/shared/utils.ts` and add the following type definitions at the top of the file:\n\n```ts\nexport type FileMessage = {\n\tfile: FileData;\n\talt_text?: string;\n};\n\n\nexport type MultimodalMessage = {\n\ttext: string;\n\tfiles?: FileMessage[];\n};\n```\n\nNow let's import them in `Index.svelte` and modify the type annotations for `value` and `_value`.\n\n```ts\nimport type { FileMessage, MultimodalMessage } from \"./shared/utils\";\n\nexport let value: [\n MultimodalMessage | null,\n MultimodalMessage | null\n][] = [];\n\nlet _value: [\n MultimodalMessage | null,\n MultimodalMessage | null\n][];\n```\n\nWe need to normalize each message to make sure each file has a proper URL to fetch its contents from.\nWe also need to format any embedded file links in the `text` key.\nLet's add a `process_message` utility function and apply it whenever the `value` changes.\n\n```ts\nfunction process_message(msg: MultimodalMessage | null): MultimodalMessage | null {\n if (msg === null) {\n return msg;\n }\n msg.text = redirect_src_url(msg.text);\n msg.files = (msg.files || []).map(normalize_messages);\n return msg;\n}\n\n$: _value = value\n ? 
value.map(([user_msg, bot_msg]) => [\n process_message(user_msg),\n process_message(bot_msg)\n ])\n : [];\n```\n\n", "heading1": "Part 3a - The Index.svelte file", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "Let's begin as we did with the `Index.svelte` file and first modify the type annotations.\nImport `MultimodalMessage` at the top of the `<script>` section.\n\n```svelte\n<!-- [Most of this Svelte example was stripped during extraction; only the loading-status block and the value binding survive.] -->\n\t{#if loading_status}\n\t\t<!-- [status markup stripped] -->\n\t{/if}\n\t{value}
\n\n```\n\n", "heading1": "The Index.svelte file", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "The `Example.svelte` file should expose the following props:\n\n```typescript\n export let value: string;\n export let type: \"gallery\" | \"table\";\n export let selected = false;\n export let index: number;\n```\n\n* `value`: The example value that should be displayed.\n\n* `type`: This is a variable that can be either `\"gallery\"` or `\"table\"` depending on how the examples are displayed. The `\"gallery\"` form is used when the examples correspond to a single input component, while the `\"table\"` form is used when a user has multiple input components, and the examples need to populate all of them. \n\n* `selected`: You can also adjust how the examples are displayed if a user \"selects\" a particular example by using the selected variable.\n\n* `index`: The current index of the selected value.\n\n* Any additional props your \"non-example\" component takes!\n\nThis is the `Example.svelte` file for the code `Radio` component:\n\n```svelte\n\n\n\n\t{value}\n
\n\n\n```\n\n", "heading1": "The Example.svelte file", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "If your component deals with files, these files **should** be uploaded to the backend server. \nThe `@gradio/client` npm package provides the `upload` and `prepare_files` utility functions to help you do this.\n\nThe `prepare_files` function will convert the browser's `File` datatype to gradio's internal `FileData` type.\nYou should use the `FileData` type in your component to keep track of uploaded files.\n\nThe `upload` function will upload an array of `FileData` values to the server.\n\nHere's an example of loading files from an `<input>` element when its value changes.\n\n```svelte\n<!-- [script contents stripped during extraction] -->\n```\n\nThe component exposes a prop named `root`. \nThis is passed down by the parent gradio app and it represents the base URL that the files will be uploaded to and fetched from.\n\nFor WASM support, you should get the upload function from the `Context` and pass that as the third parameter of the `upload` function.\n\n```typescript\n// [snippet stripped during extraction]\n```\n\n", "heading1": "Handling Files", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "Most of Gradio's frontend components are published on [npm](https://www.npmjs.com/), the javascript package repository.\nThis means that you can use them to save yourself time while incorporating common patterns in your component, like uploading files.\nFor example, the `@gradio/upload` package has `Upload` and `ModifyUpload` components for properly uploading files to the Gradio server. 
\nHere is how you can use them to create a user interface to upload and display PDF files.\n\n```svelte\n<!-- [script contents stripped during extraction] -->\n{#if value === null && interactive}\n <!-- [Upload markup stripped during extraction] -->\n{:else if value !== null}\n {#if interactive}\n <!-- [ModifyUpload markup stripped during extraction] -->\n {/if}\n <!-- [PDF display markup stripped during extraction] -->\n{:else}\n \t<!-- [empty-state markup stripped during extraction] -->\n{/if}\n```\n\nYou can also combine existing Gradio components to create entirely unique experiences, like rendering a gallery of chatbot conversations. \nThe possibilities are endless; please read the documentation on our javascript packages [here](https://gradio.app/main/docs/js).\nWe'll be adding more packages and documentation over the coming weeks!\n\n", "heading1": "Leveraging Existing Gradio Components", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "You can explore our component library via Storybook. You'll be able to interact with our components and see them in their various states.\n\nFor those interested in design customization, we provide the CSS variables consisting of our color palette, radii, spacing, and the icons we use - so you can easily match up your custom component with the style of our core components. This Storybook will be regularly updated with any new additions or changes.\n\n[Storybook Link](https://gradio.app/main/docs/js/storybook)\n\n", "heading1": "Matching Gradio Core's Design System", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "If you want to make use of the vast Vite ecosystem, you can use the `gradio.config.js` file to configure your component's build process. 
This allows you to make use of tools like tailwindcss, mdsvex, and more.\n\nCurrently, it is possible to configure the following:\n\nVite options:\n- `plugins`: A list of vite plugins to use.\n\nSvelte options:\n- `preprocess`: A list of svelte preprocessors to use.\n- `extensions`: A list of file extensions to compile to `.svelte` files.\n- `build.target`: The target to build for, this may be necessary to support newer javascript features. See the [esbuild docs](https://esbuild.github.io/api/target) for more information.\n\nThe `gradio.config.js` file should be placed in the root of your component's `frontend` directory. A default config file is created for you when you create a new component. But you can also create your own config file, if one doesn't exist, and use it to customize your component's build process.\n\nExample for a Vite plugin\n\nCustom components can use Vite plugins to customize the build process. Check out the [Vite Docs](https://vitejs.dev/guide/using-plugins.html) for more information. \n\nHere we configure [TailwindCSS](https://tailwindcss.com), a utility-first CSS framework. Setup is easiest using the version 4 prerelease. \n\n```\nnpm install tailwindcss@next @tailwindcss/vite@next\n```\n\nIn `gradio.config.js`:\n\n```typescript\nimport tailwindcss from \"@tailwindcss/vite\";\nexport default {\n plugins: [tailwindcss()]\n};\n```\n\nThen create a `style.css` file with the following content:\n\n```css\n@import \"tailwindcss\";\n```\n\nImport this file into `Index.svelte`. 
Note that you need to import the css file containing `@import` and cannot just use a `\n```\n\nNow import `PdfUploadText.svelte` in your `\n\n\n\t\n\n\n\n```\n\n\nTip: Exercise for the reader - reduce the code duplication between `Index.svelte` and `Example.svelte` \ud83d\ude0a\n\n\nYou will not be able to render examples until we make some changes to the backend code in the next step!\n\n", "heading1": "Step 8.5: The Example view", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "The backend changes needed are smaller.\nWe're almost done!\n\nWhat we're going to do is:\n* Add `change` and `upload` events to our component.\n* Add a `height` property to let users control the height of the PDF.\n* Set the `data_model` of our component to be `FileData`. This is so that Gradio can automatically cache and safely serve any files that are processed by our component.\n* Modify the `preprocess` method to return a string corresponding to the path of our uploaded PDF.\n* Modify the `postprocess` method to turn the path of a PDF created in an event handler into a `FileData`.\n\nWhen all is said and done, your component's backend code should look like this:\n\n```python\nfrom __future__ import annotations\nfrom typing import Any, Callable, TYPE_CHECKING\n\nfrom gradio.components.base import Component\nfrom gradio.data_classes import FileData\nfrom gradio import processing_utils\nif TYPE_CHECKING:\n from gradio.components import Timer\n\nclass PDF(Component):\n\n EVENTS = [\"change\", \"upload\"]\n\n data_model = FileData\n\n def __init__(self, value: Any = None, *,\n height: int | None = None,\n label: str | I18nData | None = None,\n info: str | I18nData | None = None,\n show_label: bool | None = None,\n container: bool = True,\n scale: int | None = None,\n min_width: int | None = None,\n interactive: bool | None = None,\n visible: bool = True,\n elem_id: str | None = None,\n 
elem_classes: list[str] | str | None = None,\n render: bool = True,\n load_fn: Callable[..., Any] | None = None,\n every: Timer | float | None = None):\n super().__init__(value, label=label, info=info,\n show_label=show_label, container=container,\n scale=scale, min_width=min_width,\n interactive=interactive, visible=visible,\n ", "heading1": "Step 9: The backend", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": " show_label=show_label, container=container,\n scale=scale, min_width=min_width,\n interactive=interactive, visible=visible,\n elem_id=elem_id, elem_classes=elem_classes,\n render=render, load_fn=load_fn, every=every)\n self.height = height\n\n def preprocess(self, payload: FileData) -> str:\n return payload.path\n\n def postprocess(self, value: str | None) -> FileData | None:\n if not value:\n return None\n return FileData(path=value)\n\n def example_payload(self):\n return \"https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf\"\n\n def example_value(self):\n return \"https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf\"\n```\n\n", "heading1": "Step 9: The backend", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "To test our backend code, let's add a more complex demo that performs Document Question and Answering with Hugging Face transformers.\n\nIn our `demo` directory, create a `requirements.txt` file with the following packages:\n\n```\ntorch\ntransformers\npdf2image\npytesseract\n```\n\n\nTip: Remember to install these yourself and restart the dev server! You may need to install extra non-python dependencies for `pdf2image`. See [here](https://pypi.org/project/pdf2image/). 
Feel free to write your own demo if you have trouble.\n\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\nfrom pdf2image import convert_from_path\nfrom transformers import pipeline\nfrom pathlib import Path\n\ndir_ = Path(__file__).parent\n\np = pipeline(\n \"document-question-answering\",\n model=\"impira/layoutlm-document-qa\",\n)\n\ndef qa(question: str, doc: str) -> str:\n img = convert_from_path(doc)[0]\n output = p(img, question)\n return sorted(output, key=lambda x: x[\"score\"], reverse=True)[0]['answer']\n\n\ndemo = gr.Interface(\n qa,\n [gr.Textbox(label=\"Question\"), PDF(label=\"Document\")],\n gr.Textbox(),\n)\n\ndemo.launch()\n```\n\nSee our demo in action below!\n\n\n\nFinally, let's build our component with `gradio cc build` and publish it with the `gradio cc publish` command!\nThis will guide you through the process of uploading your component to [PyPI](https://pypi.org/) and [HuggingFace Spaces](https://huggingface.co/spaces).\n\n\nTip: You may need to add the following lines to the `Dockerfile` of your HuggingFace Space.\n\n```Dockerfile\nRUN mkdir -p /tmp/cache/\nRUN chmod a+rwx -R /tmp/cache/\nRUN apt-get update && apt-get install -y poppler-utils tesseract-ocr\n\nENV TRANSFORMERS_CACHE=/tmp/cache/\n```\n\n
Then you can use it like the built-in `gr.File()` component (except that it will only accept and display PDF files).\n\nHere is a simple demo with the Blocks API:\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\n\nwith gr.Blocks() as demo:\n pdf = PDF(label=\"Upload a PDF\", interactive=True)\n name = gr.Textbox()\n pdf.upload(lambda f: f, pdf, name)\n\ndemo.launch()\n```\n\n\nI hope you enjoyed this tutorial!\nThe complete source code for our component is [here](https://huggingface.co/spaces/freddyaboulton/gradio_pdf/tree/main/src).\nPlease don't hesitate to reach out to the gradio community on the [HuggingFace Discord](https://discord.gg/hugging-face-879548962464493619) if you get stuck.\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "By default, all custom component packages are called `gradio_<component-name>` where `<component-name>` is the name of the component's python class in lowercase.\n\nAs an example, let's walk through changing the name of a component from `gradio_mytextbox` to `supertextbox`. \n\n1. Modify the `name` in the `pyproject.toml` file. \n\n```bash\n[project]\nname = \"supertextbox\"\n```\n\n2. Change all occurrences of `gradio_<component-name>` in `pyproject.toml` to `<component-name>`\n\n```bash\n[tool.hatch.build]\nartifacts = [\"/backend/supertextbox/templates\", \"*.pyi\"]\n\n[tool.hatch.build.targets.wheel]\npackages = [\"/backend/supertextbox\"]\n```\n\n3. Rename the `gradio_<component-name>` directory in `backend/` to `<component-name>`\n\n```bash\nmv backend/gradio_mytextbox backend/supertextbox\n```\n\n\nTip: Remember to change the import statement in `demo/app.py`!\n\n", "heading1": "The Package Name", "source_page_url": "https://gradio.app/guides/configuration", "source_page_title": "Custom Components - Configuration Guide"}, {"text": "By default, only the custom component python class is a top level export. 
\nThis means that when users type `from gradio_<component-name> import ...`, the only class that will be available is the custom component class.\nTo add more classes as top level exports, modify the `__all__` property in `__init__.py`\n\n```python\nfrom .mytextbox import MyTextbox\nfrom .mytextbox import AdditionalClass, additional_function\n\n__all__ = ['MyTextbox', 'AdditionalClass', 'additional_function']\n```\n\n", "heading1": "Top Level Python Exports", "source_page_url": "https://gradio.app/guides/configuration", "source_page_title": "Custom Components - Configuration Guide"}, {"text": "You can add python dependencies by modifying the `dependencies` key in `pyproject.toml`\n\n```bash\ndependencies = [\"gradio\", \"numpy\", \"Pillow\"]\n```\n\n\nTip: Remember to run `gradio cc install` when you add dependencies!\n\n", "heading1": "Python Dependencies", "source_page_url": "https://gradio.app/guides/configuration", "source_page_title": "Custom Components - Configuration Guide"}, {"text": "You can add JavaScript dependencies by modifying the `\"dependencies\"` key in `frontend/package.json`\n\n```json\n\"dependencies\": {\n \"@gradio/atoms\": \"0.2.0-beta.4\",\n \"@gradio/statustracker\": \"0.3.0-beta.6\",\n \"@gradio/utils\": \"0.2.0-beta.4\",\n \"your-npm-package\": \"<version>\"\n}\n```\n\n", "heading1": "Javascript Dependencies", "source_page_url": "https://gradio.app/guides/configuration", "source_page_title": "Custom Components - Configuration Guide"}, {"text": "By default, the CLI will place the Python code in `backend` and the JavaScript code in `frontend`.\nIt is not recommended to change this structure since it makes it easy for a potential contributor to look at your source code and know where everything is.\nHowever, if you did want to, this is what you would have to do:\n\n1. Place the Python code in the subdirectory of your choosing. Remember to modify the `[tool.hatch.build]` and `[tool.hatch.build.targets.wheel]` tables in the `pyproject.toml` to match!\n\n2. 
Place the JavaScript code in the subdirectory of your choosing.\n\n3. Add the `FRONTEND_DIR` property on the component python class. It must be the relative path from the file where the class is defined to the location of the JavaScript directory.\n\n```python\nclass SuperTextbox(Component):\n FRONTEND_DIR = \"../../frontend/\"\n```\n\nThe JavaScript and Python directories must be under the same common directory!\n\n", "heading1": "Directory Structure", "source_page_url": "https://gradio.app/guides/configuration", "source_page_title": "Custom Components - Configuration Guide"}, {"text": "Sticking to the defaults will make it easy for others to understand and contribute to your custom component.\nAfter all, the beauty of open source is that anyone can help improve your code!\nBut if you ever need to deviate from the defaults, you know how!", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/configuration", "source_page_title": "Custom Components - Configuration Guide"}, {"text": "Start by installing all the dependencies. Add the following lines to a `requirements.txt` file and run `pip install -r requirements.txt`:\n\n```bash\nopencv-python\nfastrtc\nonnxruntime-gpu\n```\n\nWe'll use the ONNX runtime to speed up YOLOv10 inference. This guide assumes you have access to a GPU. If you don't, change `onnxruntime-gpu` to `onnxruntime`. Without a GPU, the model will run slower, resulting in a laggy demo.\n\nWe'll use OpenCV for image manipulation and the [WebRTC](https://webrtc.org/) protocol to achieve near-zero latency.\n\n**Note**: If you want to deploy this app on any cloud provider, you'll need to use your Hugging Face token to connect to a TURN server. Learn more in this [guide](https://fastrtc.org/deployment/). 
If you're not familiar with TURN servers, consult this [guide](https://www.twilio.com/docs/stun-turn/faq#faq-what-is-nat).\n\n", "heading1": "Setting up", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "We'll download the YOLOv10 model from the Hugging Face Hub and instantiate a custom inference class to use this model. \n\nThe implementation of the inference class isn't covered in this guide, but you can find the source code [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/inference.py#L9) if you're interested. This implementation borrows heavily from this [github repository](https://github.com/ibaiGorordo/ONNX-YOLOv8-Object-Detection).\n\nWe're using the `yolov10-n` variant because it has the lowest latency. See the [Performance](https://github.com/THU-MIG/yolov10?tab=readme-ov-file#performance) section of the README in the YOLOv10 GitHub repository.\n\n```python\nimport cv2\nfrom huggingface_hub import hf_hub_download\nfrom inference import YOLOv10\n\nmodel_file = hf_hub_download(\n repo_id=\"onnx-community/yolov10n\", filename=\"onnx/model.onnx\"\n)\n\nmodel = YOLOv10(model_file)\n\ndef detection(image, conf_threshold=0.3):\n image = cv2.resize(image, (model.input_width, model.input_height))\n new_image = model.detect_objects(image, conf_threshold)\n return new_image\n```\n\nOur inference function, `detection`, accepts a numpy array from the webcam and a desired confidence threshold. Object detection models like YOLO identify many objects and assign a confidence score to each. The lower the confidence, the higher the chance of a false positive. 
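The thresholding just described is, conceptually, a filter over scored detections. A minimal standalone illustration (this is not the `YOLOv10` class's actual post-processing, which likely also applies non-maximum suppression; the tuple layout is an assumption):

```python
# Each detection is (box, score, class_id); keep only confident predictions.
# This mirrors what a `conf_threshold` parameter does inside a detector.
def filter_by_confidence(detections, conf_threshold=0.3):
    return [d for d in detections if d[1] >= conf_threshold]

dets = [((0, 0, 10, 10), 0.9, 0), ((5, 5, 20, 20), 0.2, 1)]
confident = filter_by_confidence(dets)  # only the 0.9-score detection survives
```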
We'll let users adjust the confidence threshold.\n\nThe function returns a numpy array corresponding to the same input image with all detected objects in bounding boxes.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "The Gradio demo is straightforward, but we'll implement a few specific features:\n\n1. Use the `WebRTC` custom component to ensure input and output are sent to/from the server with WebRTC. \n2. The [WebRTC](https://github.com/freddyaboulton/gradio-webrtc) component will serve as both an input and output component.\n3. Utilize the `time_limit` parameter of the `stream` event. This parameter sets a processing time for each user's stream. In a multi-user setting, such as on Spaces, we'll stop processing the current user's stream after this period and move on to the next. \n\nWe'll also apply custom CSS to center the webcam and slider on the page.\n\n```python\nimport gradio as gr\nfrom fastrtc import WebRTC\n\ncss = \"\"\".my-group {max-width: 600px !important; max-height: 600px !important;}\n .my-column {display: flex !important; justify-content: center !important; align-items: center !important;}\"\"\"\n\nwith gr.Blocks(css=css) as demo:\n gr.HTML(\n \"\"\"\n
<h1 style='text-align: center'>\n YOLOv10 Webcam Stream (Powered by WebRTC \u26a1\ufe0f)\n </h1>\n \"\"\"\n )\n with gr.Column(elem_classes=[\"my-column\"]):\n with gr.Group(elem_classes=[\"my-group\"]):\n image = WebRTC(label=\"Stream\", rtc_configuration=rtc_configuration)\n conf_threshold = gr.Slider(\n label=\"Confidence Threshold\",\n minimum=0.0,\n maximum=1.0,\n step=0.05,\n value=0.30,\n )\n\n image.stream(\n fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10\n )\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "Our app is hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n). \n\nYou can use this app as a starting point to build real-time image applications with Gradio. Don't hesitate to open issues in the space or in the [FastRTC GitHub repo](https://github.com/gradio-app/fastrtc) if you have any questions or encounter problems.", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "Just like the classic Magic 8 Ball, a user should ask it a question orally and then wait for a response. Under the hood, we'll use Whisper to transcribe the audio and then use an LLM to generate a magic-8-ball-style answer. Finally, we'll use Parler TTS to read the response aloud.\n\n", "heading1": "The Overview", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "First let's define the UI and put placeholders for all the python logic.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as block:\n gr.HTML(\n f\"\"\"\n
<h1 style='text-align: center;'>Magic 8 Ball \ud83c\udfb1</h1>\n <h3 style='text-align: center;'>Ask a question and receive wisdom</h3>\n <p style='text-align: center;'>Powered by Parler-TTS</p>\n \"\"\"\n )\n with gr.Group():\n with gr.Row():\n audio_out = gr.Audio(label=\"Spoken Answer\", streaming=True, autoplay=True)\n answer = gr.Textbox(label=\"Answer\")\n state = gr.State()\n with gr.Row():\n audio_in = gr.Audio(label=\"Speak your question\", sources=\"microphone\", type=\"filepath\")\n\n audio_in.stop_recording(generate_response, audio_in, [state, answer, audio_out])\\\n .then(fn=read_response, inputs=state, outputs=[answer, audio_out])\n\nblock.launch()\n```\n\nWe're placing the output Audio and Textbox components and the input Audio component in separate rows. In order to stream the audio from the server, we'll set `streaming=True` in the output Audio component. We'll also set `autoplay=True` so that the audio plays as soon as it's ready.\nWe'll be using the Audio input component's `stop_recording` event to trigger our application's logic when a user stops recording from their microphone.\n\nWe're separating the logic into two parts. First, `generate_response` will take the recorded audio, transcribe it and generate a response with an LLM. We're going to store the response in a `gr.State` variable that then gets passed to the `read_response` function that generates the audio.\n\nWe're doing this in two parts because only `read_response` will require a GPU. Our app will run on Hugging Face's [ZeroGPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU func", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "GPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. 
Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU function as it will needlessly use our GPU quota.\n\n", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "As mentioned above, we'll use [Hugging Face's Inference API](https://huggingface.co/docs/huggingface_hub/guides/inference) to transcribe the audio and generate a response from an LLM. After instantiating the client, I use the `automatic_speech_recognition` method (this automatically uses Whisper running on Hugging Face's Inference Servers) to transcribe the audio. Then I pass the question to an LLM (Mistral-7B-Instruct) to generate a response. We are prompting the LLM to act like a magic 8 ball with the system message.\n\nOur `generate_response` function will also send empty updates to the output textbox and audio components (returning `None`). \nThis is because I want the Gradio progress tracker to be displayed over the components but I don't want to display the answer until the audio is ready.\n\n\n```python\nimport os\nimport random\n\nimport gradio as gr\nfrom huggingface_hub import InferenceClient\n\nclient = InferenceClient(token=os.getenv(\"HF_TOKEN\"))\n\ndef generate_response(audio):\n gr.Info(\"Transcribing Audio\", duration=5)\n question = client.automatic_speech_recognition(audio).text\n\n messages = [{\"role\": \"system\", \"content\": (\"You are a magic 8 ball.\"\n \"Someone will present to you a situation or question and your job \"\n \"is to answer with a cryptic adage or proverb such as \"\n \"'curiosity killed the cat' or 'The early bird gets the worm'.\"\n \"Keep your answers short and do not include the phrase 'Magic 8 Ball' in your response. 
If the question does not make sense or is off-topic, say 'Foolish questions get foolish answers.'\"\n \"For example, 'Magic 8 Ball, should I get a dog?', 'A dog is ready for you but are you ready for the dog?'\")},\n {\"role\": \"user\", \"content\": f\"Magic 8 Ball please answer this question - {question}\"}]\n \n response = client.chat_completion(messages,", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "for you but are you ready for the dog?'\")},\n {\"role\": \"user\", \"content\": f\"Magic 8 Ball please answer this question - {question}\"}]\n \n response = client.chat_completion(messages, max_tokens=64, seed=random.randint(1, 5000),\n model=\"mistralai/Mistral-7B-Instruct-v0.3\")\n\n response = response.choices[0].message.content.replace(\"Magic 8 Ball\", \"\").replace(\":\", \"\")\n return response, None, None\n```\n\n\nNow that we have our text response, we'll read it aloud with Parler TTS. The `read_response` function will be a python generator that yields the next chunk of audio as it's ready.\n\n\nWe'll be using the [Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) for the feature extraction but the [Jenny fine tuned version](https://huggingface.co/parler-tts/parler-tts-mini-jenny-30H) for the voice. This is so that the voice is consistent across generations.\n\n\nStreaming audio with transformers requires a custom Streamer class. You can see the implementation [here](https://huggingface.co/spaces/gradio/magic-8-ball/blob/main/streamer.py). Additionally, we'll convert the output to bytes so that it can be streamed faster from the backend. 
\n\n\n```python\nfrom streamer import ParlerTTSStreamer\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer, AutoFeatureExtractor, set_seed\nimport numpy as np\nimport spaces\nimport torch\nfrom threading import Thread\n\n\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"mps\" if torch.backends.mps.is_available() else \"cpu\"\ntorch_dtype = torch.float16 if device != \"cpu\" else torch.float32\n\nrepo_id = \"parler-tts/parler_tts_mini_v0.1\"\n\njenny_repo_id = \"ylacombe/parler-tts-mini-jenny-30H\"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\n jenny_repo_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True\n).to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nfeature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)\n\nsampling_rate = model.audio_encoder.config.sampling_rate\nf
kwargs=generation_kwargs)\n thread.start()\n\n for new_audio in streamer:\n print(f\"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds\")\n yield answer, numpy_to_mp3(new_audio, sampling_rate=sampling_rate)\n```\n\n", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "You can see our final application [here](https://huggingface.co/spaces/gradio/magic-8-ball)!\n\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).\n\nUsing `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.\n\nThis tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak. \n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. 
In this tutorial, we will build our demo with the Transformers library:\n\n- Transformers (for this, `pip install torch transformers torchaudio`)\n\nMake sure you have it installed so that you can follow along with the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.\n\nHere's how to build a real-time speech recognition (ASR) app:\n\n1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)\n2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)\n3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. In this tutorial, we will start by using a pretrained ASR model, `whisper`.\n\nHere is the code to load `whisper` from Hugging Face `transformers`.\n\n```python\nfrom transformers import pipeline\n\np = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base.en\")\n```\n\nThat's it!\n\n", "heading1": "1. Set up the Transformers ASR Model", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.\n\nWe will use `gradio`'s built in `Audio` component, configured to take input from the user's microphone and return the recorded audio (a sample rate and numpy array). 
The output component will be a plain `Textbox`.\n\n$code_asr\n$demo_asr\n\nThe `transcribe` function takes a single parameter, `audio`, which is a tuple of the sample rate and a numpy array of the audio the user recorded. The `pipeline` object expects audio in float32 format, so we convert the array first to float32, and then extract the transcribed text.\n\n", "heading1": "2. Create a Full-Context ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "To make this a *streaming* demo, we need to make these changes:\n\n1. Set `streaming=True` in the `Audio` component\n2. Set `live=True` in the `Interface`\n3. Add a `state` to the interface to store the recorded audio of a user\n\nTip: You can also set `time_limit` and `stream_every` parameters in the interface. The `time_limit` caps the amount of time each user's stream can take. The default is 30 seconds so users won't be able to stream audio for more than 30 seconds. The `stream_every` parameter controls how frequently data is sent to your function. By default it is 0.5 seconds.\n\nTake a look below.\n\n$code_stream_asr\n\nNotice that we now have a state variable because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in the `stream` and the new chunk of audio as `new_chunk`. We return the new full audio to be stored back in its current state, and we also return the transcription. Here, we naively append the audio together and call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received. 
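The accumulate-and-retranscribe pattern just described can be exercised without the model by injecting a stub transcriber; all names here are illustrative, and the normalization policy is an assumption:

```python
import numpy as np

def make_transcribe(transcriber):
    """Build a streaming `transcribe(stream, new_chunk)` callback.

    `stream` is the audio accumulated so far (None on the first call);
    `new_chunk` is a (sampling_rate, samples) tuple from the Audio component.
    """
    def transcribe(stream, new_chunk):
        sr, y = new_chunk
        y = y.astype(np.float32)
        peak = np.max(np.abs(y))
        if peak > 0:
            y = y / peak  # normalize the new chunk to [-1, 1]
        # append the new chunk to everything heard so far
        stream = y if stream is None else np.concatenate([stream, y])
        # naively re-run ASR on the full audio each time
        return stream, transcriber({"sampling_rate": sr, "raw": stream})["text"]
    return transcribe

# Stub standing in for the transformers pipeline:
fake_asr = lambda inp: {"text": f"{inp['raw'].shape[0]} samples"}
transcribe = make_transcribe(fake_asr)
state, text = transcribe(None, (16000, np.ones(8, dtype=np.int16)))
state, text = transcribe(state, (16000, np.ones(8, dtype=np.int16)))
```

Swapping `fake_asr` for the real `p` pipeline recovers the behavior the guide describes.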
\n\n$demo_stream_asr\n\nNow the ASR model will run inference as you speak! \n", "heading1": "3. Create a Streaming ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "The next generation of AI user interfaces is moving towards audio-native experiences. Users will be able to speak to chatbots and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and [mini omni](https://github.com/gpt-omni/mini-omni).\n\nIn this guide, we'll walk you through building your own conversational chat application using mini omni as an example. You can see a demo of the finished app below:\n\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Our application will enable the following user experience:\n\n1. Users click a button to start recording their message\n2. The app detects when the user has finished speaking and stops recording\n3. The user's audio is passed to the omni model, which streams back a response\n4. After omni mini finishes speaking, the user's microphone is reactivated\n5. 
All previous spoken audio, from both the user and omni, is displayed in a chatbot component.\n\nLet's dive into the implementation details.\n\n", "heading1": "Application Overview", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "We'll stream the user's audio from their microphone to the server and determine if the user has stopped speaking on each new chunk of audio.\n\nHere's our `process_audio` function:\n\n```python\nimport numpy as np\nimport gradio as gr\nfrom utils import determine_pause\n\ndef process_audio(audio: tuple, state: AppState):\n if state.stream is None:\n state.stream = audio[1]\n state.sampling_rate = audio[0]\n else:\n state.stream = np.concatenate((state.stream, audio[1]))\n\n pause_detected = determine_pause(state.stream, state.sampling_rate, state)\n state.pause_detected = pause_detected\n\n if state.pause_detected and state.started_talking:\n return gr.Audio(recording=False), state\n return None, state\n```\n\nThis function takes two inputs:\n1. The current audio chunk (a tuple of `(sampling_rate, numpy array of audio)`)\n2. The current application state\n\nWe'll use the following `AppState` dataclass to manage our application state:\n\n```python\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass AppState:\n stream: np.ndarray | None = None\n sampling_rate: int = 0\n pause_detected: bool = False\n started_talking: bool = False\n stopped: bool = False\n conversation: list = field(default_factory=list)\n```\n\nThe function concatenates new audio chunks to the existing stream and checks if the user has stopped speaking. If a pause is detected, it returns an update to stop recording. 
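The `determine_pause` implementation is project-specific, but the essence of such a check can be sketched as trailing-window energy detection (the window length and threshold below are arbitrary assumptions, not the omni-mini values):

```python
import numpy as np

def simple_pause_detector(stream, sampling_rate, window_s=1.0, threshold=0.01):
    """Return True if the last `window_s` seconds of audio are near-silent."""
    tail = stream[-int(sampling_rate * window_s):].astype(np.float32)
    if tail.size == 0:
        return False
    if np.issubdtype(stream.dtype, np.integer):
        tail /= np.iinfo(stream.dtype).max  # normalize integer PCM to [-1, 1]
    rms = float(np.sqrt(np.mean(tail ** 2)))
    return rms < threshold
```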
Otherwise, it returns `None` to indicate no changes.\n\nThe implementation of the `determine_pause` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/eb027808c7bfe5179b46d9352e3fa1813a45f7c3/app.py#L98).\n\n", "heading1": "Processing User Audio", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "After processing the user's audio, we need to generate and stream the chatbot's response. Here's our `response` function:\n\n```python\nimport io\nimport tempfile\nfrom pydub import AudioSegment\n\ndef response(state: AppState):\n if not state.pause_detected and not state.started_talking:\n return None, AppState()\n \n audio_buffer = io.BytesIO()\n\n segment = AudioSegment(\n state.stream.tobytes(),\n frame_rate=state.sampling_rate,\n sample_width=state.stream.dtype.itemsize,\n channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),\n )\n segment.export(audio_buffer, format=\"wav\")\n\n with tempfile.NamedTemporaryFile(suffix=\".wav\", delete=False) as f:\n f.write(audio_buffer.getvalue())\n \n state.conversation.append({\"role\": \"user\",\n \"content\": {\"path\": f.name,\n \"mime_type\": \"audio/wav\"}})\n \n output_buffer = b\"\"\n\n for mp3_bytes in speaking(audio_buffer.getvalue()):\n output_buffer += mp3_bytes\n yield mp3_bytes, state\n\n with tempfile.NamedTemporaryFile(suffix=\".mp3\", delete=False) as f:\n f.write(output_buffer)\n \n state.conversation.append({\"role\": \"assistant\",\n \"content\": {\"path\": f.name,\n \"mime_type\": \"audio/mp3\"}})\n yield None, AppState(conversation=state.conversation)\n```\n\nThis function:\n1. Converts the user's audio to a WAV file\n2. Adds the user's message to the conversation history\n3. Generates and streams the chatbot's response using the `speaking` function\n4. Saves the chatbot's response as an MP3 file\n5. 
Adds the chatbot's response to the conversation history\n\nNote: The implementation of the `speaking` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/main/app.py#L116).\n\n", "heading1": "Generating the Response", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Now let's put it all together using Gradio's Blocks API:\n\n```python\nimport gradio as gr\n\ndef start_recording_user(state: AppState):\n    if not state.stopped:\n        return gr.Audio(recording=True)\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            input_audio = gr.Audio(\n                label=\"Input Audio\", sources=\"microphone\", type=\"numpy\"\n            )\n        with gr.Column():\n            chatbot = gr.Chatbot(label=\"Conversation\")\n            output_audio = gr.Audio(label=\"Output Audio\", streaming=True, autoplay=True)\n    state = gr.State(value=AppState())\n\n    stream = input_audio.stream(\n        process_audio,\n        [input_audio, state],\n        [input_audio, state],\n        stream_every=0.5,\n        time_limit=30,\n    )\n    respond = input_audio.stop_recording(\n        response,\n        [state],\n        [output_audio, state]\n    )\n    respond.then(lambda s: s.conversation, [state], [chatbot])\n\n    restart = output_audio.stop(\n        start_recording_user,\n        [state],\n        [input_audio]\n    )\n    cancel = gr.Button(\"Stop Conversation\", variant=\"stop\")\n    cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)), None,\n                 [state, input_audio], cancels=[respond, restart])\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nThis setup creates a user interface with:\n- An input audio component for recording user messages\n- A chatbot component to display the conversation history\n- An output audio component for the chatbot's responses\n- A button to stop and reset the conversation\n\nThe app streams user audio in 0.5-second chunks, processes it, generates responses, and updates the conversation history 
accordingly.\n\n", "heading1": "Building the Gradio App", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "This guide demonstrates how to build a conversational chatbot application using Gradio and the mini omni model. You can adapt this framework to create various audio-based chatbot demos. To see the full application in action, visit the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini\n\nFeel free to experiment with different models, audio processing techniques, or user interface designs to create your own unique conversational AI experiences!", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "First, we'll install the following requirements in our system:\n\n```\nopencv-python\ntorch\ntransformers>=4.43.0\nspaces\n```\n\nThen, we'll download the model from the Hugging Face Hub:\n\n```python\nfrom transformers import RTDetrForObjectDetection, RTDetrImageProcessor\n\nimage_processor = RTDetrImageProcessor.from_pretrained(\"PekingU/rtdetr_r50vd\")\nmodel = RTDetrForObjectDetection.from_pretrained(\"PekingU/rtdetr_r50vd\").to(\"cuda\")\n```\nWe're moving the model to the GPU. We'll be deploying our model to Hugging Face Spaces and running the inference in the [free ZeroGPU cluster](https://huggingface.co/zero-gpu-explorers). \n\n\n", "heading1": "Setting up the Model", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Our inference function will accept a video and a desired confidence threshold.\nObject detection models identify many objects and assign a confidence score to each object. The lower the confidence, the higher the chance of a false positive. 
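Before looking at the full streaming loop, the thresholding itself is easy to see in isolation. The detections below are made-up values, not real RT-DETR outputs:

```python
# Toy illustration of confidence thresholding (scores are invented,
# not actual model outputs).
def filter_detections(detections, conf_threshold):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d["score"] >= conf_threshold]

preds = [
    {"label": "car", "score": 0.92},
    {"label": "dog", "score": 0.41},
    {"label": "bicycle", "score": 0.18},
]
kept = filter_detections(preds, 0.30)
print([d["label"] for d in kept])  # ['car', 'dog'] — the 0.18 detection is dropped
```

In the app itself, the same `conf_threshold` value is handed to the model's `post_process_object_detection` step, which applies this kind of filtering to the raw predictions.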
So we will let our users set the confidence threshold.\n\nOur function will iterate over the frames in the video and run the RT-DETR model over each frame.\nWe will then draw the bounding boxes for each detected object in the frame and save the frame to a new output video.\nThe function will yield each output video in chunks of two seconds.\n\nIn order to keep inference times as low as possible on ZeroGPU (there is a time-based quota),\nwe will halve the original frames-per-second in the output video and resize the input frames to be half the original\nsize before running the model.\n\nThe code for the inference function is below - we'll go over it piece by piece.\n\n```python\nimport spaces\nimport cv2\nfrom PIL import Image\nimport torch\nimport time\nimport numpy as np\nimport uuid\n\nfrom draw_boxes import draw_bounding_boxes\n\nSUBSAMPLE = 2\n\n@spaces.GPU\ndef stream_object_detection(video, conf_threshold):\n    cap = cv2.VideoCapture(video)\n\n    # This means we will output mp4 videos\n    video_codec = cv2.VideoWriter_fourcc(*\"mp4v\")  # type: ignore\n    fps = int(cap.get(cv2.CAP_PROP_FPS))\n\n    desired_fps = fps // SUBSAMPLE\n    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2\n    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) // 2\n\n    iterating, frame = cap.read()\n\n    n_frames = 0\n\n    # Use UUID to create a unique video file\n    output_video_name = f\"output_{uuid.uuid4()}.mp4\"\n\n    # Output video\n    output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore\n    batch = []\n\n    while iterating:\n        frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)\n        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n        if n_frames % SUBSAMPLE == 0:\n            batch.append(frame)\n        if len(batc", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "        frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)\n        frame = 
cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n        if n_frames % SUBSAMPLE == 0:\n            batch.append(frame)\n        if len(batch) == 2 * desired_fps:\n            inputs = image_processor(images=batch, return_tensors=\"pt\").to(\"cuda\")\n\n            with torch.no_grad():\n                outputs = model(**inputs)\n\n            boxes = image_processor.post_process_object_detection(\n                outputs,\n                target_sizes=torch.tensor([(height, width)] * len(batch)),\n                threshold=conf_threshold)\n\n            for i, (array, box) in enumerate(zip(batch, boxes)):\n                pil_image = draw_bounding_boxes(Image.fromarray(array), box, model, conf_threshold)\n                frame = np.array(pil_image)\n                # Convert RGB to BGR\n                frame = frame[:, :, ::-1].copy()\n                output_video.write(frame)\n\n            batch = []\n            output_video.release()\n            yield output_video_name\n            output_video_name = f\"output_{uuid.uuid4()}.mp4\"\n            output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore\n\n        iterating, frame = cap.read()\n        n_frames += 1\n```\n\n1. **Reading from the Video**\n\nOne of the industry standards for creating videos in Python is OpenCV, so we will use it in this app.\n\nThe `cap` variable is how we will read from the input video. Whenever we call `cap.read()`, we are reading the next frame in the video.\n\nIn order to stream video in Gradio, we need to yield a different video file for each \"chunk\" of the output video.\nWe create the next video file to write to with the `output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))` line. The `video_codec` is how we specify the type of video file. Only \"mp4\" and \"ts\" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame i", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "dth, height))` line. The `video_codec` is how we specify the type of video file. 
Only \"mp4\" and \"ts\" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame in the video, we will resize it to be half the size. OpenCV reads files in `BGR` format, so we will convert to the `RGB` format that transformers expects. That's what the first two lines of the while loop are doing.\n\nWe take every other frame and add it to a `batch` list so that the output video is half the original FPS. When the batch covers two seconds of video, we will run the model. The two-second threshold was chosen to keep the processing time of each batch small enough so that video is smoothly displayed in the server while not requiring too many separate forward passes. In order for video streaming to work properly in Gradio, each batch should cover at least 1 second of video.\n\nWe run the forward pass of the model and then use the `post_process_object_detection` method of the model to scale the detected bounding boxes to the size of the input frame.\n\nWe make use of a custom function to draw the bounding boxes (source [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection/blob/main/draw_boxes.py#L14)). We then have to convert from `RGB` to `BGR` before writing back to the output video.\n\nOnce we have finished processing the batch, we create a new output video file for the next batch.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "The UI code is pretty similar to other kinds of Gradio apps. \nWe'll use a standard two-column layout so that users can see the input and output videos side by side.\n\nIn order for streaming to work, we have to set `streaming=True` in the output video. Setting the video\nto autoplay is not necessary but it's a better experience for users.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as app:\n    gr.HTML(\n        \"\"\"\n

\n        <h1 style='text-align: center'>Video Object Detection with RT-DETR</h1>\n

\n    \"\"\")\n    with gr.Row():\n        with gr.Column():\n            video = gr.Video(label=\"Video Source\")\n            conf_threshold = gr.Slider(\n                label=\"Confidence Threshold\",\n                minimum=0.0,\n                maximum=1.0,\n                step=0.05,\n                value=0.30,\n            )\n        with gr.Column():\n            output_video = gr.Video(label=\"Processed Video\", streaming=True, autoplay=True)\n\n    video.upload(\n        fn=stream_object_detection,\n        inputs=[video, conf_threshold],\n        outputs=[output_video],\n    )\n```\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "You can check out our demo hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection). \n\nIt is also embedded on this page below:\n\n$demo_rt-detr-object-detection", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Modern voice applications should feel natural and responsive, moving beyond the traditional \"click-to-record\" pattern. By combining Groq's fast inference capabilities with automatic speech detection, we can create a more intuitive interaction model where users can simply start talking whenever they want to engage with the AI.\n\n> Credits: VAD and Gradio code inspired by [WillHeld's Diva-audio-chat](https://huggingface.co/spaces/WillHeld/diva-audio-chat/tree/main).\n\nIn this tutorial, you will learn how to create a multimodal Gradio and Groq app that has automatic speech detection. You can also watch the full video tutorial which includes a demo of the application:\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "Many voice apps currently work by the user clicking record, speaking, then stopping the recording. 
While this can be a powerful demo, the most natural mode of interaction with voice requires the app to dynamically detect when the user is speaking, so they can talk back and forth without having to continually click a record button. \n\nCreating a natural interaction with voice and text requires a dynamic and low-latency response. Thus, we need both automatic voice detection and fast inference. With @ricky0123/vad-web powering speech detection and Groq powering the LLM, both of these requirements are met. Groq provides a lightning fast response, and Gradio allows for easy creation of impressively functional apps.\n\nThis tutorial shows you how to build a calorie tracking app where you speak to an AI that automatically detects when you start and stop your response, and provides its own text response back to guide you with questions that allow it to give a calorie estimate of your last meal.\n\n", "heading1": "Background", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "- **Gradio**: Provides the web interface and audio handling capabilities\n- **@ricky0123/vad-web**: Handles voice activity detection\n- **Groq**: Powers fast LLM inference for natural conversations\n- **Whisper**: Transcribes speech to text\n\nSetting Up the Environment\n\nFirst, let\u2019s install and import our essential libraries and set up a client for using the Groq API. 
Here's how to do it:\n\n`requirements.txt`\n```\ngradio\ngroq\nnumpy\nsoundfile\nlibrosa\nspaces\nxxhash\ndatasets\n```\n\n`app.py`\n```python\nimport groq\nimport gradio as gr\nimport soundfile as sf\nfrom dataclasses import dataclass, field\nimport os\n\n# Initialize the Groq client securely\napi_key = os.environ.get(\"GROQ_API_KEY\")\nif not api_key:\n    raise ValueError(\"Please set the GROQ_API_KEY environment variable.\")\nclient = groq.Client(api_key=api_key)\n```\n\nHere, we're pulling in key libraries to interact with the Groq API, build a sleek UI with Gradio, and handle audio data. We're reading the Groq API key from an environment variable, which is a security best practice that avoids leaking the key.\n\n---\n\nState Management for Seamless Conversations\n\nWe need a way to keep track of our conversation history, so the chatbot remembers past interactions, and manage other states like whether recording is currently active. To do this, let's create an `AppState` class:\n\n```python\nfrom typing import Any\n\n@dataclass\nclass AppState:\n    conversation: list = field(default_factory=list)\n    stopped: bool = False\n    model_outs: Any = None\n```\n\nOur `AppState` class is a handy tool for managing conversation history and tracking whether recording is on or off. Each instance will have its own fresh list of conversations, making sure chat history is isolated to each session.\n\n---\n\nTranscribing Audio with Whisper on Groq\n\nNext, we'll create a function to transcribe the user's audio input into text using Whisper, a powerful transcription model hosted on Groq. 
This transcription will also help us determine whether there's meani", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "e'll create a function to transcribe the user's audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there's meaningful speech in the input. Here's how:\n\n```python\ndef transcribe_audio(client, file_name):\n    if file_name is None:\n        return None\n\n    try:\n        with open(file_name, \"rb\") as audio_file:\n            response = client.audio.transcriptions.with_raw_response.create(\n                model=\"whisper-large-v3-turbo\",\n                file=(\"audio.wav\", audio_file),\n                response_format=\"verbose_json\",\n            )\n        completion = process_whisper_response(response.parse())\n        return completion\n    except Exception as e:\n        print(f\"Error in transcription: {e}\")\n        return f\"Error in transcription: {str(e)}\"\n```\n\nThis function opens the audio file and sends it to Groq's Whisper model for transcription, requesting detailed JSON output. The `verbose_json` response format is needed to get the information used to determine whether speech was included in the audio. We also handle any potential errors so our app doesn't fully crash if there's an issue with the API request. 
\n\n```python\ndef process_whisper_response(completion):\n    \"\"\"\n    Process Whisper transcription response and return text or None based on no_speech_prob.\n\n    Args:\n        completion: Whisper transcription response object\n\n    Returns:\n        str or None: Transcribed text if no_speech_prob <= 0.7, otherwise None\n    \"\"\"\n    if completion.segments and len(completion.segments) > 0:\n        no_speech_prob = completion.segments[0].get('no_speech_prob', 0)\n        print(\"No speech prob:\", no_speech_prob)\n\n        if no_speech_prob > 0.7:\n            return None\n\n        return completion.text.strip()\n\n    return None\n```\n\nWe also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was j", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ext.strip()\n\n    return None\n```\n\nWe also need to interpret the audio data response. The `process_whisper_response` function takes the resulting completion from Whisper and checks if the audio was just background noise or had actual speaking that was transcribed. It uses a threshold of 0.7 to interpret the `no_speech_prob`, and will return None if there was no speech. Otherwise, it will return the text transcript of the conversational response from the human.\n\n---\n\nAdding Conversational Intelligence with LLM Integration\n\nOur chatbot needs to provide intelligent, friendly responses that flow naturally. We'll use a Groq-hosted Llama-3.2 for this:\n\n```python\ndef generate_chat_completion(client, history):\n    messages = []\n    messages.append(\n        {\n            \"role\": \"system\",\n            \"content\": \"In conversation with the user, ask questions to estimate and provide (1) total calories, (2) protein, carbs, and fat in grams, (3) fiber and sugar content. Only ask *one question at a time*. 
Be conversational and natural.\",\n        }\n    )\n\n    for message in history:\n        messages.append(message)\n\n    try:\n        completion = client.chat.completions.create(\n            model=\"llama-3.2-11b-vision-preview\",\n            messages=messages,\n        )\n        return completion.choices[0].message.content\n    except Exception as e:\n        return f\"Error in generating chat completion: {str(e)}\"\n```\n\nWe're defining a system prompt to guide the chatbot's behavior, ensuring it asks one question at a time and keeps things conversational. This setup also includes error handling to ensure the app gracefully manages any issues.\n\n---\n\nVoice Activity Detection for Hands-Free Interaction\n\nTo make our chatbot hands-free, we'll add Voice Activity Detection (VAD) to automatically detect when someone starts or stops speaking. Here's how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n  const script1 = document.createElement(\"script\");\n  scrip", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ly detect when someone starts or stops speaking. 
Here's how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n  const script1 = document.createElement(\"script\");\n  script1.src = \"https://cdn.jsdelivr.net/npm/onnxruntime-web@1.14.0/dist/ort.js\";\n  document.head.appendChild(script1);\n  const script2 = document.createElement(\"script\");\n  script2.onload = async () => {\n    console.log(\"vad loaded\");\n    var record = document.querySelector('.record-button');\n    record.textContent = \"Just Start Talking!\";\n\n    const myvad = await vad.MicVAD.new({\n      onSpeechStart: () => {\n        var record = document.querySelector('.record-button');\n        var player = document.querySelector('streaming-out');\n        if (record != null && (player == null || player.paused)) {\n          record.click();\n        }\n      },\n      onSpeechEnd: (audio) => {\n        var stop = document.querySelector('.stop-button');\n        if (stop != null) {\n          stop.click();\n        }\n      }\n    });\n    myvad.start();\n  };\n  script2.src = \"https://cdn.jsdelivr.net/npm/@ricky0123/vad-web@0.0.7/dist/bundle.min.js\";\n  document.head.appendChild(script2);\n}\n```\n\nThis script loads our VAD model and sets up functions to start and stop recording automatically. When the user starts speaking, it triggers the recording, and when they stop, it ends the recording. Note that `script2` must be appended to the document for its `onload` handler to fire.\n\n---\n\nBuilding a User Interface with Gradio\n\nNow, let's create an intuitive and visually appealing user interface with Gradio. 
This interface will include an audio input for capturing voice, a chat window for displaying responses, and state management to keep things synchronized.\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row():\n        input_audio = gr.Audio(\n            label=\"Input Audio\",\n            sources=[\"microphone\"],\n            type=\"numpy\",\n            streaming=False,\n            waveform_options=gr.WaveformOptions(waveform_color=\"#B83A4B\"),\n        )\n    with gr.Row():\n        chatbot = gr.Chatbot(label=\"Conversation\")\n    state = g", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "\",\n            streaming=False,\n            waveform_options=gr.WaveformOptions(waveform_color=\"#B83A4B\"),\n        )\n    with gr.Row():\n        chatbot = gr.Chatbot(label=\"Conversation\")\n    state = gr.State(value=AppState())\ndemo.launch(theme=theme, js=js)\n```\n\nIn this code block, we're using Gradio's `Blocks` API to create an interface with an audio input, a chat display, and an application state manager. The color customization for the waveform adds a nice visual touch.\n\n---\n\nHandling Recording and Responses\n\nFinally, let's link the recording and response components to ensure the app reacts smoothly to user inputs and provides responses in real-time.\n\n```python\n    stream = input_audio.start_recording(\n        process_audio,\n        [input_audio, state],\n        [input_audio, state],\n    )\n    respond = input_audio.stop_recording(\n        response, [state, input_audio], [state, chatbot]\n    )\n```\n\nThese lines set up event listeners for starting and stopping the recording, processing the audio input, and generating responses. By linking these events, we create a cohesive experience where users can simply talk, and the chatbot handles the rest.\n\n---\n\n", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "1. 
When you open the app, the VAD system automatically initializes and starts listening for speech\n2. As soon as you start talking, it triggers the recording automatically\n3. When you stop speaking, the recording ends and:\n - The audio is transcribed using Whisper\n - The transcribed text is sent to the LLM\n - The LLM generates a response about calorie tracking\n - The response is displayed in the chat interface\n4. This creates a natural back-and-forth conversation where you can simply talk about your meals and get instant feedback on nutritional content\n\nThis app demonstrates how to create a natural voice interface that feels responsive and intuitive. By combining Groq's fast inference with automatic speech detection, we've eliminated the need for manual recording controls while maintaining high-quality interactions. The result is a practical calorie tracking assistant that users can simply talk to as naturally as they would to a human nutritionist.\n\nLink to GitHub repository: [Groq Gradio Basics](https://github.com/bklieger-groq/gradio-groq-basics/tree/main/calorie-tracker)", "heading1": "Summary", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "Adding examples to an Interface is as easy as providing a list of lists to the `examples`\nkeyword argument.\nEach sublist is a data sample, where each element corresponds to an input of the prediction function.\nThe inputs must be ordered in the same order as the prediction function expects them.\n\nIf your interface only has one input component, then you can provide your examples as a regular list instead of a list of lists.\n\nLoading Examples from a Directory\n\nYou can also specify a path to a directory containing your examples. If your Interface takes only a single file-type input, e.g. 
an image classifier, you can simply pass a directory filepath to the `examples=` argument, and the `Interface` will load the images in the directory as examples.\nIn the case of multiple inputs, this directory must\ncontain a log.csv file with the example values.\nIn the context of the calculator demo, we can set `examples='/demo/calculator/examples'` and in that directory we include the following `log.csv` file:\n\n```csv\nnum,operation,num2\n5,\"add\",3\n4,\"divide\",2\n5,\"multiply\",3\n```\n\nThis can be helpful when browsing flagged data. Simply point to the flagged directory and the `Interface` will load the examples from the flagged data.\n\nProviding Partial Examples\n\nSometimes your app has many input components, but you would only like to provide examples for a subset of them. In order to exclude some inputs from the examples, pass `None` for all data samples corresponding to those particular components.\n\n", "heading1": "Providing Examples", "source_page_url": "https://gradio.app/guides/more-on-examples", "source_page_title": "Building Interfaces - More On Examples Guide"}, {"text": "You may wish to provide some cached examples of your model for users to quickly try out, in case your model takes a while to run normally.\nIf `cache_examples=True`, your Gradio app will run all of the examples and save the outputs when you call the `launch()` method. This data will be saved in a directory called `gradio_cached_examples` in your working directory by default. You can also set this directory with the `GRADIO_EXAMPLES_CACHE` environment variable, which can be either an absolute path or a relative path to your working directory.\n\nWhenever a user clicks on an example, the output will automatically be populated in the app now, using data from this cached directory instead of actually running the function. This is useful so users can quickly try out your model without adding any load!\n\nAlternatively, you can set `cache_examples=\"lazy\"`. 
This means that each particular example will only get cached after it is first used (by any user) in the Gradio app. This is helpful if your prediction function is long-running and you do not want to wait a long time for your Gradio app to start.\n\nKeep in mind once the cache is generated, it will not be updated automatically in future launches. If the examples or function logic change, delete the cache folder to clear the cache and rebuild it with another `launch()`.\n", "heading1": "Caching examples", "source_page_url": "https://gradio.app/guides/more-on-examples", "source_page_title": "Building Interfaces - More On Examples Guide"}, {"text": "If the state is something that should be accessible to all function calls and all users, you can create a variable outside the function call and access it inside the function. For example, you may load a large model outside the function and use it inside the function so that every function call does not need to reload the model.\n\n$code_score_tracker\n\nIn the code above, the `scores` array is shared between all users. If multiple users are accessing this demo, their scores will all be added to the same list, and the returned top 3 scores will be collected from this shared reference.\n\n", "heading1": "Global State", "source_page_url": "https://gradio.app/guides/interface-state", "source_page_title": "Building Interfaces - Interface State Guide"}, {"text": "Another type of data persistence Gradio supports is session state, where data persists across multiple submits within a page session. However, data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:\n\n1. Pass in an extra parameter into your function, which represents the state of the interface.\n2. At the end of the function, return the updated value of the state as an extra return value.\n3. 
Add the `'state'` input and `'state'` output components when creating your `Interface`\n\nHere's a simple app to illustrate session state - this app simply stores users' previous submissions and displays them back to the user:\n\n\n$code_interface_state\n$demo_interface_state\n\n\nNotice how the state persists across submits within each page, but if you load this demo in another tab (or refresh the page), the demos will not share chat history. Here, we could not store the submission history in a global variable, otherwise the submission history would then get jumbled between different users.\n\nThe initial value of the `State` is `None` by default. If you pass a parameter to the `value` argument of `gr.State()`, it is used as the default value of the state instead. \n\nNote: the `Interface` class only supports a single session state variable (though it can be a list with multiple elements). For more complex use cases, you can use Blocks, [which supports multiple `State` variables](/guides/state-in-blocks/). Alternatively, if you are building a chatbot that maintains user state, consider using the `ChatInterface` abstraction, [which manages state automatically](/guides/creating-a-chatbot-fast).\n", "heading1": "Session State", "source_page_url": "https://gradio.app/guides/interface-state", "source_page_title": "Building Interfaces - Interface State Guide"}, {"text": "To create a demo that has both the input and the output components, you simply need to set the values of the `inputs` and `outputs` parameter in `Interface()`. Here's an example demo of a simple image filter:\n\n$code_sepia_filter\n$demo_sepia_filter\n\n", "heading1": "Standard demos", "source_page_url": "https://gradio.app/guides/four-kinds-of-interfaces", "source_page_title": "Building Interfaces - Four Kinds Of Interfaces Guide"}, {"text": "What about demos that only contain outputs? In order to build such a demo, you simply set the value of the `inputs` parameter in `Interface()` to `None`. 
Here's an example demo of a mock image generation model:\n\n$code_fake_gan_no_input\n$demo_fake_gan_no_input\n\n", "heading1": "Output-only demos", "source_page_url": "https://gradio.app/guides/four-kinds-of-interfaces", "source_page_title": "Building Interfaces - Four Kinds Of Interfaces Guide"}, {"text": "Similarly, to create a demo that only contains inputs, set the value of `outputs` parameter in `Interface()` to be `None`. Here's an example demo that saves any uploaded image to disk:\n\n$code_save_file_no_output\n$demo_save_file_no_output\n\n", "heading1": "Input-only demos", "source_page_url": "https://gradio.app/guides/four-kinds-of-interfaces", "source_page_title": "Building Interfaces - Four Kinds Of Interfaces Guide"}, {"text": "A demo that has a single component as both the input and the output. It can simply be created by setting the values of the `inputs` and `outputs` parameter as the same component. Here's an example demo of a text generation model:\n\n$code_unified_demo_text_generation\n$demo_unified_demo_text_generation\n\nIt may be the case that none of the 4 cases fulfill your exact needs. In this case, you need to use the `gr.Blocks()` approach!", "heading1": "Unified demos", "source_page_url": "https://gradio.app/guides/four-kinds-of-interfaces", "source_page_title": "Building Interfaces - Four Kinds Of Interfaces Guide"}, {"text": "Gradio includes more than 30 pre-built components (as well as many [community-built _custom components_](https://www.gradio.app/custom-components/gallery)) that can be used as inputs or outputs in your demo. These components correspond to common data types in machine learning and data science, e.g. the `gr.Image` component is designed to handle input or output images, the `gr.Label` component displays classification labels and probabilities, the `gr.LinePlot` component displays line plots, and so on. 
\n\n", "heading1": "Gradio Components", "source_page_url": "https://gradio.app/guides/the-interface-class", "source_page_title": "Building Interfaces - The Interface Class Guide"}, {"text": "We used the default versions of the `gr.Textbox` and `gr.Slider`, but what if you want to change how the UI components look or behave?\n\nLet's say you want to customize the slider to have values from 1 to 10, with a default of 2. And you want to customize the output text field \u2014 you want it to be larger and have a label.\n\nIf you use the actual classes for `gr.Textbox` and `gr.Slider` instead of the string shortcuts, you have access to much more customizability through component attributes.\n\n$code_hello_world_2\n$demo_hello_world_2\n\n", "heading1": "Components Attributes", "source_page_url": "https://gradio.app/guides/the-interface-class", "source_page_title": "Building Interfaces - The Interface Class Guide"}, {"text": "Suppose you had a more complex function, with multiple outputs as well. In the example below, we define a function that takes a string, boolean, and number, and returns a string and number. \n\n$code_hello_world_3\n$demo_hello_world_3\n\nJust as each component in the `inputs` list corresponds to one of the parameters of the function, in order, each component in the `outputs` list corresponds to one of the values returned by the function, in order.\n\n", "heading1": "Multiple Input and Output Components", "source_page_url": "https://gradio.app/guides/the-interface-class", "source_page_title": "Building Interfaces - The Interface Class Guide"}, {"text": "Gradio supports many types of components, such as `Image`, `DataFrame`, `Video`, or `Label`. Let's try an image-to-image function to get a feel for these!\n\n$code_sepia_filter\n$demo_sepia_filter\n\nWhen using the `Image` component as input, your function will receive a NumPy array with the shape `(height, width, 3)`, where the last dimension represents the RGB values. 
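A function over such an array might look like the following sketch (plain NumPy; the guide's actual sepia demo may differ in its details):\n\n```python\nimport numpy as np\n\ndef sepia(input_img):\n    # input_img: (height, width, 3) RGB array with values in [0, 1]\n    sepia_filter = np.array([\n        [0.393, 0.769, 0.189],\n        [0.349, 0.686, 0.168],\n        [0.272, 0.534, 0.131],\n    ])\n    sepia_img = input_img @ sepia_filter.T  # mix the RGB channels\n    return sepia_img.clip(0, 1)             # keep values in range\n```\n\nWired into an interface, this would be something along the lines of `gr.Interface(sepia, gr.Image(), "image")`.\n\n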
We'll return an image as well in the form of a NumPy array. \n\nGradio handles the preprocessing and postprocessing to convert images to NumPy arrays and vice versa. You can also control the preprocessing performed with the `type=` keyword argument. For example, if you wanted your function to take a file path to an image instead of a NumPy array, the input `Image` component could be written as:\n\n```python\ngr.Image(type=\"filepath\")\n```\n\nYou can read more about the built-in Gradio components and how to customize them in the [Gradio docs](https://gradio.app/docs).\n\n", "heading1": "An Image Example", "source_page_url": "https://gradio.app/guides/the-interface-class", "source_page_title": "Building Interfaces - The Interface Class Guide"}, {"text": "You can provide example data that a user can easily load into `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs#components).\n\n$code_calculator\n$demo_calculator\n\nYou can load a large dataset into the examples to browse and interact with the dataset through Gradio. 
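Concretely, for a calculator-style interface with three input components, the nested list could look like this (the function and values are illustrative, not the guide's `$code_calculator` demo):\n\n```python\ndef calculator(num1, operation, num2):\n    # Minimal stand-in prediction function.\n    if operation == "add":\n        return num1 + num2\n    if operation == "subtract":\n        return num1 - num2\n    raise ValueError(f"unknown operation: {operation}")\n\n# One sublist per sample; one element per input component, in order.\nexamples = [\n    [5, "add", 3],\n    [4, "subtract", 1],\n]\n# Passed to the constructor, e.g. gr.Interface(calculator, [...], "number", examples=examples)\n```\n\n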
The examples will be automatically paginated (you can configure this through the `examples_per_page` argument of `Interface`).\n\nContinue learning about examples in the [More On Examples](https://gradio.app/guides/more-on-examples) guide.\n\n", "heading1": "Example Inputs", "source_page_url": "https://gradio.app/guides/the-interface-class", "source_page_title": "Building Interfaces - The Interface Class Guide"}, {"text": "In the previous example, you may have noticed the `title=` and `description=` keyword arguments in the `Interface` constructor that help users understand your app.\n\nThere are three arguments in the `Interface` constructor to specify where this content should go:\n\n- `title`: which accepts text and can display it at the very top of the interface, and also becomes the page title.\n- `description`: which accepts text, markdown or HTML and places it right under the title.\n- `article`: which also accepts text, markdown or HTML and places it below the interface.\n\n![annotated](https://github.com/gradio-app/gradio/blob/main/guides/assets/annotated.png?raw=true)\n\nAnother useful keyword argument is `label=`, which is present in every `Component`. This modifies the label text at the top of each `Component`. You can also add the `info=` keyword argument to form elements like `Textbox` or `Radio` to provide further information on their usage.\n\n```python\ngr.Number(label='Age', info='In years, must be greater than 0')\n```\n\n", "heading1": "Descriptive Content", "source_page_url": "https://gradio.app/guides/the-interface-class", "source_page_title": "Building Interfaces - The Interface Class Guide"}, {"text": "If your prediction function takes many inputs, you may want to hide some of them within a collapsed accordion to avoid cluttering the UI. The `Interface` class takes an `additional_inputs` argument which is similar to `inputs` but any input components included here are not visible by default. 
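A hedged sketch of this pattern (the `generate` stub and the component choices are illustrative):\n\n```python\nimport gradio as gr\n\ndef generate(prompt, temperature=0.7, max_tokens=64):\n    # Stub prediction function; the additional inputs arrive after the\n    # standard inputs, in order.\n    return f"[temperature={temperature}, max_tokens={max_tokens}] {prompt}"\n\ndemo = gr.Interface(\n    fn=generate,\n    inputs=gr.Textbox(label="Prompt"),\n    outputs=gr.Textbox(),\n    additional_inputs=[\n        gr.Slider(0, 1, value=0.7, label="Temperature"),\n        gr.Number(value=64, label="Max tokens"),\n    ],\n    # A plain string label also works here; gr.Accordion gives more control.\n    additional_inputs_accordion=gr.Accordion("Advanced options", open=False),\n)\n```\n\n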
The user must click on the accordion to show these components. The additional inputs are passed into the prediction function, in order, after the standard inputs.\n\nYou can customize the appearance of the accordion by using the optional `additional_inputs_accordion` argument, which accepts a string (in which case, it becomes the label of the accordion), or an instance of the `gr.Accordion()` class (e.g. this lets you control whether the accordion is open or closed by default).\n\nHere's an example:\n\n$code_interface_with_additional_inputs\n$demo_interface_with_additional_inputs\n\n", "heading1": "Additional Inputs within an Accordion", "source_page_url": "https://gradio.app/guides/the-interface-class", "source_page_title": "Building Interfaces - The Interface Class Guide"}, {"text": "You can make interfaces automatically refresh by setting `live=True` in the interface. Now the interface will recalculate as soon as the user input changes.\n\n$code_calculator_live\n$demo_calculator_live\n\nNote there is no submit button, because the interface resubmits automatically on change.\n\n", "heading1": "Live Interfaces", "source_page_url": "https://gradio.app/guides/reactive-interfaces", "source_page_title": "Building Interfaces - Reactive Interfaces Guide"}, {"text": "Some components have a \"streaming\" mode, such as `Audio` component in microphone mode, or the `Image` component in webcam mode. 
Streaming means data is sent continuously to the backend and the `Interface` function is continuously being rerun.\n\nThe difference between `gr.Audio(source='microphone')` and `gr.Audio(source='microphone', streaming=True)`, when both are used in `gr.Interface(live=True)`, is that the first `Component` will automatically submit data and run the `Interface` function when the user stops recording, whereas the second `Component` will continuously send data and run the `Interface` function _during_ recording.\n\nHere is example code of streaming images from the webcam.\n\n$code_stream_frames\n\nStreaming can also be done in an output component. A `gr.Audio(streaming=True)` output component can take a stream of audio data yielded piece-wise by a generator function and combines them into a single audio file. For a detailed example, see our guide on performing [automatic speech recognition](/guides/real-time-speech-recognition) with Gradio.\n", "heading1": "Streaming Components", "source_page_url": "https://gradio.app/guides/reactive-interfaces", "source_page_title": "Building Interfaces - Reactive Interfaces Guide"}, {"text": "Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.\n\nLuckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. 
Perfect to use with our client!\n\nOpen a new Python file, say `main.py`, and start by importing the `Client` class from `gradio_client` and connecting it to this Space:\n\n```py\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/music-separation\")\n\ndef acapellify(audio_path):\n    result = client.predict(handle_file(audio_path), api_name=\"/predict\")\n    return result[0]\n```\n\nThat's all the code that's needed -- notice that the API endpoint returns two audio files (one without the music, and one with just the music) in a list, and so we just return the first element of the list.\n\n---\n\n**Note**: since this is a public Space, there might be other users using this Space as well, which might result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) and create a private Space that only you will have access to and bypass the queue. To do that, simply replace the first two lines above with:\n\n```py\nfrom gradio_client import Client\n\nclient = Client.duplicate(\"abidlabs/music-separation\", token=YOUR_HF_TOKEN)\n```\n\nEverything else remains the same!\n\n---\n\nNow, of course, we are working with video files, so we first need to extract the audio from the video files. For this, we will be using the `ffmpeg` library, which does a lot of heavy lifting when it comes to working with audio and video files. The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module:\n\nOur video proc", "heading1": "Step 1: Write the Video Processing Function", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "f heavy lifting when it comes to working with audio and video files. 
The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module:\n\nOur video processing workflow will consist of three steps:\n\n1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.\n2. Then, we pass the audio file through the `acapellify()` function above.\n3. Finally, we combine the new audio with the original video to produce a final acapellified video.\n\nHere's the complete code in Python, which you can add to your `main.py` file:\n\n```python\nimport os\nimport subprocess\n\ndef process_video(video_path):\n    old_audio = os.path.basename(video_path).split(\".\")[0] + \".m4a\"\n    subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio])\n\n    new_audio = acapellify(old_audio)\n\n    new_video = f\"acap_{video_path}\"\n    subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f\"static/{new_video}\"])\n    return new_video\n```\n\nYou can read up on [ffmpeg documentation](https://ffmpeg.org/ffmpeg.html) if you'd like to understand all of the command line parameters, as they are beyond the scope of this tutorial.\n\n", "heading1": "Step 1: Write the Video Processing Function", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Next up, we'll create a simple FastAPI app. If you haven't used FastAPI before, check out [the great FastAPI docs](https://fastapi.tiangolo.com/). 
Otherwise, this basic template, which we add to `main.py`, will look pretty familiar:\n\n```python\nimport os\nfrom fastapi import FastAPI, File, UploadFile, Request\nfrom fastapi.responses import HTMLResponse, RedirectResponse\nfrom fastapi.staticfiles import StaticFiles\nfrom fastapi.templating import Jinja2Templates\n\napp = FastAPI()\nos.makedirs(\"static\", exist_ok=True)\napp.mount(\"/static\", StaticFiles(directory=\"static\"), name=\"static\")\ntemplates = Jinja2Templates(directory=\"templates\")\n\nvideos = []\n\n@app.get(\"/\", response_class=HTMLResponse)\nasync def home(request: Request):\n    return templates.TemplateResponse(\n        \"home.html\", {\"request\": request, \"videos\": videos})\n\n@app.post(\"/uploadvideo/\")\nasync def upload_video(video: UploadFile = File(...)):\n    video_path = video.filename\n    with open(video_path, \"wb+\") as fp:\n        fp.write(video.file.read())\n\n    new_video = process_video(video.filename)\n    videos.append(new_video)\n    return RedirectResponse(url='/', status_code=303)\n```\n\nIn this example, the FastAPI app has two routes: `/` and `/uploadvideo/`.\n\nThe `/` route returns an HTML template that displays a gallery of all uploaded videos.\n\nThe `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object, which represents the uploaded video file. The video file is \"acapellified\" via the `process_video()` method, and the output video is stored in a list which stores all of the uploaded videos in memory.\n\nNote that this is a very basic example and if this were a production app, you would need to add more logic to handle file storage, user authentication, and security considerations.\n\n", "heading1": "Step 2: Create a FastAPI app (Backend Routes)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Finally, we create the frontend of our web application. 
First, we create a folder called `templates` in the same directory as `main.py`. We then create a template, `home.html`, inside the `templates` folder. Here is the resulting file structure:\n\n```csv\n\u251c\u2500\u2500 main.py\n\u251c\u2500\u2500 templates\n\u2502 \u2514\u2500\u2500 home.html\n```\n\nWrite the following as the contents of `home.html`:\n\n```html\n<!DOCTYPE html> <html> <head> <title>Video Gallery</title>\n<style> body { font-family: sans-serif; margin: 0; padding: 0;\nbackground-color: #f5f5f5; } h1 { text-align: center; margin-top: 30px;\nmargin-bottom: 20px; } .gallery { display: flex; flex-wrap: wrap;\njustify-content: center; gap: 20px; padding: 20px; } .video { border: 2px solid\n#ccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow:\nhidden; width: 300px; margin-bottom: 20px; } .video video { width: 100%; height:\n200px; } .video p { text-align: center; margin: 10px 0; } form { margin-top:\n20px; text-align: center; } input[type=\"file\"] { display: none; } .upload-btn {\ndisplay: inline-block; background-color: #3498db; color: #fff; padding: 10px\n20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; }\n.upload-btn:hover { background-color: #2980b9; } .file-name { margin-left: 10px;\n} </style> </head> <body> <h1>Video Gallery</h1> {% if videos %}\n<div class=\"gallery\"> {% for video in videos %} <div class=\"video\">\n<video controls> <source src=\"{{ url_for('static', path=video) }}\"\ntype=\"video/mp4\"> Your browser does not support the video tag. 
</video>\n<p>{{ video }}</p> </div> {% endfor %} </div> {% else %} <p>No\nvideos uploaded yet.</p> {% endif %} <form action=\"/uploadvideo/\"\nmethod=\"post\" enctype=\"multipart/form-data\"> <label for=\"video-upload\"\nclass=\"upload-btn\">Choose video file</label> <input type=\"file\"\nname=\"video\" id=\"video-upload\"> <span class=\"file-name\"></span> <button\ntype=\"submit\" class=\"upload-btn\">Upload</butto", "heading1": "Step 3: Create a FastAPI app (Frontend Template)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "class=\"upload-btn\">Choose video file</label> <input type=\"file\"\nname=\"video\" id=\"video-upload\"> <span class=\"file-name\"></span> <button\ntype=\"submit\" class=\"upload-btn\">Upload</button> </form> <script> //\nDisplay selected file name in the form const fileUpload =\ndocument.getElementById(\"video-upload\"); const fileName =\ndocument.querySelector(\".file-name\"); fileUpload.addEventListener(\"change\", (e)\n=> { fileName.textContent = e.target.files[0].name; }); </script> </body>\n</html>\n```\n\n", "heading1": "Step 3: Create a FastAPI app (Frontend Template)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Finally, we are ready to run our FastAPI app, powered by the Gradio Python Client!\n\nOpen up a terminal and navigate to the directory containing `main.py`. 
Then run the following command in the terminal:\n\n```bash\n$ uvicorn main:app\n```\n\nYou should see an output that looks like this:\n\n```csv\nLoaded as API: https://abidlabs-music-separation.hf.space \u2714\nINFO: Started server process [1360]\nINFO: Waiting for application startup.\nINFO: Application startup complete.\nINFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)\n```\n\nAnd that's it! Start uploading videos and you'll get some \"acapellified\" videos in response (might take seconds to minutes to process depending on the length of your videos). Here's how the UI looks after uploading two videos:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png)\n\nIf you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).\n", "heading1": "Step 4: Run your FastAPI app", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "If you already have a recent version of `gradio`, then the `gradio_client` is included as a dependency. 
But note that this documentation reflects the latest version of the `gradio_client`, so upgrade if you're not sure!\n\nThe lightweight `gradio_client` package can be installed from pip (or pip3) and is tested to work with **Python versions 3.10 or higher**:\n\n```bash\n$ pip install --upgrade gradio_client\n```\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Start by instantiating a `Client` object and connecting it to a Gradio app that is running on Hugging Face Spaces.\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/en2fr\")  # a Space that translates from English to French\n```\n\nYou can also connect to private Spaces by passing in your HF token with the `token` parameter. You can get your HF token here: https://huggingface.co/settings/tokens\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/my-private-space\", token=\"...\")\n```\n\n\n", "heading1": "Connecting to a Gradio App on Hugging Face Spaces", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. 
For unlimited usage of a Space, simply duplicate the Space to create a private Space,\nand then use it to make as many requests as you'd like!\n\nThe `gradio_client` includes a class method: `Client.duplicate()` to make this process simple (you'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens) or be logged in using the Hugging Face CLI):\n\n```python\nimport os\nfrom gradio_client import Client, handle_file\n\nHF_TOKEN = os.environ.get(\"HF_TOKEN\")\n\nclient = Client.duplicate(\"abidlabs/whisper\", token=HF_TOKEN)\nclient.predict(handle_file(\"audio_sample.wav\"))\n\n>> \"This is a test of the whisper speech recognition model.\"\n```\n\nIf you have previously duplicated a Space, re-running `duplicate()` will _not_ create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.\n\n**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 1 hour of inactivity. You can also set the hardware using the `hardware` parameter of `duplicate()`.\n\n", "heading1": "Duplicating a Space for private use", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "If your app is running somewhere else, just provide the full URL instead, including the \"http://\" or \"https://\". 
Here's an example of making predictions to a Gradio app that is running on a share URL:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"https://bec81a83-5b5c-471e.gradio.live\")\n```\n\n", "heading1": "Connecting a general Gradio app", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as a tuple to the `auth` argument of the `Client` class:\n\n```python\nfrom gradio_client import Client\n\nClient(\n    space_name,\n    auth=(username, password)\n)\n```\n\n\n", "heading1": "Connecting to a Gradio app with auth", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client.view_api()` method. For the Whisper Space, we see the following:\n\n```bash\nClient.predict() Usage Info\n---------------------------\nNamed API endpoints: 1\n\n - predict(audio, api_name=\"/predict\") -> output\n    Parameters:\n     - [Audio] audio: filepath (required) \n    Returns:\n     - [Textbox] output: str \n```\n\nWe see that we have 1 API endpoint in this Space, and the output shows us how to use the API endpoint to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `audio` of type `str`, which is a `filepath or URL`.\n\nWe should also provide the `api_name='/predict'` argument to the `predict()` method. 
Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the \"Use via API\" link in the footer of the Gradio app, which shows us the same information, along with example usage. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \"API Recorder\" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the Python Client.\n\n", "heading1": "The \"View API\" Page", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The simplest way to make a prediction is to call the `.predict()` function with the appropriate arguments:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/en2fr\")\nclient.predict(\"Hello\", api_name='/predict')\n\n>> Bonjour\n```\n\nIf there are multiple parameters, then you should pass them as separate arguments to `.predict()`, like this:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"gradio/calculator\")\nclient.predict(4, \"add\", 5)\n\n>> 9.0\n```\n\nIt is recommended to provide keyword arguments instead of positional arguments:\n\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"gradio/calculator\")\nclient.predict(num1=4, operation=\"add\", num2=5)\n\n>> 9.0\n```\n\nThis allows you to take advantage of default arguments. 
For example, this Space includes the default value for the Slider component so you do not need to provide it when accessing it with the client.\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/image_generator\")\nclient.predict(text=\"an astronaut riding a camel\")\n```\n\nThe default value is the initial value of the corresponding Gradio component. If the component does not have an initial value, but if the corresponding argument in the predict function has a default value of `None`, then that parameter is also optional in the client. Of course, if you'd like to override it, you can include it as well:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/image_generator\")\nclient.predict(text=\"an astronaut riding a camel\", steps=25)\n```\n\nFor providing files or URLs as inputs, you should pass in the filepath or URL to the file enclosed within `gradio_client.handle_file()`. This takes care of uploading the file to the Gradio server and ensures that the file is preprocessed correctly:\n\n```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\nclient.predict(\n audio=handle_file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/s", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\nclient.predict(\n audio=handle_file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\")\n)\n\n>> \"My thought I have nobody by a beauty and will as you poured. Mr. 
Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r\u2014\"\n```\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "One should note that `.predict()` is a _blocking_ operation as it waits for the operation to complete before returning the prediction.\n\nIn many cases, you may be better off letting the job run in the background until you need the results of the prediction. You can do this by creating a `Job` instance using the `.submit()` method, and then later calling `.result()` on the job to get the result. For example:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(space=\"abidlabs/en2fr\")\njob = client.submit(\"Hello\", api_name=\"/predict\")  # This is not blocking\n\n# Do something else\n\njob.result()  # This is blocking\n\n>> Bonjour\n```\n\n", "heading1": "Running jobs asynchronously", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Alternatively, one can add one or more callbacks to perform actions after the job has completed running, like this:\n\n```python\nfrom gradio_client import Client\n\ndef print_result(x):\n    print(f\"The translated result is: {x}\")\n\nclient = Client(space=\"abidlabs/en2fr\")\n\njob = client.submit(\"Hello\", api_name=\"/predict\", result_callbacks=[print_result])\n\n# Do something else\n\n>> The translated result is: Bonjour\n\n```\n\n", "heading1": "Adding callbacks", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The `Job` object also allows you to get the status of the running job by calling the `.status()` method. 
This returns a `StatusUpdate` object with the following attributes: `code` (the status code, one of a set of defined strings representing the status. See the `utils.Status` class), `rank` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (the time that the status was generated).\n\n```py\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/calculator\")\njob = client.submit(5, \"add\", 4, api_name=\"/predict\")\njob.status()\n\n>> \n```\n\n_Note_: The `Job` class also has a `.done()` instance method which returns a boolean indicating whether the job has completed.\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The `Job` class also has a `.cancel()` instance method that cancels jobs that have been queued but not started. For example, if you run:\n\n```py\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\njob1 = client.submit(handle_file(\"audio_sample1.wav\"))\njob2 = client.submit(handle_file(\"audio_sample2.wav\"))\njob1.cancel()  # will return False, assuming the job has started\njob2.cancel()  # will return True, indicating that the job has been canceled\n```\n\nIf the first job has started processing, then it will not be canceled. If the second job\nhas not yet started, it will be successfully canceled and removed from the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Some Gradio API endpoints do not return a single value; rather, they return a series of values. 
You can get the series of values that have been returned at any time from such a generator endpoint by running `job.outputs()`:\n\n```py\nimport time\n\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/count_generator\")\njob = client.submit(3, api_name=\"/count\")\nwhile not job.done():\n    time.sleep(0.1)\njob.outputs()\n\n>> ['0', '1', '2']\n```\n\nNote that running `job.result()` on a generator endpoint only gives you the _first_ value returned by the endpoint.\n\nThe `Job` object is also iterable, which means you can use it to display the results of a generator function as they are returned from the endpoint. Here's the equivalent example using the `Job` as a generator:\n\n```py\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/count_generator\")\njob = client.submit(3, api_name=\"/count\")\n\nfor o in job:\n    print(o)\n\n>> 0\n>> 1\n>> 2\n```\n\nYou can also cancel jobs that have iterative outputs, in which case the job will finish as soon as the current iteration finishes running.\n\n```py\nfrom gradio_client import Client\nimport time\n\nclient = Client(\"abidlabs/test-yield\")\njob = client.submit(\"abcdef\")\ntime.sleep(3)\njob.cancel()  # job cancels after 2 iterations\n```\n\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Gradio demos can include [session state](https://www.gradio.app/guides/state-in-blocks), which provides a way for demos to persist information from user interactions within a page session.\n\nFor example, consider the following demo, which maintains a list of words that a user has submitted in a `gr.State` component. 
When a user submits a new word, it is added to the state, and the number of previous occurrences of that word is displayed:\n\n```python\nimport gradio as gr\n\ndef count(word, list_of_words):\n return list_of_words.count(word), list_of_words + [word]\n\nwith gr.Blocks() as demo:\n words = gr.State([])\n textbox = gr.Textbox()\n number = gr.Number()\n textbox.submit(count, inputs=[textbox, words], outputs=[number, words])\n \ndemo.launch()\n```\n\nIf you were to connect to this Gradio app using the Python Client, you would notice that the API information only shows a single input and output:\n\n```csv\nClient.predict() Usage Info\n---------------------------\nNamed API endpoints: 1\n\n - predict(word, api_name=\"/count\") -> value_31\n Parameters:\n - [Textbox] word: str (required) \n Returns:\n - [Number] value_31: float \n```\n\nThat is because the Python client handles state automatically for you -- as you make a series of requests, the returned state from one request is stored internally and automatically supplied for the subsequent request. If you'd like to reset the state, you can do that by calling `Client.reset_session()`.\n", "heading1": "Demos with Session State", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "You generally don't need to install cURL, as it comes pre-installed on many operating systems. Run:\n\n```bash\ncurl --version\n```\n\nto confirm that `curl` is installed. If it is not already installed, you can install it by visiting https://curl.se/download.html. \n\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "To query a Gradio app, you'll need its full URL. 
This is usually just the URL that the Gradio app is hosted on, for example: https://bec81a83-5b5c-471e.gradio.live\n\n\n**Hugging Face Spaces**\n\nHowever, if you are querying a Gradio app on Hugging Face Spaces, you will need to use the URL of the embedded Gradio app, not the URL of the Space webpage. For example:\n\n```bash\n\u274c Space URL: https://huggingface.co/spaces/abidlabs/en2fr\n\u2705 Gradio app URL: https://abidlabs-en2fr.hf.space/\n```\n\nYou can get the Gradio app URL by clicking the \"view API\" link at the bottom of the page. Or, you can right-click on the page and then click on \"View Frame Source\" or the equivalent in your browser to view the URL of the embedded Gradio app.\n\nWhile you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,\nand then use it to make as many requests as you'd like!\n\nNote: to query private Spaces, you will need to pass in your Hugging Face (HF) token. You can get your HF token here: https://huggingface.co/settings/tokens. In this case, you will need to include an additional header in both of your `curl` calls that we'll discuss below:\n\n```bash\n-H \"Authorization: Bearer $HF_TOKEN\"\n```\n\nNow, we are ready to make the two `curl` requests.\n\n", "heading1": "Step 0: Get the URL for your Gradio App", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "The first of the two `curl` requests is a `POST` request that submits the input payload to the Gradio app. 
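If you are assembling the request body in a script rather than typing it inline, note that it is ordinary JSON with a top-level `data` list, one element per input component. A small Python sketch, using the single-text-input example from this guide:\n\n```python\nimport json\n\n# Build the POST body for an endpoint with a single text input component.\npayload = json.dumps({\"data\": [\"Hello, my friend.\"]})\nprint(payload)  # {\"data\": [\"Hello, my friend.\"]}\n```\n\n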
\n\nThe syntax of the `POST` request is as follows:\n\n```bash\n$ curl -X POST $URL/call/$API_NAME -H \"Content-Type: application/json\" -d '{\n \"data\": $PAYLOAD\n}'\n```\n\nHere:\n\n* `$URL` is the URL of the Gradio app as obtained in Step 0\n* `$API_NAME` is the name of the API endpoint for the event that you are running. You can get the API endpoint names by clicking the \"view API\" link at the bottom of the page.\n* `$PAYLOAD` is a valid JSON data list containing the input payload, one element for each input component.\n\nWhen you make this `POST` request successfully, you will get an event id that is printed to the terminal in this format:\n\n```bash\n>> {\"event_id\": $EVENT_ID} \n```\n\nThis `EVENT_ID` will be needed in the subsequent `curl` request to fetch the results of the prediction. \n\nHere are some examples of how to make the `POST` request\n\n**Basic Example**\n\nRevisiting the example at the beginning of the page, here is how to make the `POST` request for a simple Gradio application that takes in a single input text component:\n\n```bash\n$ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Hello, my friend.\"] \n}'\n```\n\n**Multiple Input Components**\n\nThis [Gradio demo](https://huggingface.co/spaces/gradio/hello_world_3) accepts three inputs: a string corresponding to the `gr.Textbox`, a boolean value corresponding to the `gr.Checkbox`, and a numerical value corresponding to the `gr.Slider`. Here is the `POST` request:\n\n```bash\ncurl -X POST https://gradio-hello-world-3.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Hello\", true, 5]\n}'\n```\n\n**Private Spaces**\n\nAs mentioned earlier, if you are making a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. 
The request will look like this:\n\n```bash\n", "heading1": "Step 1: Make a Prediction (POST)", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "$ curl -X POST https://private-space.hf.space/call/predict -H \"Content-Type: application/json\" -H \"Authorization: Bearer $HF_TOKEN\" -d '{\n \"data\": [\"Hello, my friend.\"] \n}'\n```\n\n**Files**\n\nIf you are using `curl` to query a Gradio application that requires file inputs, the files *need* to be provided as URLs, and the URL needs to be enclosed in a dictionary in this format:\n\n```bash\n{\"path\": $URL}\n```\n\nHere is an example `POST` request:\n\n```bash\n$ curl -X POST https://gradio-image-mod.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n \"data\": [{\"path\": \"https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png\"}] \n}'\n```\n\n\n**Stateful Demos**\n\nIf your Gradio demo [persists user state](/guides/interface-state) across multiple interactions (e.g. is a chatbot), you can pass in a `session_hash` alongside the `data`. Requests with the same `session_hash` are assumed to be part of the same user session. 
Here's how that might look:\n\n```bash\n# These two requests will share a session\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Are you sentient?\"],\n \"session_hash\": \"randomsequence1234\"\n}'\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Really?\"],\n \"session_hash\": \"randomsequence1234\"\n}'\n\n# This request will be treated as a new session\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Are you sentient?\"],\n \"session_hash\": \"newsequence5678\"\n}'\n```\n\n\n\n", "heading1": "Step 1: Make a Prediction (POST)", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "Once you have received the `EVENT_ID` corresponding to your prediction, you can stream the results. Gradio stores these results in a least-recently-used cache in the Gradio app. By default, the cache can store 2,000 results (across all users and endpoints of the app). \n\nTo stream the results for your prediction, make a `GET` request with the following syntax:\n\n```bash\n$ curl -N $URL/call/$API_NAME/$EVENT_ID\n```\n\n\nTip: If you are fetching results from a private Space, include a header with your HF token like this: `-H \"Authorization: Bearer $HF_TOKEN\"` in the `GET` request.\n\nThis should produce a stream of responses in this format:\n\n```bash\nevent: ... 
\ndata: ...\n...\n```\n\nHere: `event` can be one of the following:\n* `generating`: indicating an intermediate result\n* `complete`: indicating that the prediction is complete, along with the final result\n* `error`: indicating that the prediction was not completed successfully\n* `heartbeat`: sent every 15 seconds to keep the request alive\n\nThe `data` is in the same format as the input payload: a valid JSON list containing the output result, one element for each output component.\n\nHere are some examples of what results you should expect if a request is completed successfully:\n\n**Basic Example**\n\nRevisiting the example at the beginning of the page, we would expect the result to look like this:\n\n```bash\nevent: complete\ndata: [\"Bonjour, mon ami.\"]\n```\n\n**Multiple Outputs**\n\nIf your endpoint returns multiple values, they will appear as elements of the `data` list:\n\n```bash\nevent: complete\ndata: [\"Good morning Hello. It is 5 degrees today\", -15.0]\n```\n\n**Streaming Example**\n\nIf your Gradio app [streams a sequence of values](/guides/streaming-outputs), then they will be streamed directly to your terminal, like this:\n\n", "heading1": "Step 2: GET the result", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "```bash\nevent: generating\ndata: [\"Hello, w!\"]\nevent: generating\ndata: [\"Hello, wo!\"]\nevent: generating\ndata: [\"Hello, wor!\"]\nevent: generating\ndata: [\"Hello, worl!\"]\nevent: generating\ndata: [\"Hello, world!\"]\nevent: complete\ndata: [\"Hello, world!\"]\n```\n\n**File Example**\n\nIf your Gradio app returns a file, the file will be represented as a dictionary in this format 
(including potentially some additional keys):\n\n```python\n{\n \"orig_name\": \"example.jpg\",\n \"path\": \"/path/in/server.jpg\",\n \"url\": \"https://example.com/example.jpg\",\n \"meta\": {\"_type\": \"gradio.FileData\"}\n}\n```\n\nIn your terminal, it may appear like this:\n\n```bash\nevent: complete\ndata: [{\"path\": \"/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp\", \"url\": \"https://gradio-image-mod.hf.space/c/file=/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp\", \"size\": null, \"orig_name\": \"image.webp\", \"mime_type\": null, \"is_stream\": false, \"meta\": {\"_type\": \"gradio.FileData\"}}]\n```\n\n", "heading1": "Step 2: GET the result", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "What if your Gradio application has [authentication enabled](/guides/sharing-your-app#authentication)? In that case, you'll need to make an additional `POST` request with cURL to authenticate yourself before you make any queries. 
Here are the complete steps:\n\nFirst, log in with a `POST` request supplying a valid username and password:\n\n```bash\ncurl -X POST $URL/login \\\n -d \"username=$USERNAME&password=$PASSWORD\" \\\n -c cookies.txt\n```\n\nIf the credentials are correct, you'll get `{\"success\":true}` in response and the cookies will be saved in `cookies.txt`.\n\nNext, you'll need to include these cookies when you make the original `POST` request, like this:\n\n```bash\n$ curl -X POST $URL/call/$API_NAME -b cookies.txt -H \"Content-Type: application/json\" -d '{\n \"data\": $PAYLOAD\n}'\n```\n\nFinally, you'll need to `GET` the results, again supplying the cookies from the file:\n\n```bash\ncurl -N $URL/call/$API_NAME/$EVENT_ID -b cookies.txt\n```\n", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "Install the @gradio/client package to interact with Gradio APIs using Node.js version >=18.0.0 or in browser-based projects. Use npm or any compatible package manager:\n\n```bash\nnpm i @gradio/client\n```\n\nThis command adds @gradio/client to your project dependencies, allowing you to import it in your JavaScript or TypeScript files.\n\n", "heading1": "Installation via npm", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "For quick addition to your web project, you can use the jsDelivr CDN to load the latest version of @gradio/client directly into your HTML:\n\n```html\n<script type=\"module\">\n\timport { Client } from \"https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js\";\n</script>\n```\n\nBe sure to add this to the `<head>` of your HTML. This will install the latest version but we advise hardcoding the version in production. You can find all available versions [here](https://www.jsdelivr.com/package/npm/@gradio/client). This approach is ideal for experimental or prototyping purposes, though it has some limitations. 
A complete example would look like this:\n\n```html\n<!DOCTYPE html>\n<html>\n\t<head>\n\t\t<script type=\"module\">\n\t\t\timport { Client } from \"https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js\";\n\t\t\tconst client = await Client.connect(\"abidlabs/en2fr\");\n\t\t\tconst result = await client.predict(\"/predict\", [\"Hello\"]);\n\t\t\tconsole.log(result.data);\n\t\t</script>\n\t</head>\n</html>\n```\n\n", "heading1": "Installation via CDN", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Start by instantiating a `client` instance and connecting it to a Gradio app that is running on Hugging Face Spaces or generally anywhere on the web.\n\n", "heading1": "Connecting to a running Gradio App", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\"); // a Space that translates from English to French\n```\n\nYou can also connect to private Spaces by passing in your HF token with the `token` property of the options parameter. You can get your HF token here: https://huggingface.co/settings/tokens\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/my-private-space\", { token: \"hf_...\" })\n```\n\n", "heading1": "Connecting to a Hugging Face Space", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space, and then use it to make as many requests as you'd like! 
You'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens).\n\n`Client.duplicate` is almost identical to `Client.connect`; the only difference is what happens under the hood:\n\n```js\nimport { Client, handle_file } from \"@gradio/client\";\n\nconst response = await fetch(\n\t\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\"\n);\nconst audio_file = await response.blob();\n\nconst app = await Client.duplicate(\"abidlabs/whisper\", { token: \"hf_...\" });\nconst transcription = await app.predict(\"/predict\", [handle_file(audio_file)]);\n```\n\nIf you have previously duplicated a Space, re-running `Client.duplicate` will _not_ create a new Space. Instead, the client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate` method multiple times with the same Space.\n\n**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 5 minutes of inactivity. You can also set the hardware using the `hardware` and `timeout` properties of `duplicate`'s options object like this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.duplicate(\"abidlabs/whisper\", {\n\ttoken: \"hf_...\",\n\ttimeout: 60,\n\thardware: \"a10g-small\"\n});\n```\n\n", "heading1": "Duplicating a Space for private use", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If your app is running somewhere else, just provide the full URL instead, including the \"http://\" or \"https://\". 
Here's an example of making predictions to a Gradio app that is running on a share URL:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"https://bec81a83-5b5c-471e.gradio.live\");\n```\n\n", "heading1": "Connecting a general Gradio app", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as an array to the `auth` option of the `Client` class:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nClient.connect(\n space_name,\n { auth: [username, password] }\n)\n```\n\n\n", "heading1": "Connecting to a Gradio app with auth", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client`'s `view_api` method.\n\nFor the Whisper Space, we can do this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/whisper\");\n\nconst app_info = await app.view_api();\n\nconsole.log(app_info);\n```\n\nAnd we will see the following:\n\n```json\n{\n\t\"named_endpoints\": {\n\t\t\"/predict\": {\n\t\t\t\"parameters\": [\n\t\t\t\t{\n\t\t\t\t\t\"label\": \"text\",\n\t\t\t\t\t\"component\": \"Textbox\",\n\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t}\n\t\t\t],\n\t\t\t\"returns\": [\n\t\t\t\t{\n\t\t\t\t\t\"label\": \"output\",\n\t\t\t\t\t\"component\": \"Textbox\",\n\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t},\n\t\"unnamed_endpoints\": {}\n}\n```\n\nThis shows us that we have 1 API endpoint in this space, and shows us how to use the API endpoint to make a prediction: we should call the 
`.predict()` method (which we will explore below), providing a parameter `input_audio` of type `string`, which is a url to a file.\n\nWe should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available. If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=True)`.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the \"Use via API\" link in the footer of the Gradio app, which shows us the same information, along with example usage. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \"API Recorder\" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the JS Client.\n\n\n", "heading1": "The \"View API\" Page", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The simplest way to make a prediction is simply to call the `.predict()` method with the appropriate arguments:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst result = await app.predict(\"/predict\", [\"Hello\"]);\n```\n\nIf there are multiple parameters, then you should pass them as an array to `.predict()`, like this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/calculator\");\nconst result = await 
app.predict(\"/predict\", [4, \"add\", 5]);\n```\n\nFor certain inputs, such as images, you should pass in a `Buffer`, `Blob` or `File` depending on what is most convenient. In node, this would be a `Buffer` or `Blob`; in a browser environment, this would be a `Blob` or `File`.\n\n```js\nimport { Client, handle_file } from \"@gradio/client\";\n\nconst response = await fetch(\n\t\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\"\n);\nconst audio_file = await response.blob();\n\nconst app = await Client.connect(\"abidlabs/whisper\");\nconst result = await app.predict(\"/predict\", [handle_file(audio_file)]);\n```\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If the API you are working with can return results over time, or you wish to access information about the status of a job, you can use the iterable interface for more flexibility. 
This is especially useful for iterative endpoints or generator endpoints that will produce a series of values over time as discrete responses.\n\n```js\nimport { Client } from \"@gradio/client\";\n\nfunction log_result(payload) {\n\tconst {\n\t\tdata: [translation]\n\t} = payload;\n\n\tconsole.log(`The translated result is: ${translation}`);\n}\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst job = app.submit(\"/predict\", [\"Hello\"]);\n\nfor await (const message of job) {\n\tlog_result(message);\n}\n```\n\n", "heading1": "Using events", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The event interface also allows you to get the status of the running job by instantiating the client with the `events` option, passing `status` and `data` as an array:\n\n\n```ts\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\", {\n\tevents: [\"status\", \"data\"]\n});\n```\n\nThis ensures that status messages are also reported to the client.\n\n`status`es are returned as an object with the following attributes: `status` (a human readable status of the current job, `\"pending\" | \"generating\" | \"complete\" | \"error\"`), `code` (the detailed gradio code for the job), `position` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (a `Date` object detailing the time that the status was generated).\n\n```js\nimport { Client } from \"@gradio/client\";\n\nfunction log_status(status) {\n\tconsole.log(\n\t\t`The current status for this job is: ${JSON.stringify(status, null, 2)}.`\n\t);\n}\n\nconst app = await Client.connect(\"abidlabs/en2fr\", {\n\tevents: [\"status\", \"data\"]\n});\nconst job = app.submit(\"/predict\", 
[\"Hello\"]);\n\nfor await (const message of job) {\n\tif (message.type === \"status\") {\n\t\tlog_status(message);\n\t}\n}\n```\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The job instance also has a `.cancel()` method that cancels jobs that have been queued but not started. For example, if you run:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst job_one = app.submit(\"/predict\", [\"Hello\"]);\nconst job_two = app.submit(\"/predict\", [\"Friends\"]);\n\njob_one.cancel();\njob_two.cancel();\n```\n\nIf the first job has started processing, then it will not be canceled but the client will no longer listen for updates (throwing away the job). If the second job has not yet started, it will be successfully canceled and removed from the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Some Gradio API endpoints do not return a single value, rather they return a series of values. 
You can listen for these values in real time using the iterable interface:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/count_generator\");\nconst job = app.submit(0, [9]);\n\nfor await (const message of job) {\n\tconsole.log(message.data);\n}\n```\n\nThis will log out the values as they are generated by the endpoint.\n\nYou can also cancel jobs that have iterative outputs, in which case the job will finish immediately.\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/count_generator\");\nconst job = app.submit(0, [9]);\n\nfor await (const message of job) {\n\tconsole.log(message.data);\n}\n\nsetTimeout(() => {\n\tjob.cancel();\n}, 3000);\n```\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "What are agents?\n\nA [LangChain agent](https://docs.langchain.com/docs/components/agents/agent) is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.\n\nWhat is Gradio?\n\n[Gradio](https://github.com/gradio-app/gradio) is the de facto standard framework for building Machine Learning Web Applications and sharing them with the world - all with just python! 
\ud83d\udc0d\n\n", "heading1": "Some background", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "To get started with `gradio_tools`, all you need to do is import and initialize your tools and pass them to the langchain agent!\n\nIn the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the\n`StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and\nthe `TextToVideoTool` to create a video from a prompt.\n\nWe then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask\nit to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.\n\n```python\nimport os\n\nif not os.getenv(\"OPENAI_API_KEY\"):\n raise ValueError(\"OPENAI_API_KEY must be set\")\n\nfrom langchain.agents import initialize_agent\nfrom langchain.llms import OpenAI\nfrom gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,\n TextToVideoTool)\n\nfrom langchain.memory import ConversationBufferMemory\n\nllm = OpenAI(temperature=0)\nmemory = ConversationBufferMemory(memory_key=\"chat_history\")\ntools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,\n StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]\n\n\nagent = initialize_agent(tools, llm, memory=memory, agent=\"conversational-react-description\", verbose=True)\noutput = agent.run(input=(\"Please create a photo of a dog riding a skateboard \"\n \"but improve my prompt prior to using an image generator.\"\n \"Please caption the generated image and create a video for it using the improved prompt.\"))\n```\n\nYou'll note that we are using some pre-built tools that come with 
`gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-tools#gradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.\nIf ", "heading1": "gradio_tools - An end-to-end example", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "you would like to use a tool that's not currently in `gradio_tools`, it is very easy to add your own. That's what the next section will cover.\n\n", "heading1": "gradio_tools - An end-to-end example", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "The core abstraction is the `GradioTool`, which lets you define a new tool for your LLM as long as you implement a standard interface:\n\n```python\nclass GradioTool(BaseTool):\n\n def __init__(self, name: str, description: str, src: str) -> None:\n ...\n\n @abstractmethod\n def create_job(self, query: str) -> Job:\n pass\n\n @abstractmethod\n def postprocess(self, output: Tuple[Any] | Any) -> str:\n pass\n```\n\nThe requirements are:\n\n1. The name for your tool\n2. The description for your tool. This is crucial! Agents decide which tool to use based on their description. Be precise and be sure to include an example of what the input and the output of the tool should look like.\n3. The url or space id, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tool` will create a [gradio client](https://github.com/gradio-app/gradio/blob/main/client/python/README.md) instance to query the upstream application via API. 
Be sure to click the link and learn more about the gradio client library if you are not familiar with it.\n4. create_job - Given a string, this method should parse that string and return a job from the client. Most times, this is as simple as passing the string to the `submit` function of the client. More info on creating jobs [here](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)\n5. postprocess - Given the result of the job, convert it to a string the LLM can display to the user.\n6. _Optional_ - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return gr.Textbox() but\n if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be\n automatically imported by the `GradioTool` parent", "heading1": "gradio_tools - creating your own tool", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "class and passed to the `_block_input` and `_block_output` methods.\n\nAnd that's it!\n\nOnce you have created your tool, open a pull request to the `gradio_tools` repo! 
We welcome all contributions.\n\n", "heading1": "gradio_tools - creating your own tool", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "Here is the code for the StableDiffusion tool as an example:\n\n```python\nfrom gradio_tools import GradioTool\nimport os\n\nclass StableDiffusionTool(GradioTool):\n    \"\"\"Tool for calling stable diffusion from llm\"\"\"\n\n    def __init__(\n        self,\n        name=\"StableDiffusion\",\n        description=(\n            \"An image generator. Use this to generate images based on \"\n            \"text input. Input should be a description of what the image should \"\n            \"look like. The output will be a path to an image file.\"\n        ),\n        src=\"gradio-client-demos/stable-diffusion\",\n        token=None,\n    ) -> None:\n        super().__init__(name, description, src, token)\n\n    def create_job(self, query: str) -> Job:\n        return self.client.submit(query, \"\", 9, fn_index=1)\n\n    def postprocess(self, output: str) -> str:\n        return [os.path.join(output, i) for i in os.listdir(output) if not i.endswith(\"json\")][0]\n\n    def _block_input(self, gr) -> \"gr.components.Component\":\n        return gr.Textbox()\n\n    def _block_output(self, gr) -> \"gr.components.Component\":\n        return gr.Image()\n```\n\nSome notes on this implementation:\n\n1. All instances of `GradioTool` have an attribute called `client` that is a pointer to the underlying [gradio client](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python). That is what you should use\n in the `create_job` method.\n2. `create_job` just passes the query string to the `submit` function of the client with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version.\n3. 
The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.\n\n", "heading1": "Example tool - Stable Diffusion", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "You now know how to extend the abilities of your LLM with the 1000s of gradio spaces running in the wild!\nAgain, we welcome any contributions to the [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library.\nWe're excited to see the tools you all build!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "Take a look at the demo below.\n\n$code_hello_blocks\n$demo_hello_blocks\n\n- First, note the `with gr.Blocks() as demo:` clause. The Blocks app code will be contained within this clause.\n- Next come the Components. These are the same Components used in `Interface`. However, instead of being passed to some constructor, Components are automatically added to the Blocks as they are created within the `with` clause.\n- Finally, the `click()` event listener. Event listeners define the data flow within the app. In the example above, the listener ties the two Textboxes together. The Textbox `name` acts as the input and Textbox `output` acts as the output to the `greet` method. This dataflow is triggered when the Button `greet_btn` is clicked. 
Like an Interface, an event listener can take multiple inputs or outputs.\n\nYou can also attach event listeners using decorators - skip the `fn` argument and assign `inputs` and `outputs` directly:\n\n$code_hello_blocks_decorator\n\n", "heading1": "Blocks Structure", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "In the example above, you'll notice that you are able to edit Textbox `name`, but not Textbox `output`. This is because any Component that acts as an input to an event listener is made interactive. However, since Textbox `output` acts only as an output, Gradio determines that it should not be made interactive. You can override the default behavior and directly configure the interactivity of a Component with the boolean `interactive` keyword argument, e.g. `gr.Textbox(interactive=True)`.\n\n```python\noutput = gr.Textbox(label=\"Output\", interactive=True)\n```\n\n_Note_: What happens if a Gradio component is neither an input nor an output? If a component is constructed with a default value, then it is presumed to be displaying content and is rendered non-interactive. Otherwise, it is rendered interactive. Again, this behavior can be overridden by specifying a value for the `interactive` argument.\n\n", "heading1": "Event Listeners and Interactivity", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "Take a look at the demo below:\n\n$code_blocks_hello\n$demo_blocks_hello\n\nInstead of being triggered by a click, the `welcome` function is triggered by typing in the Textbox `inp`. This is due to the `change()` event listener. Different Components support different event listeners. For example, the `Video` Component supports a `play()` event listener, triggered when a user presses play. 
See the [Docs](https://gradio.app/docs/components) for the event listeners for each Component.\n\n", "heading1": "Types of Event Listeners", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "A Blocks app is not limited to a single data flow the way Interfaces are. Take a look at the demo below:\n\n$code_reversible_flow\n$demo_reversible_flow\n\nNote that `num1` can act as input to `num2`, and also vice-versa! As your apps get more complex, you will have many data flows connecting various Components.\n\nHere's an example of a \"multi-step\" demo, where the output of one model (a speech-to-text model) gets fed into the next model (a sentiment classifier).\n\n$code_blocks_speech_text_sentiment\n$demo_blocks_speech_text_sentiment\n\n", "heading1": "Multiple Data Flows", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "The event listeners you've seen so far have a single input component. If you'd like to have multiple input components pass data to the function, you have two options on how the function can accept input component values:\n\n1. as a list of arguments, or\n2. as a single dictionary of values, keyed by the component\n\nLet's see an example of each:\n$code_calculator_list_and_dict\n\nBoth `add()` and `sub()` take `a` and `b` as inputs. However, the syntax is different between these listeners.\n\n1. To the `add_btn` listener, we pass the inputs as a list. The function `add()` takes each of these inputs as arguments. The value of `a` maps to the argument `num1`, and the value of `b` maps to the argument `num2`.\n2. To the `sub_btn` listener, we pass the inputs as a set (note the curly brackets!). 
The function `sub()` takes a single dictionary argument `data`, where the keys are the input components, and the values are the values of those components.\n\nIt is a matter of preference which syntax you prefer! For functions with many input components, option 2 may be easier to manage.\n\n$demo_calculator_list_and_dict\n\n", "heading1": "Function Input List vs Dict", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "Similarly, you may return values for multiple output components either as:\n\n1. a list of values, or\n2. a dictionary keyed by the component\n\nLet's first see an example of (1), where we set the values of two output components by returning two values:\n\n```python\nwith gr.Blocks() as demo:\n    food_box = gr.Number(value=10, label=\"Food Count\")\n    status_box = gr.Textbox()\n\n    def eat(food):\n        if food > 0:\n            return food - 1, \"full\"\n        else:\n            return 0, \"hungry\"\n\n    gr.Button(\"Eat\").click(\n        fn=eat,\n        inputs=food_box,\n        outputs=[food_box, status_box]\n    )\n```\n\nAbove, each return statement returns two values corresponding to `food_box` and `status_box`, respectively.\n\n**Note:** if your event listener has a single output component, you should **not** return it as a single-item list. This will not work, since Gradio does not know whether to interpret that outer list as part of your return value. You should instead just return that value directly.\n\nNow, let's see option (2). Instead of returning a list of values corresponding to each output component in order, you can also return a dictionary, with the key corresponding to the output component and the value as the new value. 
This also allows you to skip updating some output components.\n\n```python\nwith gr.Blocks() as demo:\n    food_box = gr.Number(value=10, label=\"Food Count\")\n    status_box = gr.Textbox()\n\n    def eat(food):\n        if food > 0:\n            return {food_box: food - 1, status_box: \"full\"}\n        else:\n            return {status_box: \"hungry\"}\n\n    gr.Button(\"Eat\").click(\n        fn=eat,\n        inputs=food_box,\n        outputs=[food_box, status_box]\n    )\n```\n\nNotice how when there is no food, we only update the `status_box` element. We skipped updating the `food_box` component.\n\nDictionary returns are helpful when an event listener affects many components on return, or conditionally affects some outputs and not others.\n\nKeep in mind that with dictionary returns,", "heading1": "Function Return List vs Dict", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "d_box` component.\n\nDictionary returns are helpful when an event listener affects many components on return, or conditionally affects some outputs and not others.\n\nKeep in mind that with dictionary returns, we still need to specify the possible outputs in the event listener.\n\n", "heading1": "Function Return List vs Dict", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "The return value of an event listener function is usually the updated value of the corresponding output Component. Sometimes we want to update the configuration of the Component as well, such as the visibility. In this case, we return a new Component, setting the properties we want to change.\n\n$code_blocks_essay_simple\n$demo_blocks_essay_simple\n\nSee how we can configure the Textbox itself through a new `gr.Textbox()` method. The `value=` argument can still be used to update the value along with Component configuration. 
Any arguments we do not set will preserve their previous values.\n\n", "heading1": "Updating Component Configurations", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "In some cases, you may want to leave a component's value unchanged. Gradio includes a special function, `gr.skip()`, which can be returned from your function. Returning this function will keep the output component's (or components') values as is. Let us illustrate with an example:\n\n$code_skip\n$demo_skip\n\nNote the difference between returning `None` (which generally resets a component's value to an empty state) versus returning `gr.skip()`, which leaves the component value unchanged.\n\nTip: if you have multiple output components, and you want to leave all of their values unchanged, you can just return a single `gr.skip()` instead of returning a tuple of skips, one for each element.\n\n", "heading1": "Not Changing a Component's Value", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "You can also run events consecutively by using the `then` method of an event listener. This will run an event after the previous event has finished running. This is useful for running events that update components in multiple steps.\n\nFor example, in the chatbot example below, we first update the chatbot with the user message immediately, and then update the chatbot with the computer response after a simulated delay.\n\n$code_chatbot_consecutive\n$demo_chatbot_consecutive\n\nThe `.then()` method of an event listener executes the subsequent event regardless of whether the previous event raised any errors. If you'd like to only run subsequent events if the previous event executed successfully, use the `.success()` method, which takes the same arguments as `.then()`. 
Conversely, if you'd like to only run subsequent events if the previous event failed (i.e., raised an error), use the `.failure()` method. This is particularly useful for error handling workflows, such as displaying error messages or restoring previous states when an operation fails.\n\n", "heading1": "Running Events Consecutively", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "Oftentimes, you may want to bind multiple triggers to the same function. For example, you may want to allow a user to click a submit button, or press enter to submit a form. You can do this using the `gr.on` method and passing a list of triggers to the `triggers` argument.\n\n$code_on_listener_basic\n$demo_on_listener_basic\n\nYou can use decorator syntax as well:\n\n$code_on_listener_decorator\n\nYou can use `gr.on` to create \"live\" events by binding to the `change` event of components that implement it. If you do not specify any triggers, the function will automatically bind to the `change` events of all input components that have one (for example `gr.Textbox` has a `change` event whereas `gr.Button` does not).\n\n$code_on_listener_live\n$demo_on_listener_live\n\nYou can follow `gr.on` with `.then`, just like any regular event listener. 
This handy method should save you from having to write a lot of repetitive code!\n\n", "heading1": "Binding Multiple Triggers to a Function", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "If you want to set a Component's value to always be a function of the value of other Components, you can use the following shorthand:\n\n```python\nwith gr.Blocks() as demo:\n    num1 = gr.Number()\n    num2 = gr.Number()\n    product = gr.Number(lambda a, b: a * b, inputs=[num1, num2])\n```\n\nThis is functionally the same as:\n```python\nwith gr.Blocks() as demo:\n    num1 = gr.Number()\n    num2 = gr.Number()\n    product = gr.Number()\n\n    gr.on(\n        [num1.change, num2.change, demo.load], \n        lambda a, b: a * b, \n        inputs=[num1, num2], \n        outputs=product\n    )\n```\n", "heading1": "Binding a Component Value Directly to a Function of Other Components", "source_page_url": "https://gradio.app/guides/blocks-and-event-listeners", "source_page_title": "Building With Blocks - Blocks And Event Listeners Guide"}, {"text": "Global state in Gradio apps is very simple: any variable created outside of a function is shared globally between all users.\n\nThis makes managing global state very simple and without the need for external services. For example, in this application, the `visitor_count` variable is shared between all users.\n\n```py\nimport gradio as gr\n\n# Shared between all users\nvisitor_count = 0\n\ndef increment_counter():\n    global visitor_count\n    visitor_count += 1\n    return visitor_count\n\nwith gr.Blocks() as demo: \n    number = gr.Textbox(label=\"Total Visitors\", value=\"Counting...\")\n    demo.load(increment_counter, inputs=None, outputs=number)\n\ndemo.launch()\n```\n\nThis means that any time you do _not_ want to share a value between users, you should declare it _within_ a function. But what if you need to share values between function calls, e.g. a chat history? 
In that case, you should use one of the subsequent approaches to manage state.\n\n", "heading1": "Global State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "Gradio supports session state, where data persists across multiple submits within a page session. To reiterate, session data is _not_ shared between different users of your model, and does _not_ persist if a user refreshes the page to reload the Gradio app. To store data in a session state, you need to do three things:\n\n1. Create a `gr.State()` object. If there is a default value to this stateful object, pass that into the constructor. Note that `gr.State` objects must be [deepcopy-able](https://docs.python.org/3/library/copy.html), otherwise you will need to use a different approach as described below.\n2. In the event listener, put the `State` object as an input and output as needed.\n3. In the event listener function, add the variable to the input parameters and the return value.\n\nLet's take a look at a simple example. We have a simple checkout app below where you add items to a cart. You can also see the size of the cart.\n\n$code_simple_state\n\nNotice how we do this with state:\n\n1. We store the cart items in a `gr.State()` object, initialized here to be an empty list.\n2. When adding items to the cart, the event listener uses the cart as both input and output - it returns the updated cart with all the items inside. \n3. We can attach a `.change` listener to cart, that uses the state variable as input as well.\n\nYou can think of `gr.State` as an invisible Gradio component that can store any kind of value. Here, `cart` is not visible in the frontend but is used for calculations.\n\nThe `.change` listener for a state variable triggers after any event listener changes the value of a state variable. 
If the state variable holds a sequence (like a `list`, `set`, or `dict`), a change is triggered if any of the elements inside change. If it holds an object or primitive, a change is triggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure that the class includes a sensible `__", "heading1": "Session State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "riggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure that the class includes a sensible `__hash__` implementation.\n\nThe value of a session State variable is cleared when the user refreshes the page. The value is stored in the app backend for 60 minutes after the user closes the tab (this can be configured by the `delete_cache` parameter in `gr.Blocks`).\n\nLearn more about `State` in the [docs](https://gradio.app/docs/gradio/state).\n\n**What about objects that cannot be deepcopied?**\n\nAs mentioned earlier, the value stored in `gr.State` must be [deepcopy-able](https://docs.python.org/3/library/copy.html). If you are working with a complex object that cannot be deepcopied, you can take a different approach to manually read the user's `session_hash` and store a global `dictionary` with instances of your object for each user. 
Here's how you would do that:\n\n```py\nimport gradio as gr\n\nclass NonDeepCopyable:\n    def __init__(self):\n        from threading import Lock\n        self.counter = 0\n        self.lock = Lock()  # Lock objects cannot be deepcopied\n\n    def increment(self):\n        with self.lock:\n            self.counter += 1\n            return self.counter\n\n# Global dictionary to store user-specific instances\ninstances = {}\n\ndef initialize_instance(request: gr.Request):\n    instances[request.session_hash] = NonDeepCopyable()\n    return \"Session initialized!\"\n\ndef cleanup_instance(request: gr.Request):\n    if request.session_hash in instances:\n        del instances[request.session_hash]\n\ndef increment_counter(request: gr.Request):\n    if request.session_hash in instances:\n        instance = instances[request.session_hash]\n        return instance.increment()\n    return \"Error: Session not initialized\"\n\nwith gr.Blocks() as demo:\n    output = gr.Textbox(label=\"Status\")\n    counter = gr.Number(label=\"Counter Value\")\n    increment_btn = gr.Button(\"Increment Co", "heading1": "Session State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "    return \"Error: Session not initialized\"\n\nwith gr.Blocks() as demo:\n    output = gr.Textbox(label=\"Status\")\n    counter = gr.Number(label=\"Counter Value\")\n    increment_btn = gr.Button(\"Increment Counter\")\n    increment_btn.click(increment_counter, inputs=None, outputs=counter)\n\n    # Initialize instance when page loads\n    demo.load(initialize_instance, inputs=None, outputs=output)\n    # Clean up instance when page is closed/refreshed\n    demo.unload(cleanup_instance)\n\ndemo.launch()\n```\n\n", "heading1": "Session State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "Gradio also supports browser state, where data persists in the browser's localStorage even after the page is refreshed or closed. 
This is useful for storing user preferences, settings, API keys, or other data that should persist across sessions. To use local state:\n\n1. Create a `gr.BrowserState` object. You can optionally provide an initial default value and a key to identify the data in the browser's localStorage.\n2. Use it like a regular `gr.State` component in event listeners as inputs and outputs.\n\nHere's a simple example that saves a user's username and password across sessions:\n\n$code_browserstate\n\nNote: The value stored in `gr.BrowserState` does not persist if the Gradio app is restarted. To persist it, you can hardcode specific values of `storage_key` and `secret` in the `gr.BrowserState` component and restart the Gradio app on the same server name and server port. However, this should only be done if you are running trusted Gradio apps, as in principle, this can allow one Gradio app to access localStorage data that was created by a different Gradio app.\n", "heading1": "Browser State", "source_page_url": "https://gradio.app/guides/state-in-blocks", "source_page_title": "Building With Blocks - State In Blocks Guide"}, {"text": "Elements within a `with gr.Row` clause will all be displayed horizontally. For example, to display two Buttons side by side:\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row():\n        btn1 = gr.Button(\"Button 1\")\n        btn2 = gr.Button(\"Button 2\")\n```\n\nYou can set every element in a Row to have the same height. Configure this with the `equal_height` argument.\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row(equal_height=True):\n        textbox = gr.Textbox()\n        btn2 = gr.Button(\"Button 2\")\n```\n\nThe widths of elements in a Row can be controlled via a combination of `scale` and `min_width` arguments that are present in every Component.\n\n- `scale` is an integer that defines how an element will take up space in a Row. If scale is set to `0`, the element will not expand to take up space. If scale is set to `1` or greater, the element will expand. 
Multiple elements in a row will expand proportionally to their scale. Below, `btn2` will expand twice as much as `btn1`, while `btn0` will not expand at all:\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row():\n        btn0 = gr.Button(\"Button 0\", scale=0)\n        btn1 = gr.Button(\"Button 1\", scale=1)\n        btn2 = gr.Button(\"Button 2\", scale=2)\n```\n\n- `min_width` will set the minimum width the element will take. The Row will wrap if there isn't sufficient space to satisfy all `min_width` values.\n\nLearn more about Rows in the [docs](https://gradio.app/docs/row).\n\n", "heading1": "Rows", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:\n\n$code_rows_and_columns\n$demo_rows_and_columns\n\nSee how the first column has two Textboxes arranged vertically. The second column has an Image and Button arranged vertically. Notice how the relative widths of the two columns are set by the `scale` parameter. The column with twice the `scale` value takes up twice the width.\n\nLearn more about Columns in the [docs](https://gradio.app/docs/column).\n\nFill Browser Height / Width\n\nTo make an app take the full width of the browser by removing the side padding, use `gr.Blocks(fill_width=True)`. 
\n\nTo make top level Components expand to take the full height of the browser, use `fill_height` and apply scale to the expanding Components.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks(fill_height=True) as demo:\n    gr.Chatbot(scale=1)\n    gr.Textbox(scale=0)\n```\n\n", "heading1": "Columns and Nesting", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "Some components support setting height and width. These parameters accept either a number (interpreted as pixels) or a string. Using a string allows the direct application of any CSS unit to the encapsulating Block element.\n\nBelow is an example illustrating the use of viewport width (vw):\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    im = gr.ImageEditor(width=\"50vw\")\n\ndemo.launch()\n```\n\n", "heading1": "Dimensions", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "You can also create Tabs using the `with gr.Tab('tab_name'):` clause. Any component created inside of a `with gr.Tab('tab_name'):` context appears in that tab. Consecutive Tab clauses are grouped together so that a single tab can be selected at one time, and only the components within that Tab's context are shown.\n\nFor example:\n\n$code_blocks_flipper\n$demo_blocks_flipper\n\nAlso note the `gr.Accordion('label')` in this example. The Accordion is a layout that can be toggled open or closed. Like `Tabs`, it is a layout element that can selectively hide or show content. 
Any components that are defined inside of a `with gr.Accordion('label'):` will be hidden or shown when the accordion's toggle icon is clicked.\n\nLearn more about [Tabs](https://gradio.app/docs/tab) and [Accordions](https://gradio.app/docs/accordion) in the docs.\n\n", "heading1": "Tabs and Accordions", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "The sidebar is a collapsible panel that renders child components on the left side of the screen and can be expanded or collapsed.\n\nFor example:\n\n$code_blocks_sidebar\n\nLearn more about [Sidebar](https://gradio.app/docs/gradio/sidebar) in the docs.\n\n\n", "heading1": "Sidebar", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "In order to provide a guided set of ordered steps (a controlled workflow), you can use the `Walkthrough` component with accompanying `Step` components.\n\nThe `Walkthrough` component has a visual style and user experience tailored for this use case.\n\nAuthoring this component is very similar to `Tab`, except it is the app developer's responsibility to progress through each step, by setting the appropriate ID for the parent `Walkthrough` which should correspond to an ID provided to an individual `Step`. \n\n$demo_walkthrough\n\nLearn more about [Walkthrough](https://gradio.app/docs/gradio/walkthrough) in the docs.\n\n\n", "heading1": "Multi-step walkthroughs", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "Both Components and Layout elements have a `visible` argument that can be set initially and also updated later. 
Setting `gr.Column(visible=...)` on a Column can be used to show or hide a set of Components.\n\n$code_blocks_form\n$demo_blocks_form\n\n", "heading1": "Visibility", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "In some cases, you might want to define components before you actually render them in your UI. For instance, you might want to show an examples section using `gr.Examples` above the corresponding `gr.Textbox` input. Since `gr.Examples` requires the input component object as a parameter, you will need to first define the input component, but then render it later, after you have defined the `gr.Examples` object.\n\nThe solution to this is to define the `gr.Textbox` outside of the `gr.Blocks()` scope and use the component's `.render()` method wherever you'd like it placed in the UI.\n\nHere's a full code example:\n\n```python\ninput_textbox = gr.Textbox()\n\nwith gr.Blocks() as demo:\n    gr.Examples([\"hello\", \"bonjour\", \"merhaba\"], input_textbox)\n    input_textbox.render()\n```\n\nSimilarly, if you have already defined a component in a Gradio app, but wish to unrender it so that you can render it in a different part of your application, then you can call the `.unrender()` method. 
In the following example, the `Textbox` will appear in the third column:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            gr.Markdown(\"Row 1\")\n            textbox = gr.Textbox()\n        with gr.Column():\n            gr.Markdown(\"Row 2\")\n            textbox.unrender()\n        with gr.Column():\n            gr.Markdown(\"Row 3\")\n            textbox.render()\n\ndemo.launch()\n```\n\n", "heading1": "Defining and Rendering Components Separately", "source_page_url": "https://gradio.app/guides/controlling-layout", "source_page_title": "Building With Blocks - Controlling Layout Guide"}, {"text": "Did you know that apart from being a full-stack machine learning demo, a Gradio Blocks app is also a regular-old python function!?\n\nThis means that if you have a gradio Blocks (or Interface) app called `demo`, you can use `demo` like you would any python function.\n\nSo doing something like `output = demo(\"Hello\", \"friend\")` will run the first event defined in `demo` on the inputs \"Hello\" and \"friend\" and store it\nin the variable `output`.\n\nIf I put you to sleep \ud83e\udd71, please bear with me! By using apps like functions, you can seamlessly compose Gradio apps.\nThe following section will show how.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "Let's say we have the following demo that translates english text to german text.\n\n$code_english_translator\n\nI already went ahead and hosted it in Hugging Face spaces at [gradio/english_translator](https://huggingface.co/spaces/gradio/english_translator).\n\nYou can see the demo below as well:\n\n$demo_english_translator\n\nNow, let's say you have an app that generates english text, but you wanted to additionally generate german text.\n\nYou could either:\n\n1. Copy the source code of my english-to-german translation and paste it in your app.\n\n2. 
Load my english-to-german translation in your app and treat it like a normal python function.\n\nOption 1 technically always works, but it often introduces unwanted complexity.\n\nOption 2 lets you borrow the functionality you want without tightly coupling our apps.\n\nAll you have to do is call the `Blocks.load` class method in your source file.\nAfter that, you can use my translation app like a regular python function!\n\nThe following code snippet and demo show how to use `Blocks.load`.\n\nNote that the variable `english_translator` is my english to german app, but it's used in `generate_text` like a regular function.\n\n$code_generate_english_german\n\n$demo_generate_english_german\n\n", "heading1": "Treating Blocks like functions", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "If the app you are loading defines more than one function, you can specify which function to use\nwith the `fn_index` and `api_name` parameters.\n\nIn the code for our english to german demo, you'll see the following line:\n\n```python\ntranslate_btn.click(translate, inputs=english, outputs=german, api_name=\"translate-to-german\")\n```\n\nThe `api_name` gives this function a unique name in our app. 
You can use this name to tell gradio which\nfunction in the upstream space you want to use:\n\n```python\nenglish_generator(text, api_name=\"translate-to-german\")[0][\"generated_text\"]\n```\n\nYou can also use the `fn_index` parameter.\nImagine my app also defined an english to spanish translation function.\nIn order to use it in our text generation app, we would use the following code:\n\n```python\nenglish_generator(text, fn_index=1)[0][\"generated_text\"]\n```\n\nFunctions in gradio spaces are zero-indexed, so since the spanish translator would be the second function in my space,\nyou would use index 1.\n\n", "heading1": "How to control which function in the app to use", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "We showed how treating a Blocks app like a regular python function helps you compose functionality across different apps.\nAny Blocks app can be treated like a function, but a powerful pattern is to `load` an app hosted on\n[Hugging Face Spaces](https://huggingface.co/spaces) prior to treating it like a function in your own app.\nYou can also load models hosted on the [Hugging Face Model Hub](https://huggingface.co/models) - see the [Using Hugging Face Integrations](/using_hugging_face_integrations) guide for an example.\n\nHappy building! \u2692\ufe0f\n", "heading1": "Parting Remarks", "source_page_url": "https://gradio.app/guides/using-blocks-like-functions", "source_page_title": "Building With Blocks - Using Blocks Like Functions Guide"}, {"text": "The `gr.HTML` component can also be used to create custom input components by triggering events. You will provide `js_on_load`, javascript code that runs when the component loads. 
The code has access to the `trigger` function to trigger events that Gradio can listen to, and the object `props` which has access to all the props of the component, including `value`.\n\n$code_star_rating_events\n$demo_star_rating_events\n\nTake a look at the `js_on_load` code above. We add click event listeners to each star image to update the value via `props.value` when a star is clicked. This also re-renders the template to show the updated value. We also add a click event listener to the submit button that triggers the `submit` event. In our app, we listen to this trigger to run a function that outputs the `value` of the star rating.\n\nYou can update any other props of the component via `props.prop_name`, and trigger events via `trigger('event_name')`. The `trigger` function can also send event data, e.g.\n\n```js\ntrigger('event_name', { key: value, count: 123 });\n```\n\nThis event data will be accessible to the Python event listener functions via gr.EventData.\n\n```python\ndef handle_event(evt: gr.EventData):\n    print(evt.key)\n    print(evt.count)\n\nstar_rating.event(fn=handle_event, inputs=[], outputs=[])\n```\n\nKeep in mind that event listeners attached in `js_on_load` are only attached once when the component is first rendered. If your component creates new elements dynamically that need event listeners, attach the event listener to a parent element that exists when the component loads, and check for the target. 
For example:\n\n```js\nelement.addEventListener('click', (e) => {\n    if (e.target && e.target.matches('.child-element')) {\n        props.value = e.target.dataset.value;\n    }\n});\n```\n\n", "heading1": "Triggering Events and Custom Input Components", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "If you are reusing the same HTML component in multiple places, you can create a custom component class by subclassing `gr.HTML` and setting default values for the templates and other arguments. Here's an example of creating a reusable StarRating component.\n\n$code_star_rating_component\n$demo_star_rating_component\n\nNote: Gradio requires all components to accept certain arguments, such as `render`. You do not need\nto handle these arguments, but you do need to accept them in your component constructor and pass\nthem to the parent `gr.HTML` class. Otherwise, your component may not behave correctly. The easiest\nway is to add `**kwargs` to your `__init__` method and pass it to `super().__init__()`, just like in the code example above.\n\nWe've created several reusable custom HTML components as examples you can reference in [this directory](https://github.com/gradio-app/gradio/tree/main/gradio/components/custom_html_components).\n\nAPI / MCP support\n\nTo make your custom HTML component work with Gradio's built-in support for API and MCP (Model Context Protocol) usage, you need to define how its data should be serialized. There are two ways to do this:\n\n**Option 1: Define an `api_info()` method**\n\nAdd an `api_info()` method that returns a JSON schema dictionary describing your component's data format. 
This is what we do in the StarRating class above.\n\n**Option 2: Define a Pydantic data model**\n\nFor more complex data structures, you can define a Pydantic model that inherits from `GradioModel` or `GradioRootModel`:\n\n```python\nfrom typing import List\n\nimport gradio as gr\nfrom gradio.data_classes import GradioModel, GradioRootModel\n\nclass MyComponentData(GradioModel):\n    items: List[str]\n    count: int\n\nclass MyComponent(gr.HTML):\n    data_model = MyComponentData\n```\n\nUse `GradioModel` when your data is a dictionary with named fields, or `GradioRootModel` when your data is a simple type (string, list, etc.) that doesn't need to be wrapped in a dictionary. By defining a `data_model`, your component automaticall", "heading1": "Component Classes", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "ry with named fields, or `GradioRootModel` when your data is a simple type (string, list, etc.) that doesn't need to be wrapped in a dictionary. By defining a `data_model`, your component automatically implements API methods.\n\n", "heading1": "Component Classes", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "Keep in mind that using `gr.HTML` to create custom components involves injecting raw HTML and JavaScript into your Gradio app. Be cautious about passing untrusted user input into `html_template` and `js_on_load`, as this could lead to cross-site scripting (XSS) vulnerabilities. \n\nYou should also expect that any Python event listeners that take your `gr.HTML` component as input could have any arbitrary value passed to them, not just the values you expect the frontend to be able to set for `value`. 
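For instance, a hypothetical server-side listener for a star rating might re-validate the incoming value before using it (the 1-5 integer range is an assumption for illustration):

```python
def handle_rating(rating):
    # Never trust the frontend: a client can send any payload,
    # so re-check the type and range on the server.
    if not isinstance(rating, int) or not 1 <= rating <= 5:
        raise ValueError('invalid rating')
    return f'You rated {rating} stars'
```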
Sanitize and validate user input appropriately in public applications.\n\n", "heading1": "Security Considerations", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "Check out some examples of custom components that you can build in [this directory](https://github.com/gradio-app/gradio/tree/main/gradio/components/custom_html_components).", "heading1": "Next Steps", "source_page_url": "https://gradio.app/guides/custom_HTML_components", "source_page_title": "Building With Blocks - Custom_Html_Components Guide"}, {"text": "Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` constructor. For example:\n\n```python\nwith gr.Blocks(theme=gr.themes.Glass()) as demo:\n    ...  # your code here\n\ndemo.launch()\n```\n\nGradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [Theming guide](/guides/theming-guide) for more details.\n\nFor additional styling ability, you can pass any CSS to your app as a string using the `css=` kwarg of the `Blocks` constructor. You can also pass a pathlib.Path to a css file or a list of such paths to the `css_paths=` kwarg of the `Blocks` constructor.\n\n**Warning**: The use of query selectors in custom JS and CSS is _not_ guaranteed to work across Gradio versions that bind to Gradio's own HTML elements as the Gradio HTML DOM may change. We recommend using query selectors sparingly.\n\nThe base class for the Gradio app is `gradio-container`, so here's an example that changes the background color of the Gradio app:\n\n```python\nwith gr.Blocks(css=\".gradio-container {background-color: red}\") as demo:\n    ...  # your code here\n\ndemo.launch()\n```\n\nIf you'd like to reference external files in your css, preface the file path (which can be a relative or absolute path) with `\"/gradio_api/file=\"`, for example:\n\n```python\nwith gr.Blocks(css=\".gradio-container {background: url('/gradio_api/file=clouds.jpg')}\") as demo:\n    ...  # your code here\n\ndemo.launch()\n```\n\nNote: By default, most files in the host machine are not accessible to users running the Gradio app. As a result, you should make sure that any referenced files (such as `clouds.jpg` here) are either URLs or [allowed paths, as described here](/main/guides/file-access).\n\n\n", "heading1": "Adding custom CSS to your demo", "source_page_url": "https://gradio.app/guides/custom-CSS-and-JS", "source_page_title": "Building With Blocks - Custom Css And Js Guide"}, {"text": "You can use `elem_id` to add an HTML element `id` to any component, and `elem_classes` to add a class or list of classes. This will allow you to select elements more easily with CSS. This approach is also more likely to be stable across Gradio versions as built-in class names or ids may change (however, as mentioned in the warning above, we cannot guarantee complete compatibility between Gradio versions if you use custom CSS as the DOM elements may themselves change).\n\n```python\ncss = \"\"\"\n#warning {background-color: #FFCCCB}\n.feedback textarea {font-size: 24px !important}\n\"\"\"\n\nwith gr.Blocks(css=css) as demo:\n    box1 = gr.Textbox(value=\"Good Job\", elem_classes=\"feedback\")\n    box2 = gr.Textbox(value=\"Failure\", elem_id=\"warning\", elem_classes=\"feedback\")\n\ndemo.launch()\n```\n\nThe CSS `#warning` ruleset will only target the second Textbox, while the `.feedback` ruleset will target both. 
Note that when targeting classes, you might need to add `!important` to your rules to override the default Gradio styles.\n\n", "heading1": "The `elem_id` and `elem_classes` Arguments", "source_page_url": "https://gradio.app/guides/custom-CSS-and-JS", "source_page_title": "Building With Blocks - Custom Css And Js Guide"}, {"text": "There are 3 ways to add javascript code to your Gradio demo:\n\n1. You can add JavaScript code as a string to the `js` parameter of the `Blocks` or `Interface` initializer. This will run the JavaScript code when the demo is first loaded.\n\nBelow is an example of adding custom js to show an animated welcome message when the demo first loads.\n\n$code_blocks_js_load\n$demo_blocks_js_load\n\n\n2. When using `Blocks` and event listeners, events have a `js` argument that can take a JavaScript function as a string and treat it just like a Python event listener function. You can pass both a JavaScript function and a Python function (in which case the JavaScript function is run first) or only Javascript (and set the Python `fn` to `None`). Take a look at the code below:\n\n$code_blocks_js_methods\n$demo_blocks_js_methods\n\n3. Lastly, you can add JavaScript code to the `head` param of the `Blocks` initializer. This will add the code to the head of the HTML document. For example, you can add Google Analytics to your demo like so:\n\n```python\ngoogle_analytics_tracking_id = \"G-XXXXXXXXXX\"\n\nhead = f\"\"\"\n<script async src=\"https://www.googletagmanager.com/gtag/js?id={google_analytics_tracking_id}\"></script>\n<script>\n  window.dataLayer = window.dataLayer || [];\n  function gtag(){{dataLayer.push(arguments);}}\n  gtag('js', new Date());\n  gtag('config', '{google_analytics_tracking_id}');\n</script>\n\"\"\"\n\nwith gr.Blocks(head=head) as demo:\n    gr.HTML(\"<center>My App</center>\")\n\ndemo.launch()\n```\n\nThe `head` parameter accepts any HTML tags you would normally insert into the `<head>` of a page. For example, you can also include `<meta>` tags in `head` in order to update the social sharing preview for your Gradio app like this:\n\n```py\nimport gradio as gr\n\ncustom_head = \"\"\"\n<!-- HTML Meta Tags -->\n<title>Sample App</title>\n<meta name=\"description\" content=\"A sample Gradio app\">\n\n<!-- Facebook Meta Tags -->\n<meta property=\"og:url\" content=\"https://example.com\">\n<meta property=\"og:type\" content=\"website\">\n<meta property=\"og:title\" content=\"Sample App\">\n<meta property=\"og:description\" content=\"A sample Gradio app\">\n<meta property=\"og:image\" content=\"https://example.com/image.png\">\n\n<!-- Twitter Meta Tags -->\n<meta name=\"twitter:card\" content=\"summary_large_image\">\n<meta name=\"twitter:title\" content=\"Sample App\">\n<meta name=\"twitter:description\" content=\"A sample Gradio app\">\n<meta name=\"twitter:image\" content=\"https://example.com/image.png\">\n\"\"\"\n\nwith gr.Blocks(title=\"My App\", head=custom_head) as demo:\n    gr.HTML(\"<center>My App</center>\")\n\ndemo.launch()\n```\n\nNote that injecting custom JS can affect browser behavior and accessibility (e.g. keyboard shortcuts may lead to unexpected behavior if your Gradio app is embedded in another webpage). You should test your interface across different browsers and be mindful of how scripts may interact with browser defaults. Here's an example where pressing `Shift + s` triggers the `click` event of a specific `Button` component if the browser focus is _not_ on an input component (e.g. `Textbox` component):\n\n```python\nimport gradio as gr\n\nshortcut_js = \"\"\"\n<script>\nfunction shortcuts(e) {\n    switch (e.target.tagName.toLowerCase()) {\n        case \"input\":\n        case \"textarea\":\n            break;\n        default:\n            if (e.key.toLowerCase() === \"s\" && e.shiftKey) {\n                document.getElementById(\"my_btn\").click();\n            }\n    }\n}\ndocument.addEventListener('keypress', shortcuts, false);\n</script>\n\"\"\"\n\nwith gr.Blocks(head=shortcut_js) as demo:\n    action_button = gr.Button(value=\"Name\", elem_id=\"my_btn\")\n    textbox = gr.Textbox()\n    action_button.click(lambda: \"button pressed\", None, textbox)\n\ndemo.launch()\n```\n\n", "heading1": "Adding custom JavaScript to your demo", "source_page_url": "https://gradio.app/guides/custom-CSS-and-JS", "source_page_title": "Building With Blocks - Custom Css And Js Guide"}, {"text": "In the example below, we will create a variable number of Textboxes. When the user edits the input Textbox, we create a Textbox for each letter in the input. Try it out below:\n\n$code_render_split_simple\n$demo_render_split_simple\n\nSee how we can now create a variable number of Textboxes using our custom logic - in this case, a simple `for` loop. The `@gr.render` decorator enables this with the following steps:\n\n1. Create a function and attach the @gr.render decorator to it.\n2. Add the input components to the `inputs=` argument of @gr.render, and create a corresponding argument in your function for each component. This function will automatically re-run on any change to a component.\n3. Add all components inside the function that you want to render based on the inputs.\n\nNow whenever the inputs change, the function re-runs, and replaces the components created from the previous function run with the latest run. Pretty straightforward! 
Let's add a little more complexity to this app:\n\n$code_render_split\n$demo_render_split\n\nBy default, `@gr.render` re-runs are triggered by the `.load` listener to the app and the `.change` listener to any input component provided. We can override this by explicitly setting the triggers in the decorator, as we have in this app to only trigger on `input_text.submit` instead. \nIf you are setting custom triggers, and you also want an automatic render at the start of the app, make sure to add `demo.load` to your list of triggers.\n\n", "heading1": "Dynamic Number of Components", "source_page_url": "https://gradio.app/guides/dynamic-apps-with-render-decorator", "source_page_title": "Building With Blocks - Dynamic Apps With Render Decorator Guide"}, {"text": "If you're creating components, you probably want to attach event listeners to them as well. Let's take a look at an example that takes in a variable number of Textboxes as input, and merges all the text into a single box.\n\n$code_render_merge_simple\n$demo_render_merge_simple\n\nLet's take a look at what's happening here:\n\n1. The state variable `text_count` is keeping track of the number of Textboxes to create. By clicking on the Add button, we increase `text_count` which triggers the render decorator.\n2. Note that in every single Textbox we create in the render function, we explicitly set a `key=` argument. This key allows us to preserve the value of this Component between re-renders. If you type in a value in a textbox, and then click the Add button, all the Textboxes re-render, but their values aren't cleared because the `key=` maintains the value of a Component across a render.\n3. We've stored the Textboxes created in a list, and provide this list as input to the merge button event listener. Note that **all event listeners that use Components created inside a render function must also be defined inside that render function**. 
The event listener can still reference Components outside the render function, as we do here by referencing `merge_btn` and `output` which are both defined outside the render function.\n\nJust as with Components, whenever a function re-renders, the event listeners created from the previous render are cleared and the new event listeners from the latest run are attached. \n\nThis allows us to create highly customizable and complex interactions! \n\n", "heading1": "Dynamic Event Listeners", "source_page_url": "https://gradio.app/guides/dynamic-apps-with-render-decorator", "source_page_title": "Building With Blocks - Dynamic Apps With Render Decorator Guide"}, {"text": "The `key=` argument is used to let Gradio know that the same component is being generated when your render function re-runs. This does two things:\n\n1. The same element in the browser is re-used from the previous render for this Component. This gives browser performance gains - as there's no need to destroy and rebuild a component on a render - and preserves any browser attributes that the Component may have had. If your Component is nested within layout items like `gr.Row`, make sure they are keyed as well because the keys of the parents must also match.\n2. Properties that may be changed by the user or by other event listeners are preserved. By default, only the \"value\" of Component is preserved, but you can specify any list of properties to preserve using the `preserved_by_key=` kwarg.\n\nSee the example below:\n\n$code_render_preserve_key\n$demo_render_preserve_key\n\nYou'll see in this example, when you change the `number_of_boxes` slider, there's a new re-render to update the number of box rows. If you click the \"Change Label\" buttons, they change the `label` and `info` properties of the corresponding textbox. You can also enter text in any textbox to change its value. 
If you change the number of boxes after this, the re-renders \"reset\" the `info`, but the `label` and any entered `value` are still preserved.\n\nNote you can also key any event listener, e.g. `button.click(key=...)` if the same listener is being recreated with the same inputs and outputs across renders. This gives performance benefits, and also prevents errors from occurring if an event was triggered in a previous render, then a re-render occurs, and then the previous event finishes processing. By keying your listener, Gradio knows where to send the data properly. \n\n", "heading1": "Closer Look at `key=` parameter", "source_page_url": "https://gradio.app/guides/dynamic-apps-with-render-decorator", "source_page_title": "Building With Blocks - Dynamic Apps With Render Decorator Guide"}, {"text": "Let's look at two examples that use all the features above. First, try out the to-do list app below: \n\n$code_todo_list\n$demo_todo_list\n\nNote that almost the entire app is inside a single `gr.render` that reacts to the tasks `gr.State` variable. This variable is a nested list, which presents some complexity. If you design a `gr.render` to react to a list or dict structure, ensure you do the following:\n\n1. Any event listener that modifies a state variable in a manner that should trigger a re-render must set the state variable as an output. This lets Gradio know to check if the variable has changed behind the scenes. \n2. In a `gr.render`, if a variable in a loop is used inside an event listener function, that variable should be \"frozen\" via setting it to itself as a default argument in the function header. See how we have `task=task` in both `mark_done` and `delete`. This freezes the variable to its \"loop-time\" value.\n\nLet's take a look at one last example that uses everything we learned. Below is an audio mixer. Provide multiple audio tracks and mix them together.\n\n$code_audio_mixer\n$demo_audio_mixer\n\nTwo things to note in this app:\n1. 
Here we provide `key=` to all the components! We need to do this so that if we add another track after setting the values for an existing track, our input values to the existing track do not get reset on re-render.\n2. When there are lots of components of different types and arbitrary counts passed to an event listener, it is easier to use the set and dictionary notation for inputs rather than list notation. Above, we make one large set of all the input `gr.Audio` and `gr.Slider` components when we pass the inputs to the `merge` function. In the function body we query the component values as a dict.\n\nThe `gr.render` expands Gradio's capabilities extensively - see what you can make out of it! \n", "heading1": "Putting it Together", "source_page_url": "https://gradio.app/guides/dynamic-apps-with-render-decorator", "source_page_title": "Building With Blocks - Dynamic Apps With Render Decorator Guide"}, {"text": "Use any of the standard Gradio form components to filter your data. You can do this via event listeners or function-as-value syntax. 
Let's look at the event listener approach first:\n\n$code_plot_guide_filters_events\n$demo_plot_guide_filters_events\n\nAnd this would be the function-as-value approach for the same demo.\n\n$code_plot_guide_filters\n\n", "heading1": "Filters", "source_page_url": "https://gradio.app/guides/filters-tables-and-stats", "source_page_title": "Data Science And Plots - Filters Tables And Stats Guide"}, {"text": "Add `gr.DataFrame` and `gr.Label` to your dashboard for some hard numbers.\n\n$code_plot_guide_tables_stats\n$demo_plot_guide_tables_stats\n", "heading1": "Tables and Stats", "source_page_url": "https://gradio.app/guides/filters-tables-and-stats", "source_page_title": "Data Science And Plots - Filters Tables And Stats Guide"}, {"text": "```python\nfrom sqlalchemy import create_engine\nimport pandas as pd\nimport gradio as gr\n\nengine = create_engine('sqlite:///your_database.db')\n\nwith gr.Blocks() as demo:\n    gr.LinePlot(pd.read_sql_query(\"SELECT time, price from flight_info;\", engine), x=\"time\", y=\"price\")\n```\n\nLet's see a more interactive plot involving filters that modify your SQL query:\n\n```python\nfrom sqlalchemy import create_engine\nimport pandas as pd\nimport gradio as gr\n\nengine = create_engine('sqlite:///your_database.db')\n\nwith gr.Blocks() as demo:\n    origin = gr.Dropdown([\"DFW\", \"DAL\", \"HOU\"], value=\"DFW\", label=\"Origin\")\n\n    gr.LinePlot(lambda origin: pd.read_sql_query(f\"SELECT time, price from flight_info WHERE origin = '{origin}';\", engine), inputs=origin, x=\"time\", y=\"price\")\n```\n\n", "heading1": "SQLite", "source_page_url": "https://gradio.app/guides/connecting-to-a-database", "source_page_title": "Data Science And Plots - Connecting To A Database Guide"}, {"text": "If you're using a different database format, all you have to do is swap out the engine, e.g.\n\n```python\nengine = 
create_engine('mysql://username:password@host:port/database_name')\n```\n\n```python\nengine = create_engine('oracle://username:password@host:port/database_name')\n```", "heading1": "Postgres, mySQL, and other databases", "source_page_url": "https://gradio.app/guides/connecting-to-a-database", "source_page_title": "Data Science And Plots - Connecting To A Database Guide"}, {"text": "Time plots need a datetime column on the x-axis. Here's a simple example with some flight data:\n\n$code_plot_guide_temporal\n$demo_plot_guide_temporal\n\n", "heading1": "Creating a Plot with a pd.Dataframe", "source_page_url": "https://gradio.app/guides/time-plots", "source_page_title": "Data Science And Plots - Time Plots Guide"}, {"text": "You may wish to bin data by time buckets. Use `x_bin` to do so, using a string suffix with \"s\", \"m\", \"h\" or \"d\", such as \"15m\" or \"1d\".\n\n$code_plot_guide_aggregate_temporal\n$demo_plot_guide_aggregate_temporal\n\n", "heading1": "Aggregating by Time", "source_page_url": "https://gradio.app/guides/time-plots", "source_page_title": "Data Science And Plots - Time Plots Guide"}, {"text": "You can use `gr.DateTime` to accept input datetime data. This works well with plots for defining the x-axis range for the data.\n\n$code_plot_guide_datetime\n$demo_plot_guide_datetime\n\nNote how `gr.DateTime` can accept a full datetime string, or a shorthand using `now - [0-9]+[smhd]` format to refer to a past time.\n\nYou will often have many time plots in which case you'd like to keep the x-axes in sync. The `DateTimeRange` custom component keeps a set of datetime plots in sync, and also uses the `.select` listener of plots to allow you to zoom into plots while keeping plots in sync. \n\nBecause it is a custom component, you first need to `pip install gradio_datetimerange`. Then run the following:\n\n$code_plot_guide_datetimerange\n$demo_plot_guide_datetimerange\n\nTry zooming around in the plots and see how DateTimeRange updates. 
All the plots update their `x_lim` in sync. You also have a \"Back\" link in the component to allow you to quickly zoom in and out.\n\n", "heading1": "DateTime Components", "source_page_url": "https://gradio.app/guides/time-plots", "source_page_title": "Data Science And Plots - Time Plots Guide"}, {"text": "In many cases, you're working with live, real-time data, not a static dataframe. In this case, you'd update the plot regularly with a `gr.Timer()`. Assuming there's a `get_data` method that gets the latest dataframe:\n\n```python\nwith gr.Blocks() as demo:\n    timer = gr.Timer(5)\n    plot1 = gr.BarPlot(x=\"time\", y=\"price\")\n    plot2 = gr.BarPlot(x=\"time\", y=\"price\", color=\"origin\")\n\n    timer.tick(lambda: [get_data(), get_data()], outputs=[plot1, plot2])\n```\n\nYou can also use the `every` shorthand to attach a `Timer` to a component that has a function value:\n\n```python\nwith gr.Blocks() as demo:\n    timer = gr.Timer(5)\n    plot1 = gr.BarPlot(get_data, x=\"time\", y=\"price\", every=timer)\n    plot2 = gr.BarPlot(get_data, x=\"time\", y=\"price\", color=\"origin\", every=timer)\n```\n\n\n", "heading1": "RealTime Data", "source_page_url": "https://gradio.app/guides/time-plots", "source_page_title": "Data Science And Plots - Time Plots Guide"}, {"text": "Plots accept a pandas Dataframe as their value. The plot also takes `x` and `y` which represent the names of the columns that represent the x and y axes respectively. 
Here's a simple example:\n\n$code_plot_guide_line\n$demo_plot_guide_line\n\nAll plots have the same API, so you could swap this out with a `gr.ScatterPlot`:\n\n$code_plot_guide_scatter\n$demo_plot_guide_scatter\n\nThe y axis column in the dataframe should have a numeric type, but the x axis column can be anything from strings, numbers, categories, or datetimes.\n\n$code_plot_guide_scatter_nominal\n$demo_plot_guide_scatter_nominal\n\n", "heading1": "Creating a Plot with a pd.Dataframe", "source_page_url": "https://gradio.app/guides/creating-plots", "source_page_title": "Data Science And Plots - Creating Plots Guide"}, {"text": "You can break out your plot into series using the `color` argument.\n\n$code_plot_guide_series_nominal\n$demo_plot_guide_series_nominal\n\nIf you wish to assign series-specific colors, use the `color_map` arg, e.g. `gr.ScatterPlot(..., color_map={'white': '#FF9988', 'asian': '#88EEAA', 'black': '#333388'})`\n\nThe color column can be a numeric type as well.\n\n$code_plot_guide_series_quantitative\n$demo_plot_guide_series_quantitative\n\n", "heading1": "Breaking out Series by Color", "source_page_url": "https://gradio.app/guides/creating-plots", "source_page_title": "Data Science And Plots - Creating Plots Guide"}, {"text": "You can aggregate values into groups using the `x_bin` and `y_aggregate` arguments. If your x-axis is numeric, providing an `x_bin` will create a histogram-style binning:\n\n$code_plot_guide_aggregate_quantitative\n$demo_plot_guide_aggregate_quantitative\n\nIf your x-axis is a string type instead, they will act as the category bins automatically:\n\n$code_plot_guide_aggregate_nominal\n$demo_plot_guide_aggregate_nominal\n\n", "heading1": "Aggregating Values", "source_page_url": "https://gradio.app/guides/creating-plots", "source_page_title": "Data Science And Plots - Creating Plots Guide"}, {"text": "You can use the `.select` listener to select regions of a plot. 
Click and drag on the plot below to select part of the plot.\n\n$code_plot_guide_selection\n$demo_plot_guide_selection\n\nYou can combine this with the `.double_click` listener to create some zoom in/out effects by changing `x_lim` which sets the bounds of the x-axis:\n\n$code_plot_guide_zoom\n$demo_plot_guide_zoom\n\nIf you had multiple plots with the same x column, your event listeners could target the x limits of all other plots so that the x-axes stay in sync.\n\n$code_plot_guide_zoom_sync\n$demo_plot_guide_zoom_sync\n\n", "heading1": "Selecting Regions", "source_page_url": "https://gradio.app/guides/creating-plots", "source_page_title": "Data Science And Plots - Creating Plots Guide"}, {"text": "Take a look at how you can have an interactive dashboard where the plots are functions of other Components.\n\n$code_plot_guide_interactive\n$demo_plot_guide_interactive\n\nIt's that simple to filter and control the data presented in your visualization!", "heading1": "Making an Interactive Dashboard", "source_page_url": "https://gradio.app/guides/creating-plots", "source_page_title": "Data Science And Plots - Creating Plots Guide"}, {"text": "When a user closes their browser tab, Gradio will automatically delete any `gr.State` variables associated with that user session after 60 minutes. If the user connects again within those 60 minutes, no state will be deleted.\n\nYou can control the deletion behavior further with the following two parameters of `gr.State`:\n\n1. `delete_callback` - An arbitrary function that will be called when the variable is deleted. This function must take the state value as input. This function is useful for deleting variables from GPU memory.\n2. `time_to_live` - The number of seconds the state should be stored for after it is created or updated. 
This will delete variables before the session is closed, so it's useful for clearing state for potentially long-running sessions.\n\n", "heading1": "Automatic deletion of `gr.State`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Your Gradio application will save uploaded and generated files to a special directory called the cache directory. Gradio uses a hashing scheme to ensure that duplicate files are not saved to the cache but over time the size of the cache will grow (especially if your app goes viral \ud83d\ude09).\n\nGradio can periodically clean up the cache for you if you specify the `delete_cache` parameter of `gr.Blocks()`, `gr.Interface()`, or `gr.ChatInterface()`. \nThis parameter is a tuple of the form `[frequency, age]` both expressed in number of seconds.\nEvery `frequency` seconds, the temporary files created by this Blocks instance will be deleted if more than `age` seconds have passed since the file was created. \nFor example, setting this to (86400, 86400) will delete temporary files every day if they are older than a day.\nAdditionally, the cache will be deleted entirely when the server restarts.\n\n", "heading1": "Automatic cache cleanup via `delete_cache`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Additionally, Gradio now includes a `Blocks.unload()` event, allowing you to run arbitrary cleanup functions when users disconnect (this does not have a 60 minute delay).\nUnlike other gradio events, this event does not accept inputs or outputs.\nYou can think of the `unload` event as the opposite of the `load` event.\n\n", "heading1": "The `unload` event", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "The following demo uses all of these features. 
When a user visits the page, a special unique directory is created for that user.\nAs the user interacts with the app, images are saved to disk in that special directory.\nWhen the user closes the page, the images created in that session are deleted via the `unload` event.\nThe state and files in the cache are cleaned up automatically as well.\n\n$code_state_cleanup\n$demo_state_cleanup", "heading1": "Putting it all together", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "- **1. Static files**. You can designate static files or directories using the `gr.set_static_paths` function. Static files are not copied to the Gradio cache (see below) and will be served directly from your computer. This can help save disk space and reduce the time your app takes to launch but be mindful of possible security implications as any static files are accessible to all users of your Gradio app.\n\n- **2. Files in the `allowed_paths` parameter in `launch()`**. This parameter allows you to pass in a list of additional directories or exact filepaths you'd like to allow users to have access to. (By default, this parameter is an empty list.)\n\n- **3. Files in Gradio's cache**. After you launch your Gradio app, Gradio copies certain files into a temporary cache and makes these files accessible to users. Let's unpack this in more detail below.\n\n\n", "heading1": "Files Gradio allows users to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "First, it's important to understand why Gradio has a cache at all. Gradio copies files to a cache directory before returning them to the frontend. This prevents files from being overwritten by one user while they are still needed by another user of your application. 
For example, if your prediction function returns a video file, then Gradio will move that video to the cache after your prediction function runs and returns a URL the frontend can use to show the video. Any file in the cache is available via URL to all users of your running application.\n\nTip: You can customize the location of the cache by setting the `GRADIO_TEMP_DIR` environment variable to an absolute path, such as `/home/usr/scripts/project/temp/`. \n\nFiles Gradio moves to the cache\n\nGradio moves three kinds of files into the cache:\n\n1. Files specified by the developer before runtime, e.g. cached examples, default values of components, or files passed into parameters such as the `avatar_images` of `gr.Chatbot`\n\n2. File paths returned by a prediction function in your Gradio application, if they ALSO meet one of the conditions below:\n\n* It is in the `allowed_paths` parameter of the `Blocks.launch` method.\n* It is in the current working directory of the Python interpreter.\n* It is in the temp directory obtained by `tempfile.gettempdir()`.\n\n**Note:** files in the current working directory whose name starts with a period (`.`) will not be moved to the cache, even if they are returned from a prediction function, since they often contain sensitive information. \n\nIf none of these criteria are met, the prediction function that is returning that file will raise an exception instead of moving the file to cache. Gradio performs this check so that arbitrary files on your machine cannot be accessed.\n\n3. Files uploaded by a user to your Gradio app (e.g. through the `File` or `Image` input components).\n\nTip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` p
through the `File` or `Image` input components).\n\nTip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` parameter.\n\n", "heading1": "The Gradio cache", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "While running, Gradio apps will NOT ALLOW users to access:\n\n- **Files that you explicitly block via the `blocked_paths` parameter in `launch()`**. You can pass in a list of additional directories or exact filepaths to the `blocked_paths` parameter in `launch()`. This parameter takes precedence over the files that Gradio exposes by default, or by the `allowed_paths` parameter or the `gr.set_static_paths` function.\n\n- **Any other paths on the host machine**. Users should NOT be able to access other arbitrary paths on the host.\n\n", "heading1": "The files Gradio will not allow others to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Sharing your Gradio application will also allow users to upload files to your computer or server. You can set a maximum file size for uploads to prevent abuse and to preserve disk space. You can do this with the `max_file_size` parameter of `.launch`. For example, the following two code snippets limit file uploads to 5 megabytes per file.\n\n```python\nimport gradio as gr\n\ndemo = gr.Interface(lambda x: x, \"image\", \"image\")\n\ndemo.launch(max_file_size=\"5mb\")\n# or\ndemo.launch(max_file_size=5 * gr.FileSize.MB)\n```\n\n", "heading1": "Uploading Files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "* Set a `max_file_size` for your application.\n* Do not return arbitrary user input from a function that is connected to a file-based output component (`gr.Image`, `gr.File`, etc.). 
For example, the following interface would allow anyone to move an arbitrary file in your local directory to the cache: `gr.Interface(lambda s: s, \"text\", \"file\")`. This is because the user input is treated as an arbitrary file path. \n* Make `allowed_paths` as small as possible. If a path in `allowed_paths` is a directory, any file within that directory can be accessed. Make sure the entries of `allowed_paths` only contain files related to your application.\n* Run your Gradio application from the same directory the application file is located in. This will narrow the scope of files Gradio will be allowed to move into the cache. For example, prefer `python app.py` to `python Users/sources/project/app.py`.\n\n\n", "heading1": "Best Practices", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Both `gr.set_static_paths` and the `allowed_paths` parameter in launch expect absolute paths. Below is a minimal example to display a local `.png` image file in an HTML block.\n\n```txt\n\u251c\u2500\u2500 assets\n\u2502 \u2514\u2500\u2500 logo.png\n\u2514\u2500\u2500 app.py\n```\nFor the example directory structure, `logo.png` and any other files in the `assets` folder can be accessed from your Gradio app in `app.py` as follows:\n\n```python\nfrom pathlib import Path\n\nimport gradio as gr\n\ngr.set_static_paths(paths=[Path.cwd().absolute()/\"assets\"])\n\nwith gr.Blocks() as demo:\n gr.HTML(\"\")\n\ndemo.launch()\n```\n", "heading1": "Example: Accessing local files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "You can initialize the `I18n` class with multiple language dictionaries to add custom translations:\n\n```python\nimport gradio as gr\n\n# Create an I18n instance with translations for multiple languages\ni18n = gr.I18n(\n en={\"greeting\": \"Hello, welcome to my app!\", \"submit\": 
\"Submit\"},\n es={\"greeting\": \"\u00a1Hola, bienvenido a mi aplicaci\u00f3n!\", \"submit\": \"Enviar\"},\n fr={\"greeting\": \"Bonjour, bienvenue dans mon application!\", \"submit\": \"Soumettre\"}\n)\n\nwith gr.Blocks() as demo:\n # Use the i18n method to translate the greeting\n gr.Markdown(i18n(\"greeting\"))\n with gr.Row():\n input_text = gr.Textbox(label=\"Input\")\n output_text = gr.Textbox(label=\"Output\")\n \n submit_btn = gr.Button(i18n(\"submit\"))\n\n# Pass the i18n instance to the launch method\ndemo.launch(i18n=i18n)\n```\n\n", "heading1": "Setting Up Translations", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "When you use the `i18n` instance with a translation key, Gradio will show the corresponding translation to users based on their browser's language settings or the language they've selected in your app.\n\nIf a translation isn't available for the user's locale, the system will fall back to English (if available) or display the key itself.\n\n", "heading1": "How It Works", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "Locale codes should follow the BCP 47 format (e.g., 'en', 'en-US', 'zh-CN'). The `I18n` class will warn you if you use an invalid locale code.\n\n", "heading1": "Valid Locale Codes", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "The following component properties typically support internationalization:\n\n- `description`\n- `info`\n- `title`\n- `placeholder`\n- `value`\n- `label`\n\nNote that support may vary depending on the component, and some properties might have exceptions where internationalization is not applicable. 
You can check this by referring to the parameter's type hint: if it contains `I18nData`, the property supports internationalization.", "heading1": "Supported Component Properties", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "By default, Gradio automatically generates a navigation bar for multipage apps that displays all your pages with \"Home\" as the title for the main page. You can customize the navbar behavior using the `gr.Navbar` component.\n\nPer-Page Navbar Configuration\n\nYou can have different navbar configurations for each page of your app:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n # Navbar for the main page\n navbar = gr.Navbar(\n visible=True,\n main_page_name=\"Dashboard\",\n value=[(\"About\", \"https://example.com/about\")]\n )\n \n gr.Textbox(label=\"Main page content\")\n\nwith demo.route(\"Settings\"):\n # Different navbar for the Settings page\n navbar = gr.Navbar(\n visible=True,\n main_page_name=\"Home\",\n value=[(\"Documentation\", \"https://docs.example.com\")]\n )\n gr.Textbox(label=\"Settings page\")\n\ndemo.launch()\n```\n\n\n**Important Notes:**\n- You can have one `gr.Navbar` component per page. 
Each page's navbar configuration is independent.\n- The `main_page_name` parameter customizes the title of the home page link in the navbar.\n- The `value` parameter allows you to add additional links to the navbar, which can be internal pages or external URLs.\n- If no `gr.Navbar` component is present on a page, the default navbar behavior is used (visible with \"Home\" as the home page title).\n- You can update the navbar properties using standard Gradio event handling, just like with any other component.\n\nHere's an example that demonstrates the last point:\n\n$code_navbar_customization\n\n", "heading1": "Customizing the Navbar", "source_page_url": "https://gradio.app/guides/multipage-apps", "source_page_title": "Additional Features - Multipage Apps Guide"}, {"text": "Gradio demos can be easily shared publicly by setting `share=True` in the `launch()` method. Like this:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(share=True)  # Share your demo with just 1 extra parameter \ud83d\ude80\n```\n\nThis generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on), you don't have to worry about packaging any dependencies.\n\n![sharing](https://github.com/gradio-app/gradio/blob/main/guides/assets/sharing.svg?raw=true)\n\n\nA share link usually looks something like this: **https://07ff8706ab.gradio.live**. Although the link is served through the Gradio Share Servers, these servers are only a proxy for your local server, and do not store any data sent through your app. Share links expire after 1 week. 
(It is [also possible to set up your own Share Server](https://github.com/huggingface/frp/) on your own cloud server to overcome this restriction.)\n\nTip: Keep in mind that share links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. Or you can [add authentication to your Gradio app](authentication) as discussed below.\n\nNote that by default, `share=False`, which means that your server is only running locally. (This is the default, except in Google Colab notebooks, where share links are automatically created). As an alternative to using share links, you can use [SSH port-forwarding](https://www.ssh.com/ssh/tunneling/example) to share your local server with specific users.\n\n\n", "heading1": "Sharing Demos", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "If you'd like to have a permanent link to your Gradio demo on the internet, use Hugging Face Spaces. [Hugging Face Spaces](http://huggingface.co/spaces/) provides the infrastructure to permanently host your machine learning model for free!\n\nAfter you have [created a free Hugging Face account](https://huggingface.co/join), you have two methods to deploy your Gradio app to Hugging Face Spaces:\n\n1. From terminal: run `gradio deploy` in your app directory. The CLI will gather some basic metadata, upload all the files in the current directory (respecting any `.gitignore` file that may be present in the root of the directory), and then launch your app on Spaces. To update your Space, you can re-run this command or enable the GitHub Actions option in the CLI to automatically update the Space on `git push`.\n\n2. 
From your browser: Drag and drop a folder containing your Gradio model and all related files [here](https://huggingface.co/new-space). See [this guide on how to host on Hugging Face Spaces](https://huggingface.co/blog/gradio-spaces) for more information, or watch the embedded video:\n\n\n\n", "heading1": "Hosting on HF Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can add a button to your Gradio app that creates a unique URL you can use to share your app and all components **as they currently are** with others. This is useful for sharing unique and interesting generations from your application, or for saving a snapshot of your app at a particular point in time.\n\nTo add a deep link button to your app, place the `gr.DeepLinkButton` component anywhere in your app.\nFor the URL to be accessible to others, your app must be available at a public URL. So be sure to host your app somewhere like Hugging Face Spaces or use the `share=True` parameter when launching your app.\n\nLet's see an example of how this works. Here's a simple Gradio chat app that uses the `gr.DeepLinkButton` component. After a couple of messages, click the deep link button and paste it into a new browser tab to see the app as it is at that point in time.\n\n$code_deep_link\n$demo_deep_link\n\n\n", "heading1": "Sharing Deep Links", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Once you have hosted your app on Hugging Face Spaces (or on your own server), you may want to embed the demo on a different website, such as your blog or your portfolio. Embedding an interactive demo allows people to try out the machine learning model that you have built, without needing to download or install anything \u2014 right in their browser! 
The best part is that you can embed interactive demos even in static websites, such as GitHub Pages.\n\nThere are two ways to embed your Gradio demos. You can find quick links to both options directly on the Hugging Face Space page, in the \"Embed this Space\" dropdown option:\n\n![Embed this Space dropdown option](https://github.com/gradio-app/gradio/blob/main/guides/assets/embed_this_space.png?raw=true)\n\nEmbedding with Web Components\n\nWeb components typically offer a better experience to users than IFrames. Web components load lazily, meaning that they won't slow down the loading time of your website, and they automatically adjust their height based on the size of the Gradio app.\n\nTo embed with Web Components:\n\n1. Import the gradio JS library into your site by adding the script below (replace {GRADIO_VERSION} in the URL with the library version of Gradio you are using).\n\n```html\n\n```\n\n2. Add\n\n```html\n\n```\n\nelement where you want to place the app. Set the `src=` attribute to your Space's embed URL, which you can find in the \"Embed this Space\" button. For example:\n\n```html\n\n```\n\n\n\nYou can see examples of h
Example: `gradio/Echocardiogram-Segmentation`. If this attribute is provided, then `src` does not need to be provided.\n- `control_page_title`: a boolean designating whether the html title of the page should be set to the title of the Gradio app (by default `\"false\"`)\n- `initial_height`: the initial height of the web component while it is loading the Gradio app (by default `\"300px\"`). Note that the final height is set based on the size of the Gradio app.\n- `container`: whether to show the border frame and information about where the Space is hosted (by default `\"true\"`)\n- `info`: whether to show just the information about where the Space is hosted underneath the embedded app (by default `\"true\"`)\n- `autoscroll`: whether to autoscroll to the output when prediction has finished (by default `\"false\"`)\n- `eager`: whether to load the Gradio app as soon as the page loads (by default `\"false\"`)\n- `theme_mode`: whether to use the `dark`, `light`, or default `system` theme mode (by default `\"system\"`)\n- `render`: an event that is triggered once the embedded space has finished rendering.\n\nHere's an example of how to use these attributes to create a Gradio app that does not lazy load and has an initial height of 0px.\n\n```html\n\n```\n\nHere's another example of how to use the `render` event. An event listener is used to capture the `render` event and will call the `handleLoadComplete()` function once rendering is complete.\n\n```html\n\n```\n\n_Note: While Gradio's CSS will never impact the embedding page, the embedding page can affect the style of the embedded Gradio app. Make sure that any CSS in the parent page isn't so general that it could also apply to the embedded Gradio app and cause the styling to break. Element selectors such as `header { ... 
}` will be the most likely to cause issues._\n\nEmbedding with IFrames\n\nTo embed with IFrames instead (if you cannot add JavaScript to your website, for example), add this element:\n\n```html\n\n```\n\nAgain, set the `src=` attribute to your Space's embed URL, which you can find in the \"Embed this Space\" button.\n\nNote: if you use IFrames, you'll probably want to add a fixed `height` attribute and set `style=\"border:0;\"` to remove the border. In addition, if your app requires permissions such as access to the webcam or the microphone, you'll need to provide that as well using the `allow` attribute.\n\n", "heading1": "Embedding Hosted Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can use almost any Gradio app as an API! In the footer of a Gradio app [like this one](https://huggingface.co/spaces/gradio/hello_world), you'll see a \"Use via API\" link.\n\n![Use via API](https://github.com/gradio-app/gradio/blob/main/guides/assets/use_via_api.png?raw=true)\n\nThis is a page that lists the endpoints that can be used to query the Gradio app, via our supported clients: either [the Python client](https://gradio.app/guides/getting-started-with-the-python-client/), or [the JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). For each endpoint, Gradio automatically generates the parameters and their types, as well as example inputs, like this.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe endpoints are automatically created when you launch a Gradio application. If you are using Gradio `Blocks`, you can also name each event listener, such as\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\nThis will add and document the endpoint `/addition/` to the automatically generated API page. 
Read more about the [API page here](./view-api-page).\n\n", "heading1": "API Page", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "When a user makes a prediction to your app, you may need the underlying network request, in order to get the request headers (e.g. for advanced authentication), log the client's IP address, get the query parameters, or for other reasons. Gradio supports this in a similar manner to FastAPI: simply add a function parameter whose type hint is `gr.Request` and Gradio will pass in the network request as that parameter. Here is an example:\n\n```python\nimport gradio as gr\n\ndef echo(text, request: gr.Request):\n if request:\n print(\"Request headers dictionary:\", request.headers)\n print(\"IP address:\", request.client.host)\n print(\"Query parameters:\", dict(request.query_params))\n return text\n\nio = gr.Interface(echo, \"textbox\", \"textbox\").launch()\n```\n\nNote: if your function is called directly instead of through the UI (this happens, for\nexample, when examples are cached, or when the Gradio app is called via API), then `request` will be `None`.\nYou should handle this case explicitly to ensure that your app does not throw any errors. 
That is why\nwe have the explicit check `if request`.\n\n", "heading1": "Accessing the Network Request Directly", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "In some cases, you might have an existing FastAPI app, and you'd like to add a path for a Gradio demo.\nYou can easily do this with `gradio.mount_gradio_app()`.\n\nHere's a complete example:\n\n$code_custom_path\n\nNote that this approach also allows you to run your Gradio apps on custom paths (`http://localhost:8000/gradio` in the example above).\n\n\n", "heading1": "Mounting Within Another FastAPI App", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Password-protected app\n\nYou may wish to put an authentication page in front of your app to limit who can open your app. With the `auth=` keyword argument in the `launch()` method, you can provide a tuple with a username and password, or a list of acceptable username/password tuples. Here's an example that provides password-based authentication for a single user named \"admin\":\n\n```python\ndemo.launch(auth=(\"admin\", \"pass1234\"))\n```\n\nFor more complex authentication handling, you can even pass a function that takes a username and password as arguments, and returns `True` to allow access, `False` otherwise.\n\nHere's an example of a function that accepts any login where the username and password are the same:\n\n```python\ndef same_auth(username, password):\n return username == password\ndemo.launch(auth=same_auth)\n```\n\nIf you have multiple users, you may wish to customize the content that is shown depending on the user that is logged in. You can retrieve the logged-in user by [accessing the network request directly](accessing-the-network-request-directly) as discussed above, and then reading the `.username` attribute of the request. 
Here's an example:\n\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n m = gr.Markdown()\n demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Abubakar\", \"Abubakar\"), (\"Ali\", \"Ali\")])\n```\n\nNote: For authentication to work properly, third party cookies must be enabled in your browser. This is not the case by default for Safari or for Chrome Incognito Mode.\n\nIf users visit the `/logout` page of your Gradio app, they will automatically be logged out and session cookies deleted. This allows you to add logout functionality to your Gradio app as well. Let's update the previous example to include a log out button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as ", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": " Let's update the previous example to include a log out button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n m = gr.Markdown()\n logout_button = gr.Button(\"Logout\", link=\"/logout\")\n demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Pete\", \"Pete\"), (\"Dawood\", \"Dawood\")])\n```\nBy default, visiting `/logout` logs the user out from **all sessions** (e.g. if they are logged in from multiple browsers or devices, all will be signed out). If you want to log out only from the **current session**, add the query parameter `all_session=false` (i.e. `/logout?all_session=false`).\n\nNote: Gradio's built-in authentication provides a straightforward and basic layer of access control but does not offer robust security features for applications that require stringent access controls (e.g. 
multi-factor authentication, rate limiting, or automatic lockout policies).\n\nOAuth (Login via Hugging Face)\n\nGradio natively supports OAuth login via Hugging Face. In other words, you can easily add a _\"Sign in with Hugging Face\"_ button to your demo, which allows you to get a user's HF username as well as other information from their HF profile. Check out [this Space](https://huggingface.co/spaces/Wauplin/gradio-oauth-demo) for a live demo.\n\nTo enable OAuth, you must set `hf_oauth: true` as Space metadata in your README.md file. This will register your Space\nas an OAuth application on Hugging Face. Next, you can use `gr.LoginButton` to add a login button to\nyour Gradio app. Once a user is logged in with their HF account, you can retrieve their profile by adding a parameter of type\n`gr.OAuthProfile` to any Gradio function. The user profile will be automatically injected as a parameter value. If you want\nto perform actions on behalf of the user (e.g. list the user's private repos, create repo, etc.), you can retrieve the user\ntoken by adding a parameter of type `gr.OAuthToken`. You must def
\n\n
\n\nUsers can revoke access to their profile at any time in their [settings](https://huggingface.co/settings/connected-applications).\n\nAs seen above, OAuth features are available only when your app runs in a Space. However, you often need to test your app\nlocally before deploying it. To test OAuth features locally, your machine must be logged in to Hugging Face. Please run `huggingface-cli login` or set `HF_TOKEN` as an environment variable with one of your access tokens. You can generate a new token in your settings page (https://huggingface.co/settings/tokens). Then, clicking on the `gr.LoginButton` will log in to your local Hugging Face profile, allowing you to debug your app with your Hugging Face account before deploying it to a Space.\n\n**Security Note**: It is important to note that adding a `gr.LoginButton` does not restrict users from using your app, in the same way that adding [username-password authentication](/guides/sharing-your-app#password-protected-app) does. This means that users of your app who have not logged in with Hugging Face can still access and run events in your Gradio app -- the difference is that the `gr.OAuthProfile` or `gr.OAuthToken` will be `None` in the corresponding functions.\n\n\nOAuth (with external providers)\n\nIt is also possible to authenticate with external OAuth pr
This function should be passed to the `auth_dependency` parameter in `gr.mount_gradio_app`.\n\nSimilar to [FastAPI dependency functions](https://fastapi.tiangolo.com/tutorial/dependencies/), the function specified by `auth_dependency` will run before any Gradio-related route in your FastAPI app. The function should accept a single parameter, the FastAPI `Request`, and return either a string (representing a user's username) or `None`. If a string is returned, the user will be able to access the Gradio-related routes in your FastAPI app.\n\nFirst, let's show a simplistic example to illustrate the `auth_dependency` parameter:\n\n```python\nfrom fastapi import FastAPI, Request\nimport uvicorn\nimport gradio as gr\n\napp = FastAPI()\n\ndef get_user(request: Request):\n return request.headers.get(\"user\")\n\ndemo = gr.Interface(lambda s: f\"Hello {s}!\", \"textbox\", \"textbox\")\n\napp = gr.mount_gradio_app(app, demo, path=\"/demo\", auth_dependency=get_user)\n\nif __name__ == '__main__':\n uvicorn.run(app)\n```\n\nIn this example, only requests that include a \"user\" header will be allowed to access the Gradio app. 
Of course, this does not add much security, since any user can add this header in their request.\n\nHere's a more complete example showing how to add Google OAuth to a Gradio app (assuming you've already created OAuth Credentials on the [Google Developer Console](https://console.cloud.google.com/project)):\n\n```python\nimport os\nfrom authlib.integrations.starlette_client import OAuth, OAuthError\nfrom fastapi import FastA", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "PI, Depends, Request\nfrom starlette.config import Config\nfrom starlette.responses import RedirectResponse\nfrom starlette.middleware.sessions import SessionMiddleware\nimport uvicorn\nimport gradio as gr\n\napp = FastAPI()\n\n# Replace these with your own OAuth settings\nGOOGLE_CLIENT_ID = \"...\"\nGOOGLE_CLIENT_SECRET = \"...\"\nSECRET_KEY = \"...\"\n\nconfig_data = {'GOOGLE_CLIENT_ID': GOOGLE_CLIENT_ID, 'GOOGLE_CLIENT_SECRET': GOOGLE_CLIENT_SECRET}\nstarlette_config = Config(environ=config_data)\noauth = OAuth(starlette_config)\noauth.register(\n    name='google',\n    server_metadata_url='https://accounts.google.com/.well-known/openid-configuration',\n    client_kwargs={'scope': 'openid email profile'},\n)\n\nSECRET_KEY = os.environ.get('SECRET_KEY') or \"a_very_secret_key\"\napp.add_middleware(SessionMiddleware, secret_key=SECRET_KEY)\n\n# Dependency to get the current user\ndef get_user(request: Request):\n    user = request.session.get('user')\n    if user:\n        return user['name']\n    return None\n\n@app.get('/')\ndef public(user: dict = Depends(get_user)):\n    if user:\n        return RedirectResponse(url='/gradio')\n    else:\n        return 
RedirectResponse(url='/login-demo')\n\n@app.route('/logout')\nasync def logout(request: Request):\n    request.session.pop('user', None)\n    return RedirectResponse(url='/')\n\n@app.route('/login')\nasync def login(request: Request):\n    redirect_uri = request.url_for('auth')\n    # If your app is running on https, you should ensure that the\n    # `redirect_uri` is https, e.g. uncomment the following lines:\n    #\n    # from urllib.parse import urlparse, urlunparse\n    # redirect_uri = urlunparse(urlparse(str(redirect_uri))._replace(scheme='https'))\n    return await oauth.google.authorize_redirect(request, redirect_uri)\n\n@app.route('/auth')\nasync def auth(request: Reque", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "st):\n    try:\n        access_token = await oauth.google.authorize_access_token(request)\n    except OAuthError:\n        return RedirectResponse(url='/')\n    request.session['user'] = dict(access_token)[\"userinfo\"]\n    return RedirectResponse(url='/')\n\nwith gr.Blocks() as login_demo:\n    gr.Button(\"Login\", link=\"/login\")\n\napp = gr.mount_gradio_app(app, login_demo, path=\"/login-demo\")\n\ndef greet(request: gr.Request):\n    return f\"Welcome to Gradio, {request.username}\"\n\nwith gr.Blocks() as main_demo:\n    m = gr.Markdown(\"Welcome to Gradio!\")\n    gr.Button(\"Logout\", link=\"/logout\")\n    main_demo.load(greet, None, m)\n\napp = gr.mount_gradio_app(app, main_demo, path=\"/gradio\", auth_dependency=get_user)\n\nif __name__ == '__main__':\n    uvicorn.run(app)\n```\n\nThere are actually two separate Gradio apps in this example! 
One simply displays a login button (this demo is accessible to any user), while the main demo is only accessible to logged-in users. You can try this example out on [this Space](https://huggingface.co/spaces/gradio/oauth-example).\n\n", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Gradio apps can function as MCP (Model Context Protocol) servers, allowing LLMs to use your app's functions as tools. By simply setting `mcp_server=True` in the `.launch()` method, Gradio automatically converts your app's functions into MCP tools that can be called by MCP clients like Claude Desktop, Cursor, or Cline. The server exposes tools based on your function names, docstrings, and type hints, and can handle file uploads, authentication headers, and progress updates. You can also create MCP-only functions using `gr.api` and expose resources and prompts using decorators. For a comprehensive guide on building MCP servers with Gradio, see [Building an MCP Server with Gradio](https://www.gradio.app/guides/building-mcp-server-with-gradio).\n\n", "heading1": "MCP Servers", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "When publishing your app publicly, and making it available via API or via MCP server, you might want to set rate limits to prevent users from abusing your app. You can identify users using their IP address (using the `gr.Request` object [as discussed above](#accessing-the-network-request-directly)) or, if they are logged in via Hugging Face OAuth, using their username. 
To see a complete example of how to set rate limits, please see [this Gradio app](https://github.com/gradio-app/gradio/blob/main/demo/rate_limit/run.py).\n\n", "heading1": "Rate Limits", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "By default, Gradio collects certain analytics to help us better understand the usage of the `gradio` library. This includes the following information:\n\n* What environment the Gradio app is running on (e.g. Colab Notebook, Hugging Face Spaces)\n* What input/output components are being used in the Gradio app\n* Whether the Gradio app is utilizing certain advanced features, such as `auth` or `show_error`\n* The IP address which is used solely to measure the number of unique developers using Gradio\n* The version of Gradio that is running\n\nNo information is collected from _users_ of your Gradio app. If you'd like to disable analytics altogether, you can do so by setting the `analytics_enabled` parameter to `False` in `gr.Blocks`, `gr.Interface`, or `gr.ChatInterface`. Or, you can set the GRADIO_ANALYTICS_ENABLED environment variable to `\"False\"` to apply this to all Gradio apps created across your system.\n\n*Note*: this reflects the analytics policy as of `gradio>=4.32.0`.\n\n", "heading1": "Analytics", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "[Progressive Web Apps (PWAs)](https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps) are web applications that are regular web pages or websites, but can appear to the user like installable platform-specific applications.\n\nGradio apps can be easily served as PWAs by setting the `pwa=True` parameter in the `launch()` method. 
Here's an example:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(pwa=True)  # Launch your app as a PWA\n```\n\nThis will generate a PWA that can be installed on your device. Here's how it looks:\n\n![Installing PWA](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/install-pwa.gif)\n\nWhen you specify `favicon_path` in the `launch()` method, the icon will be used as the app's icon. Here's an example:\n\n```python\ndemo.launch(pwa=True, favicon_path=\"./hf-logo.svg\")  # Use a custom icon for your PWA\n```\n\n![Custom PWA Icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/pwa-favicon.png)\n", "heading1": "Progressive Web App (PWA)", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Gradio can stream audio and video directly from your generator function.\nThis lets your user hear your audio or see your video nearly as soon as it's `yielded` by your function.\nAll you have to do is:\n\n1. Set `streaming=True` in your `gr.Audio` or `gr.Video` output component.\n2. Write a Python generator that yields the next \"chunk\" of audio or video.\n3. 
Set `autoplay=True` so that the media starts playing automatically.\n\nFor audio, the next \"chunk\" can be either an `.mp3` or `.wav` file or a `bytes` sequence of audio.\nFor video, the next \"chunk\" has to be either an `.mp4` file or an `h.264`-encoded file with a `.ts` extension.\nFor smooth playback, make sure chunks have consistent lengths and are longer than 1 second.\n\nWe'll finish with some simple examples illustrating these points.\n\nStreaming Audio\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(audio_file):\n    for _ in range(10):\n        sleep(0.5)\n        yield audio_file\n\ngr.Interface(keep_repeating,\n    gr.Audio(sources=[\"microphone\"], type=\"filepath\"),\n    gr.Audio(streaming=True, autoplay=True)\n).launch()\n```\n\nStreaming Video\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(video_file):\n    for _ in range(10):\n        sleep(0.5)\n        yield video_file\n\ngr.Interface(keep_repeating,\n    gr.Video(sources=[\"webcam\"], format=\"mp4\"),\n    gr.Video(streaming=True, autoplay=True)\n).launch()\n```\n\n", "heading1": "Streaming Media", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "For an end-to-end example of streaming media, see the object detection from video [guide](/main/guides/object-detection-from-video) or the streaming AI-generated audio with [transformers](https://huggingface.co/docs/transformers/index) [guide](/main/guides/streaming-ai-generated-audio).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "Let's create a demo where a user can choose a filter to apply to their webcam stream. 
Users can choose an edge-detection filter, a cartoon filter, or simply flip the stream vertically.\n\n$code_streaming_filter\n$demo_streaming_filter\n\nYou will notice that if you change the filter value, it will immediately take effect in the output stream. That is an important difference between stream events and other Gradio events: the input values of a stream can be changed while the stream is being processed. \n\nTip: We set the \"streaming\" parameter of the image output component to \"True\". Doing so lets the server automatically convert our output images into base64 format, a format that is efficient for streaming.\n\n", "heading1": "A Realistic Image Demo", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For some image streaming demos, like the one above, we don't need to display separate input and output components. Our app would look cleaner if we could just display the modified output stream.\n\nWe can do so by just specifying the input image component as the output of the stream event.\n\n$code_streaming_filter_unified\n$demo_streaming_filter_unified\n\n", "heading1": "Unified Image Demos", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "Your streaming function should be stateless. It should take the current input and return its corresponding output. However, there are cases where you may want to keep track of past inputs or outputs. For example, you may want to keep a buffer of the previous `k` inputs to improve the accuracy of your transcription demo. 
You can do this with Gradio's `gr.State()` component.\n\nLet's showcase this with a sample demo:\n\n```python\nimport gradio as gr\n\n# Assumes a `transcribe(audio, history=...)` function is defined elsewhere\ndef transcribe_handler(current_audio, state, transcript):\n    next_text = transcribe(current_audio, history=state)\n    state.append(current_audio)\n    state = state[-3:]\n    return state, transcript + next_text\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            mic = gr.Audio(sources=\"microphone\")\n            state = gr.State(value=[])\n        with gr.Column():\n            transcript = gr.Textbox(label=\"Transcript\")\n    mic.stream(transcribe_handler, [mic, state, transcript], [state, transcript],\n               time_limit=10, stream_every=1)\n\n\ndemo.launch()\n```\n\n", "heading1": "Keeping track of past inputs or outputs", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For an end-to-end example of streaming from the webcam, see the object detection from webcam [guide](/main/guides/object-detection-from-webcam-with-webrtc).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "Client side functions are ideal for updating component properties (like visibility, placeholders, interactive state, or styling). 
\n\nHere's a basic example:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row() as row:\n        btn = gr.Button(\"Hide this row\")\n\n    # This function runs in the browser without a server roundtrip\n    btn.click(\n        lambda: gr.Row(visible=False),\n        None,\n        row,\n        js=True\n    )\n\ndemo.launch()\n```\n\n\n", "heading1": "When to Use Client Side Functions", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Client side functions have some important restrictions:\n* They can only update component properties (not values)\n* They cannot take any inputs\n\nHere are some functions that will work with `js=True`:\n\n```py\n# Simple property updates\nlambda: gr.Textbox(lines=4)\n\n# Multiple component updates\nlambda: [gr.Textbox(lines=4), gr.Button(interactive=False)]\n\n# Using gr.update() for property changes\nlambda: gr.update(visible=True, interactive=False)\n```\n\nWe are working to increase the space of functions that can be transpiled to JavaScript so that they can be run in the browser. [Follow the Groovy library for more info](https://github.com/abidlabs/groovy-transpiler).\n\n\n", "heading1": "Limitations", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Here's a more complete example showing how client side functions can improve the user experience:\n\n$code_todo_list_js\n\n\n", "heading1": "Complete Example", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "When you set `js=True`, Gradio:\n\n1. Transpiles your Python function to JavaScript\n\n2. Runs the function directly in the browser\n\n3. 
Still sends the request to the server (for consistency and to handle any side effects)\n\nThis provides immediate visual feedback while ensuring your application state remains consistent.\n", "heading1": "Behind the Scenes", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "By default, each event listener has its own queue, which handles one request at a time. This can be configured via two arguments:\n\n- `concurrency_limit`: This sets the maximum number of concurrent executions for an event listener. By default, the limit is 1 unless configured otherwise in `Blocks.queue()`. You can also set it to `None` for no limit (i.e., an unlimited number of concurrent executions). For example:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    prompt = gr.Textbox()\n    image = gr.Image()\n    generate_btn = gr.Button(\"Generate Image\")\n    generate_btn.click(image_gen, prompt, image, concurrency_limit=5)\n```\n\nIn the code above, up to 5 requests can be processed simultaneously for this event listener. Additional requests will be queued until a slot becomes available.\n\nIf you want to manage multiple event listeners using a shared queue, you can use the `concurrency_id` argument:\n\n- `concurrency_id`: This allows event listeners to share a queue by assigning them the same ID. For example, if your setup has only 2 GPUs but multiple functions require GPU access, you can create a shared queue for all those functions. 
Here's how that might look:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    prompt = gr.Textbox()\n    image = gr.Image()\n    generate_btn_1 = gr.Button(\"Generate Image via model 1\")\n    generate_btn_2 = gr.Button(\"Generate Image via model 2\")\n    generate_btn_3 = gr.Button(\"Generate Image via model 3\")\n    generate_btn_1.click(image_gen_1, prompt, image, concurrency_limit=2, concurrency_id=\"gpu_queue\")\n    generate_btn_2.click(image_gen_2, prompt, image, concurrency_id=\"gpu_queue\")\n    generate_btn_3.click(image_gen_3, prompt, image, concurrency_id=\"gpu_queue\")\n```\n\nIn this example, all three event listeners share a queue identified by `\"gpu_queue\"`. The queue can handle up to 2 concurrent requests at a time, as defined by the `concurrency_limit`.\n\nNotes\n\n- To ensure unlimited concurrency for an event listener, se", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": "t `concurrency_limit=None`. This is useful if your function is calling e.g. an external API which handles the rate limiting of requests itself.\n- The default concurrency limit for all queues can be set globally using the `default_concurrency_limit` parameter in `Blocks.queue()`. \n\nThese configurations make it easy to manage the queuing behavior of your Gradio app.\n", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": "**API endpoint names**\n\nWhen you create a Gradio application, the API endpoint names are automatically generated based on the function names. You can change this by using the `api_name` parameter in `gr.Interface` or `gr.ChatInterface`. 
If you are using Gradio `Blocks`, you can name each event listener, like this:\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\n**Hiding API endpoints**\n\nWhen building a complex Gradio app, you might want to hide certain API endpoints from appearing on the view API page, e.g. if they correspond to functions that simply update the UI. You can set the `show_api` parameter to `False` in any `Blocks` event listener to achieve this, e.g. \n\n```python\nbtn.click(add, [num1, num2], output, show_api=False)\n```\n\n**Disabling API endpoints**\n\nHiding the API endpoint doesn't disable it. A user can still programmatically call the API endpoint if they know the name. If you want to disable an API endpoint altogether, set `api_name=False`, e.g. \n\n```python\nbtn.click(add, [num1, num2], output, api_name=False)\n```\n\nNote: setting `api_name=False` also means that downstream apps will not be able to load your Gradio app using `gr.load()`, as this function uses the Gradio API under the hood.\n\n**Adding API endpoints**\n\nYou can also add new API routes to your Gradio application that do not correspond to events in your UI.\n\nFor example, in this Gradio application, we add a new route that adds numbers and slices a list:\n\n```py\nimport gradio as gr\nwith gr.Blocks() as demo:\n    with gr.Row():\n        input = gr.Textbox()\n        button = gr.Button(\"Submit\")\n        output = gr.Textbox()\n    def fn(a: int, b: int, c: list[str]) -> tuple[int, str]:\n        return a + b, c[a:b]\n    gr.api(fn, api_name=\"add_and_slice\")\n\n_, url, _ = demo.launch()\n```\n\nThis will create a new route `/add_and_slice` which will show up in the \"view API\" page. 
It can be programmatically called by the Python or JS Clients (discussed below) like this:\n\n```py\nfrom grad", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "io_client import Client\n\nclient = Client(url)\nresult = client.predict(\n    a=3,\n    b=5,\n    c=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n    api_name=\"/add_and_slice\"\n)\nprint(result)\n```\n\n", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "This API page not only lists all of the endpoints that can be used to query the Gradio app, but also shows the usage of both [the Gradio Python client](https://gradio.app/guides/getting-started-with-the-python-client/) and [the Gradio JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). \n\nFor each endpoint, Gradio automatically generates a complete code snippet with the parameters and their types, as well as example inputs, allowing you to immediately test an endpoint. Here's an example showing an image file input and `str` output:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-snippet.png)\n\n\n", "heading1": "The Clients", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "Instead of reading through the view API page, you can also use Gradio's built-in API recorder to generate the relevant code snippet. 
Simply click on the \"API Recorder\" button, use your Gradio app via the UI as you would normally, and then the API Recorder will generate the code using the Clients to recreate all of your interactions programmatically.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/api-recorder.gif)\n\n", "heading1": "The API Recorder \ud83e\ude84", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "The API page also includes instructions on how to use the Gradio app as a Model Context Protocol (MCP) server, which is a standardized way to expose functions as tools so that they can be used by LLMs. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)\n\nFor the MCP server, each tool, its description, and its parameters are listed, along with instructions on how to integrate with popular MCP Clients. Read more about Gradio's [MCP integration here](https://www.gradio.app/guides/building-mcp-server-with-gradio).\n\n", "heading1": "MCP Server", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "You can access the complete OpenAPI (formerly Swagger) specification of your Gradio app's API at the endpoint `/gradio_api/openapi.json`. 
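Because the spec is plain JSON, you can fetch and inspect it with nothing but the standard library. A minimal sketch (the helper names below are our own, and it assumes your app is running locally on the default port):

```python
import json
import urllib.request

def paths_from_spec(spec: dict) -> list[str]:
    """Return the sorted endpoint paths documented in an OpenAPI spec."""
    return sorted(spec.get("paths", {}))

def list_endpoints(base_url: str = "http://127.0.0.1:7860") -> list[str]:
    """Fetch a running Gradio app's OpenAPI spec and list its documented paths."""
    with urllib.request.urlopen(f"{base_url}/gradio_api/openapi.json") as resp:
        return paths_from_spec(json.load(resp))
```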
The OpenAPI specification is a standardized, language-agnostic interface description for REST APIs that enables both humans and computers to discover and understand the capabilities of your service.\n", "heading1": "OpenAPI Specification", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "To add custom buttons to a component, pass a list of `gr.Button()` instances to the `buttons` parameter:\n\n```python\nimport gradio as gr\n\nrefresh_btn = gr.Button(\"Refresh\", variant=\"secondary\", size=\"sm\")\nclear_btn = gr.Button(\"Clear\", variant=\"secondary\", size=\"sm\")\n\ntextbox = gr.Textbox(\n    value=\"Sample text\",\n    label=\"Text Input\",\n    buttons=[refresh_btn, clear_btn]\n)\n```\n\nYou can also mix built-in buttons (as strings) with custom buttons:\n\n```python\ncode = gr.Code(\n    value=\"print('Hello')\",\n    language=\"python\",\n    buttons=[\"copy\", \"download\", refresh_btn, export_btn]\n)\n```\n\n", "heading1": "Basic Usage", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "Custom buttons work just like regular `gr.Button` components. 
You can connect them to Python functions or JavaScript functions using the `.click()` method:\n\nPython Functions\n\n```python\ndef refresh_data():\n    import random\n    return f\"Refreshed: {random.randint(1000, 9999)}\"\n\nrefresh_btn.click(refresh_data, outputs=textbox)\n```\n\nJavaScript Functions\n\n```python\nclear_btn.click(\n    None,\n    inputs=[],\n    outputs=textbox,\n    js=\"() => ''\"\n)\n```\n\nCombined Python and JavaScript\n\nYou can use the same button for both Python and JavaScript logic:\n\n```python\nalert_btn.click(\n    None,\n    inputs=textbox,\n    outputs=[],\n    js=\"(text) => { alert('Text: ' + text); return []; }\"\n)\n```\n\n", "heading1": "Connecting Button Events", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "Here's a complete example showing custom buttons with both Python and JavaScript functions:\n\n$code_textbox_custom_buttons\n\n\n", "heading1": "Complete Example", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "- Custom buttons appear in the component's toolbar, typically in the top-right corner\n- Only the `value` of the Button is used; other attributes, like `icon`, are not used.\n- Buttons are rendered in the order they appear in the `buttons` list\n- Built-in buttons (like \"copy\", \"download\") can be hidden by omitting them from the list\n- Custom buttons work with component events in the same way as regular buttons\n", "heading1": "Notes", "source_page_url": "https://gradio.app/guides/custom-buttons", "source_page_title": "Additional Features - Custom Buttons Guide"}, {"text": "1. `GRADIO_SERVER_PORT`\n\n- **Description**: Specifies the port on which the Gradio app will run.\n- **Default**: `7860`\n- **Example**:\n ```bash\n export GRADIO_SERVER_PORT=8000\n ```\n\n2. `GRADIO_SERVER_NAME`\n\n- **Description**: Defines the host name for the Gradio server. 
To make Gradio accessible from any IP address, set this to `\"0.0.0.0\"`\n- **Default**: `\"127.0.0.1\"` \n- **Example**:\n ```bash\n export GRADIO_SERVER_NAME=\"0.0.0.0\"\n ```\n\n3. `GRADIO_NUM_PORTS`\n\n- **Description**: Defines the number of ports to try when starting the Gradio server.\n- **Default**: `100`\n- **Example**:\n ```bash\n export GRADIO_NUM_PORTS=200\n ```\n\n4. `GRADIO_ANALYTICS_ENABLED`\n\n- **Description**: Whether Gradio should provide analytics.\n- **Default**: `\"True\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_ANALYTICS_ENABLED=\"True\"\n ```\n\n5. `GRADIO_DEBUG`\n\n- **Description**: Enables or disables debug mode in Gradio. If debug mode is enabled, the main thread does not terminate, allowing error messages to be printed in environments such as Google Colab.\n- **Default**: `0`\n- **Example**:\n ```sh\n export GRADIO_DEBUG=1\n ```\n\n6. `GRADIO_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag inputs/outputs in the Gradio interface. See [the Guide on flagging](/guides/using-flagging) for more details.\n- **Default**: `\"manual\"`\n- **Options**: `\"never\"`, `\"manual\"`, `\"auto\"`\n- **Example**:\n ```sh\n export GRADIO_FLAGGING_MODE=\"never\"\n ```\n\n7. `GRADIO_TEMP_DIR`\n\n- **Description**: Specifies the directory where temporary files created by Gradio are stored.\n- **Default**: System default temporary directory\n- **Example**:\n ```sh\n export GRADIO_TEMP_DIR=\"/path/to/temp\"\n ```\n\n8. `GRADIO_ROOT_PATH`\n\n- **Description**: Sets the root path for the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_ROOT_PATH=
Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_ROOT_PATH=\"/myapp\"\n ```\n\n9. `GRADIO_SHARE`\n\n- **Description**: Enables or disables sharing the Gradio app.\n- **Default**: `\"False\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n ```sh\n export GRADIO_SHARE=\"True\"\n ```\n\n10. `GRADIO_ALLOWED_PATHS`\n\n- **Description**: Sets a list of complete filepaths or parent directories that gradio is allowed to serve. Must be absolute paths. Warning: if you provide directories, any files in these directories or their subdirectories are accessible to all users of your app. Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_ALLOWED_PATHS=\"/mnt/sda1,/mnt/sda2\"\n ```\n\n11. `GRADIO_BLOCKED_PATHS`\n\n- **Description**: Sets a list of complete filepaths or parent directories that gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default. Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n ```sh\n export GRADIO_BLOCKED_PATHS=\"/users/x/gradio_app/admin,/users/x/gradio_app/keys\"\n ```\n\n12. `FORWARDED_ALLOW_IPS`\n\n- **Description**: This is not a Gradio-specific environment variable, but rather one used in server configurations, specifically `uvicorn` which is used by Gradio internally. This environment variable is useful when deploying applications behind a reverse proxy. It defines a list of IP addresses that are trusted to forward traffic to your application. When set, the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. 
This means that if you use the `gr.Request` [objec", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "t's](https://www.gradio.app/docs/gradio/request) `client.host` property, it will correctly get the user's IP address instead of the IP address of the reverse proxy server. Note that only trusted IP addresses (i.e. the IP addresses of your reverse proxy servers) should be added, as any server with these IP addresses can modify the `X-Forwarded-For` header and spoof the client's IP address.\n- **Default**: `\"127.0.0.1\"`\n- **Example**:\n ```sh\n export FORWARDED_ALLOW_IPS=\"127.0.0.1,192.168.1.100\"\n ```\n\n13. `GRADIO_CACHE_EXAMPLES`\n\n- **Description**: Whether or not to cache examples by default in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()` when no explicit argument is passed for the `cache_examples` parameter. You can set this environment variable to either the string \"true\" or \"false\".\n- **Default**: `\"false\"`\n- **Example**:\n ```sh\n export GRADIO_CACHE_EXAMPLES=\"true\"\n ```\n\n\n14. `GRADIO_CACHE_MODE`\n\n- **Description**: How to cache examples. Only applies if `cache_examples` is set to `True` either via environment variable or by an explicit parameter, AND no explicit argument is passed for the `cache_mode` parameter in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`. Can be set to either the strings \"lazy\" or \"eager\". If \"lazy\", examples are cached after their first use for all users of the app. If \"eager\", all examples are cached at app launch.\n\n- **Default**: `\"eager\"`\n- **Example**:\n ```sh\n export GRADIO_CACHE_MODE=\"lazy\"\n ```\n\n\n15. 
15. `GRADIO_EXAMPLES_CACHE`

- **Description**: If you set `cache_examples=True` in `gr.Interface()`, `gr.ChatInterface()`, or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk. By default, this is in the `.gradio/cached_examples/` subdirectory within your app's working directory. You can customize the location of cached example files created by Gradio by setting the environment variable `GRADIO_EXAMPLES_CACHE` to an absolute path or a path relative to your working directory.
- **Default**: `".gradio/cached_examples/"`
- **Example**:
 ```sh
 export GRADIO_EXAMPLES_CACHE="custom_cached_examples/"
 ```

16. `GRADIO_SSR_MODE`

- **Description**: Controls whether server-side rendering (SSR) is enabled. When enabled, the initial HTML is rendered on the server rather than the client, which can improve initial page load performance and SEO.
- **Default**: `"False"` (except on Hugging Face Spaces, where it defaults to `"True"`)
- **Options**: `"True"`, `"False"`
- **Example**:
 ```sh
 export GRADIO_SSR_MODE="True"
 ```

17. `GRADIO_NODE_SERVER_NAME`

- **Description**: Defines the host name for the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)
- **Default**: `GRADIO_SERVER_NAME` if it is set, otherwise `"127.0.0.1"`
- **Example**:
 ```sh
 export GRADIO_NODE_SERVER_NAME="0.0.0.0"
 ```

18. `GRADIO_NODE_NUM_PORTS`

- **Description**: Defines the number of ports to try when starting the Gradio node server.
(Only applies if `ssr_mode` is set to `True`.)
- **Default**: `100`
- **Example**:
 ```sh
 export GRADIO_NODE_NUM_PORTS=200
 ```

19. `GRADIO_RESET_EXAMPLES_CACHE`

- **Description**: If set to "True", Gradio will delete and recreate the examples cache directory when the app starts, instead of reusing cached examples if they already exist.
- **Default**: `"False"`
- **Options**: `"True"`, `"False"`
- **Example**:
 ```sh
 export GRADIO_RESET_EXAMPLES_CACHE="True"
 ```

20. `GRADIO_CHAT_FLAGGING_MODE`

- **Description**: Controls whether users can flag messages in `gr.ChatInterface` applications. Similar to `GRADIO_FLAGGING_MODE` but specifically for chat interfaces.
- **Default**: `"never"`
- **Options**: `"never"`, `"manual"`
- **Example**:
 ```sh
 export GRADIO_CHAT_FLAGGING_MODE="manual"
 ```

21. `GRADIO_WATCH_DIRS`

- **Description**: Specifies directories to watch for file changes when running Gradio in development mode. When files in these directories change, the Gradio app will automatically reload. Multiple directories can be specified by separating them with commas. This is primarily used by the `gradio` CLI command for development workflows.
- **Default**: `""`
- **Example**:
 ```sh
 export GRADIO_WATCH_DIRS="/path/to/src,/path/to/templates"
 ```

22. `GRADIO_VIBE_MODE`

- **Description**: Enables the Vibe editor mode, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language.
When enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use with extreme caution in production environments.
- **Default**: `""`
- **Options**: Any non-empty string enables the mode
- **Example**:
 ```sh
 export GRADIO_VIBE_MODE="1"
 ```

23. `GRADIO_MCP_SERVER`

- **Description**: Enables the MCP (Model Context Protocol) server functionality in Gradio. When enabled, the Gradio app will be set up as an MCP server, and documented functions will be added as MCP tools that can be used by LLMs. This allows LLMs to interact with your Gradio app's functionality through the MCP protocol.
- **Default**: `"False"`
- **Options**: `"True"`, `"False"`
- **Example**:
 ```sh
 export GRADIO_MCP_SERVER="True"
 ```

## How to Set Environment Variables

To set environment variables in your terminal, use the `export` command followed by the variable name and its value. For example:

```sh
export GRADIO_SERVER_PORT=8000
```

If you're using a `.env` file to manage your environment variables, you can add them like this:

```sh
GRADIO_SERVER_PORT=8000
GRADIO_SERVER_NAME="localhost"
```

Then, use a tool like `dotenv` to load these variables when running your application.
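If you prefer to manage these values from Python rather than the shell, one option (a minimal standard-library sketch, not a Gradio API) is to set them in `os.environ` before Gradio reads them, i.e. before importing `gradio` or calling `launch()`:

```python
import os

# setdefault() keeps any value already exported in the shell,
# so shell configuration still takes precedence over these fallbacks.
os.environ.setdefault("GRADIO_SERVER_PORT", "8000")
os.environ.setdefault("GRADIO_SERVER_NAME", "localhost")

# Gradio reads these values as strings; convert as needed for your own code.
port = int(os.environ["GRADIO_SERVER_PORT"])
host = os.environ["GRADIO_SERVER_NAME"]
print(f"Serving on {host}:{port}")
```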