Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description | 
|---|---|---|---|
| image | string |  | Input image |
| question | string | What is shown in the image? | Question to ask about this image |
| context | string |  | Optional - previous questions and answers to be used as context for answering current question |
| caption | boolean | False | Select if you want to generate image captions instead of asking questions |
| system_prompt | string | A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. | System prompt |
| max_new_tokens | integer | 768 (Min: 1, Max: 2048) | Maximum number of new tokens to generate |
| num_beams | integer | 1 (Min: 1, Max: 10) | Number of beams for beam search |
| temperature | number | 1 (Min: 0.5, Max: 1) | Temperature for use with nucleus sampling |
| top_k | integer | 50 (Min: 1) | The number of highest probability vocabulary tokens to keep for top-k sampling |
| top_p | number | 1 (Max: 1) | The cumulative probability threshold for top-p sampling |
| repetition_penalty | number | 1 | The parameter for repetition penalty |
| length_penalty | number | 1 | The parameter for length penalty |
| do_sample | boolean | False | Whether to use sampling or not |
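A minimal sketch of a call that exercises this input schema, using the Replicate Python client. The model identifier and file path below are placeholders (this page does not name them); any field you omit falls back to the default listed above.

```python
import replicate

# Placeholder model reference: substitute the owner/name:version shown for this model.
MODEL = "owner/visual-qa-model:version-id"

output = replicate.run(
    MODEL,
    input={
        "image": open("photo.jpg", "rb"),       # required input image
        "question": "What is shown in the image?",
        "max_new_tokens": 768,                  # default; 1-2048 allowed
        "temperature": 1,                       # nucleus sampling temperature, 0.5-1
        "do_sample": False,                     # greedy decoding unless enabled
    },
)

print(output)
```

Setting `caption` to `True` switches the model from question answering to caption generation, in which case `question` can be left at its default.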
Output schema

The shape of the response you'll get when you run this model with an API.

Schema: `{'title': 'Output', 'type': 'string'}`
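Because the output is a plain string, multi-turn use relies on the `context` field: you pass the previous questions and answers back in alongside the follow-up question. The exact formatting of that context string is an assumption in this sketch, not something the schema specifies; the model reference is the same placeholder as above.

```python
import replicate

MODEL = "owner/visual-qa-model:version-id"  # placeholder reference

first = replicate.run(
    MODEL,
    input={"image": open("photo.jpg", "rb"), "question": "What is shown in the image?"},
)

# Assumed context format: a simple transcript of the prior turn.
follow_up = replicate.run(
    MODEL,
    input={
        "image": open("photo.jpg", "rb"),  # re-open the file for the second call
        "question": "What color is it?",
        "context": f"Q: What is shown in the image? A: {first}",
    },
)

print(follow_up)  # again a single string, per the output schema
```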